Rethinking Mesh Watermark: Towards Highly Robust and Adaptable Deep 3D Mesh Watermarking

Xingyu Zhu1,2,3, Guanhui Ye2, Xiapu Luo3, Xuetao Wei2,1*
1Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen 518055, China
2Department of Computer Science and Engineering, Southern University of Science and Technology, China
3Department of Computing, Hong Kong Polytechnic University, Hong Kong
[email protected], [email protected], [email protected], [email protected]

*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

The goal of 3D mesh watermarking is to embed a message in a 3D mesh imperceptibly, such that it withstands various attacks and can be reconstructed accurately from the watermarked mesh. The watermarking algorithm should withstand multiple attacks, and its complexity should not grow significantly with the mesh size. Unfortunately, previous methods are less robust against attacks and lack adaptability. In this paper, we propose DEEP3DMARK, a robust and adaptable deep 3D mesh watermarking method that leverages attention-based convolutions to embed binary messages in vertex distributions without texture assistance. Furthermore, DEEP3DMARK exploits the property that simplified meshes inherit similar relations from the original ones, where the relation is the offset vector directed from one vertex to its neighbor. By doing so, our method can be trained on simplified meshes yet remain effective on large meshes (size adaptable) and on unseen categories of meshes (geometry adaptable). Extensive experiments demonstrate that our method remains efficient and effective even when the mesh size increases 190×. Under mesh attacks, DEEP3DMARK achieves 10%∼50% higher accuracy than traditional methods, and 2× higher SNR and 8% higher accuracy than previous DNN-based methods.

Introduction

Digital watermarking is a technology used for copyright protection of multimedia, such as images, videos, point clouds, and meshes. The goal of digital watermarking is to obtain watermarked media by embedding a message in the media during the embedding phase and to reconstruct the message from the watermarked media during the reconstruction phase. However, previous 3D mesh watermarking methods pursue high capacity while ignoring robustness. The watermark should be imperceptible and robust, so that it can withstand attacks, and adaptable, so that it can be applied to arbitrary mesh sizes and geometries.

Figure 1: We train our DEEP3DMARK on simplified meshes (top) and test on varied mesh sizes and unseen geometry (middle) to show adaptation. We further test DEEP3DMARK on multiple mesh attacks (bottom).

Previous 3D mesh watermarking methods can be classified into DNN-based and traditional methods. Traditional methods focus on improving the capacity of watermarking (i.e., the number of embedded bits per vertex) while ignoring the robustness of the watermark. For example, some methods (Peng, Long, and Long 2021; Tsai 2020) embed secret messages in the least significant bits (LSBs) of vertex coordinates, but they are vulnerable to Gaussian noise.
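To make the fragility of LSB embedding concrete, here is a minimal NumPy sketch, not the exact scheme of the cited methods; the quantization step `step` is an illustrative parameter. Quantizing a coordinate and overwriting its lowest bit stores one message bit, and any perturbation larger than one quantization step flips it.

```python
import numpy as np

def lsb_embed(vertices, bits, step=1e-4):
    """Embed one 0/1 bit per coordinate in the least significant quantized digit.

    bits: integer array of 0s and 1s with the same shape as vertices."""
    q = np.round(vertices / step).astype(np.int64)  # quantize each coordinate
    q = (q & ~1) | bits                             # overwrite the lowest bit
    return q * step                                 # back to float coordinates

def lsb_extract(vertices, step=1e-4):
    """Read the embedded bit back out of each coordinate."""
    return np.round(vertices / step).astype(np.int64) & 1
```

A Gaussian perturbation with deviation comparable to `step` randomizes the recovered bits, which matches the vulnerability noted above.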
Others (Tsai and Liu 2022; Hou et al. 2023) embed secret messages in the most significant bits (MSBs) of vertex coordinates, which makes them robust against noise. However, they still cannot withstand rotation and scaling. Recent DNN-based methods show the possibility of embedding watermarks in either the vertex domain (Wang et al. 2022) or the texture domain (Yoo et al. 2022). However, these methods are either only able to extract messages from textured meshes (Yoo et al. 2022) or low in watermarked mesh quality (Wang et al. 2022). Moreover, for practical applications, watermark quality and embedding overhead should be insensitive to variations in mesh size and geometry. Previous work has yet to explore a robust and adaptable watermarking method.

In this paper, we propose DEEP3DMARK, a highly robust and adaptable deep 3D watermarking method. Compared to traditional methods, DEEP3DMARK is more robust against multiple mesh attacks even without prior knowledge of the type of attack. Compared to previous DNN-based methods, DEEP3DMARK generates higher-quality watermarked meshes and can be applied to different mesh sizes and unseen geometries. We watermark the vertex coordinates of meshes by adopting the graph attention network (GAT) (Veličković et al. 2017). To achieve robustness, we apply adversarial training with GAT-generated perturbations. To achieve adaptability, we provide the insight that meshes of different sizes within the same category share similar features. By exploiting such similarities, we can train DEEP3DMARK on simplified meshes and remain effective on meshes of increased size. Specifically, the GAT is the backbone of DEEP3DMARK: it generates watermarked meshes given the original meshes and binary messages, and reconstructs the binary messages from the watermarked meshes. DEEP3DMARK consists of 1) an encoder, 2) a decoder, 3) a message autoencoder, 4) an attack layer, and 5) a discriminator, as shown in Fig. 2. To better evaluate adaptability, we train DEEP3DMARK using simplified data from the train set under all scenarios. To prove effectiveness, DEEP3DMARK is tested on complete test data. To prove robustness, we test DEEP3DMARK under multiple mesh attacks. To prove adaptability, we test DEEP3DMARK on multiple datasets and different mesh sizes.

Figure 2: DEEP3DMARK overview. The message encoder first maps message $M$ into latent code $z$, which is fed to the watermark encoder along with the input mesh $G(V, E)$ to generate the encoded mesh $\hat{G}(\hat{V}, E)$. The attack layer generates a noised mesh $G_{adv}(V_{adv}, E)$. Given the noised mesh, the watermark decoder produces the decoded latent code $\hat{z}$, followed by the message decoder, which decodes the latent code into the decoded message $\hat{M}$. The adversarial discriminator encourages minimizing the difference between $G$ and $\hat{G}$.

In summary, our contributions are the following:
• We investigate mainstream watermarking methods and observe their low robustness.
To tackle this problem, we propose DEEP3DMARK, a highly robust deep 3D watermarking method that embeds binary messages in vertex distributions by incorporating the graph attention network (GAT) and achieves robustness against unknown mesh attacks through adversarial training.
• We achieve adaptability by exploiting the property that simplified meshes inherit similar relations from the original meshes, where the relation is an offset vector directed from a vertex to its neighbor. DEEP3DMARK can be trained on simplified meshes but remains effective on large meshes and unseen categories of meshes.
• We conduct extensive experiments on various datasets to prove DEEP3DMARK's effectiveness, robustness, and adaptability. DEEP3DMARK achieves 10%∼50% higher accuracy under attack compared to traditional methods, and 50% lower distortion and 8% higher accuracy compared to previous DNN-based methods. Our experiments also show that our method is robust against multiple unknown attacks.

Related Work

Mesh Watermarking

Early 3D mesh watermarking methods (Son et al. 2017; Al-Khafaji and Abhayaratne 2019) used Fourier and wavelet analysis to transfer meshes into the frequency domain and embed watermark bits into Fourier/wavelet coefficients. However, the time complexity of these methods grows cubically with the number of vertices. Later work (Zhou et al. 2018; Jiang et al. 2017; Tsai and Liu 2022; Hou et al. 2023) proposed embedding watermarks into the least significant bits and the most significant bits of vertex coordinates. (Hou, Kim, and Lee 2017) leveraged the layering artifacts of 3D-printed meshes for watermark embedding and reconstruction. Recently, (Yoo et al. 2022) and (Wang et al. 2022) explored the feasibility of DNNs for watermarking. (Wang et al. 2022) stacked graph residual blocks to embed and extract the watermark. (Yoo et al. 2022) embedded secret messages in the textures of meshes and then extracted the message from the rendered 2D image, but cannot reconstruct an accurate message without the help of a texture encoder. In this case, replacing the texture image can completely remove the watermark. Moreover, recent research (Dong, Kumar, and Liu 2022) shows how to detect DNN-generated images. Hence, watermarking in mesh geometry is more secure than watermarking in textures.

Figure 3: The process of generating the new feature $F^{l+1}_{v_i}$ of $v_i$. Left: the local region centered at $v_i$. Middle: within the neighborhood $v_j \in N(v_i)$, an MLP generates weights between $v_i$ and $v_j$ given their relation $D(v_i, v_j)$. Right: given vertex features $F^l = \{F^l_{v_0}, F^l_{v_1}, \ldots, F^l_{v_N}\}$ and the generated weights of the neighborhood $v_j \in N(v_i)$, compute the new feature vector $F^{l+1}_{v_i}$ as the weighted sum of the neighbor features.

Neural Networks for 3D Meshes

Existing methods for 3D data build features from faces (Xu, Dong, and Zhong 2017; Feng et al. 2019; Lian et al. 2019; Hertz et al. 2020; Hu et al. 2022; Kim and Chae 2022), edges (Simonovsky and Komodakis 2017; Veličković et al. 2017; Wang et al. 2019), and vertices (Qi et al. 2017a,b; Wu, Qi, and Fuxin 2019; Liu et al. 2019; Xu et al. 2018; Hermosilla et al. 2018; Groh, Wieschollek, and Lensch 2018). The built features were applied to downstream tasks such as classification and segmentation.
(Veličković et al. 2017; Simonovsky and Komodakis 2017) introduced an attention-based mechanism into graph convolution, where the weight of each neighbor is adjusted based on the edge information. Such graph-based convolution can be further extended to 3D meshes.

Definition

Triangle meshes can be viewed as undirected graphs $G(V, E)$. The vertex set $V \in \mathbb{R}^{N_v \times C_v}$ contains $N_v$ vertices, and each vertex has $C_v$ elements such as coordinates and normals. The edge set $E$ can be derived from the face set of the triangle mesh, where each face is a triangle formed by three vertex indices. Since changes to $E$ produce unexpected artifacts, we embed a binary message $M \in \{0, 1\}^{N_m}$ into the vertex distribution $V$. Let $V, \hat{V}, M, \hat{M}$ denote the original vertices, watermarked vertices, binary message, and reconstructed message, respectively. We model the problem by the following equations:

$$\hat{V} = E_\phi(V, M), \qquad \hat{M} = D_\theta(\hat{V}) \tag{1}$$

In Eq. (1), a parameterized encoding function $E_\phi$ generates watermarked vertices $\hat{V}$ given the original vertices $V$ and a binary message $M$. A parameterized decoding function $D_\theta$ reconstructs $\hat{M}$ from $\hat{V}$. To achieve imperceptible embedding, the encoding function should minimize the perturbation between $V$ and $\hat{V}$ via the loss

$$L_{enc}(\phi, \theta) = \mathbb{E}_{V,M}\big[\|\hat{V} - V\|_2^2\big] \tag{2}$$

To achieve precise reconstruction, we minimize

$$L_{dec}(\phi, \theta) = \mathbb{E}_{V,M}\big[\|\hat{M} - M\|_2^2\big] \tag{3}$$

Finally, we obtain the combined optimization problem:

$$\phi^*, \theta^* = \arg\min_{\phi,\theta}\big(L_{enc}(\phi, \theta) + L_{dec}(\phi, \theta)\big) \tag{4}$$

Method

We propose DEEP3DMARK, an end-to-end imperceptible watermarking method that is robust to arbitrary attacks and adaptable to different mesh sizes and geometries. To watermark a graph signal $G(V, E)$, we exploit local features in the spatial domain using a graph attention network (GAT), which is the backbone of DEEP3DMARK. We first introduce the GAT on meshes, then introduce all DEEP3DMARK modules, followed by our training details.

Graph Attention Network on Mesh

The graph attention network (GAT) is a convolution operator defined on graphs. The input to the $l$-th GAT layer is a set of vertex features $F^l = \{F^l_{v_0}, F^l_{v_1}, \ldots, F^l_{v_N}\}$, where $N$ is the number of vertices. The layer produces a new set of vertex features $F^{l+1} = \{F^{l+1}_{v_0}, F^{l+1}_{v_1}, \ldots, F^{l+1}_{v_N}\}$. For each vertex $v_i$, its new feature $F^{l+1}_{v_i}$ is computed as the averaged weighted sum of its neighbor features $F^l_{v_j}$ for all $v_j \in N(v_i)$. To increase expressive power, the weight for each neighbor $v_j$ is obtained from a learnable linear transform $W_\Theta$. Viewing 3D meshes as graphs $G(V, E)$, we define our GAT on meshes as:

$$F^{l+1}_{v_i} = \frac{1}{|N(v_i)|} \sum_{v_j \in N(v_i)} F^l_{v_j} \, W_\Theta\big(D(v_j, v_i)\big) \tag{5}$$

where $F^{l+1}_{v_i}$ is the feature vector of $v_i$ at the $(l+1)$-th layer. The neighborhood $N(v_i) = \{v_j \mid (v_j, v_i) \in E\} \cup \{v_i\}$ is defined as all points adjacent to $v_i$, plus $v_i$ itself. We use a multilayer perceptron (MLP) to model the learnable linear transform $W_\Theta$.
The input to the MLP is the relation $D(v_i, v_j)$ between the target vertex $v_i$ and its neighbor $v_j$. Figure 3 visualizes the process of our GAT.

We show that it is beneficial to learn from the relation $D(v_i, v_j)$. We define the relation $D(v_i, v_j) = \vec{v}_i - \vec{v}_j$ as the coordinate offset from $v_i$ to $v_j$. Such a relation survives the mesh simplification algorithm (Lindstrom et al. 1998, 1999). First, we simplify meshes to reduce the vertex count to 1/5 of the original, i.e., $N'_v = N_v / 5$. Then we visualize the coordinate and relation distributions of both the original and the simplified meshes in Figure 4. Figure 4 shows that the coordinate distributions differ between original and simplified meshes, while the relation distributions of the simplified meshes remain included within those of the originals. Based on this insight, our method is trained on simplified meshes and adapts to meshes of increased size.

Figure 4: t-SNE (Van der Maaten and Hinton 2008) visualization of the distributions of original and simplified meshes. (a) Coordinates distribution: there are distribution shifts between the original and decimated coordinates. (b) Relation distribution: the distributions of the decimated relations $D(v_i, v_j)$ are included in the distributions of the original ones.

Deep3DMark

Our architecture (Figure 2) consists of five parameterized learnable components: 1) a message autoencoder that maps a binary message $M$ to a latent code $z$ and decodes $z$ back to $M$; 2) an encoder $\mathbf{E}$ modeling the function $E_\phi$ that generates watermarked vertices $\hat{V}$ given $V$ and $z$; 3) an attack layer that perturbs $\hat{V}$ to increase robustness, acting as data augmentation; 4) a decoder $\mathbf{D}$ modeling $D_\theta$ that reconstructs the binary message from $\hat{V}$; and 5) a discriminator $\mathbf{A}$ that encourages $\hat{V}$ to be indistinguishable from $V$.

The encoder $\mathbf{E}$ first applies convolutions to the input $V$ to form an intermediate representation. We incorporate the message latent code $z$ so that the encoder learns to embed parts of it at any spatial location of $V$: we replicate the latent code, concatenate it to the intermediate representations, and apply further convolutions to transform the concatenated features into the watermarked vertices $\hat{V}$.

The attack layer perturbs the generated $\hat{V}$. The perturbations cover several mesh attacks: 1) Gaussian noise with mean $\mu$ and deviation $\sigma$; 2) random rotation with rotation center $(x, y, z)$ and degree $\alpha$; 3) translation; and 4) scaling with ratio $s$. Our ablation study shows that the attack layer effectively increases robustness against multiple attacks.

The decoder $\mathbf{D}$ first applies several convolutions to generate an intermediate representation of $\hat{V}$. It then uses global average pooling followed by an MLP layer to produce a vector of the same size as the latent code $z$. The global average pooling layer ensures that our method aggregates information from all vertices. The adversarial discriminator $\mathbf{A}$ shares a similar structure with the decoder, except that its final MLP layer transforms the aggregated vector into a binary classification indicating whether the given $\hat{V}$ was generated by the encoder $\mathbf{E}$.

According to Shannon's capacity theory (Shannon 1948), redundancy is necessary to achieve robustness. The message autoencoder increases the robustness of our system by injecting redundancy. Given a binary message $M$ of length $N_m$, the message encoder maps it into a latent code $z$ of length $N_z > N_m$, from which $M$ can be recovered by a message decoder. We train the autoencoder such that the decoder can recover $M$ from a noised latent code $\hat{z}$. We choose NECST (Choi et al. 2019), a learnable channel coding method, as our message autoencoder. The message autoencoder is trained independently from the rest of the watermarking model.
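To make Eq. (5) concrete, the following is a minimal PyTorch sketch of the relation-conditioned convolution. The hidden width of the weight-generating MLP and the assumption that the edge list already contains self-loops are our choices, not details from the paper.

```python
import torch
import torch.nn as nn

class RelationGATLayer(nn.Module):
    """One layer of Eq. (5): each neighbor feature is transformed by a weight
    matrix generated from the relation D(vj, vi), then results are averaged."""

    def __init__(self, c_in, c_out, hidden=64):
        super().__init__()
        self.c_in, self.c_out = c_in, c_out
        # MLP W_Theta: 3-D relation vector -> (c_in x c_out) weight matrix
        self.weight_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, c_in * c_out))

    def forward(self, feats, coords, edges):
        # feats: (N, c_in) vertex features; coords: (N, 3) positions;
        # edges: (E, 2) pairs (vj, vi), assumed to include self-loops.
        vj, vi = edges[:, 0], edges[:, 1]
        rel = coords[vj] - coords[vi]                          # D(vj, vi)
        w = self.weight_mlp(rel).view(-1, self.c_in, self.c_out)
        msg = torch.bmm(feats[vj].unsqueeze(1), w).squeeze(1)  # F_vj W(D)
        out = feats.new_zeros(feats.size(0), self.c_out)
        out.index_add_(0, vi, msg)                             # sum over N(vi)
        deg = torch.bincount(vi, minlength=feats.size(0)).clamp(min=1)
        return out / deg.unsqueeze(1)                          # average, Eq. (5)
```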
Training and Losses

We achieve the objective in Eq. (4) using three losses: encoding loss $L_{enc}$, reconstruction loss $L_{dec}$, and discriminative loss $L_{dis}$. Formally:

$$\phi^*, \theta^* = \arg\min_{\phi,\theta}\big(\lambda_{enc} L_{enc}(\phi, \theta) + \lambda_{dec} L_{dec}(\phi, \theta) + \lambda_{dis} L_{dis}(\phi, \theta)\big) \tag{6}$$

where $\lambda_{enc}, \lambda_{dec}, \lambda_{dis}$ are weight factors. Both $L_{enc}$ and $L_{dis}$ encourage the generated $\hat{V}$ to be indistinguishable from $V$. For $L_{enc}$, we use both the L2 norm and the infinity norm of the geometry difference to penalize distortion:

$$L_{enc} = \frac{1}{N_v} \sum_{i}^{N_v} \big(V[i] - \hat{V}[i]\big)^2 + \max_i \big\{V[i] - \hat{V}[i]\big\} \tag{7}$$

For $L_{dis}$, we use part of the sigmoid cross-entropy loss:

$$L_{dis} = \log\big(1 - \sigma(A(\hat{V}))\big) \tag{8}$$

We apply the standard sigmoid cross-entropy loss to encourage precise message reconstruction:

$$L_{dec} = -\frac{1}{N_m} \sum_{i}^{N_m} \Big(M[i] \cdot \log \sigma(\hat{M}[i]) + (1 - M[i]) \cdot \log\big(1 - \sigma(\hat{M}[i])\big)\Big) \tag{9}$$

The final message bits are computed as:

$$M_{final} = \mathrm{clamp}\big(\mathrm{sign}(\hat{M} - 0.5), 0, 1\big) \tag{10}$$

| Algorithm | L1d | Hausdorff | SNR | w/o attack | Gauss | Trans | Rot | Scale | Crop | Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| JIANG2017 | 0.1051 | 1.3950 | 39.50 | 80.59 | 76.98 | 48.2 | 80.58 | 49.97 | 49.68 | 81.24 |
| SPECTRAL | 0.0090 | 0.0125 | 44.99 | 87.48 | 54.31 | 50.41 | 50.40 | 54.29 | 56.01 | 70.350 |
| PENG2021 | 0.0182 | 0.0158 | 46.70 | 97.98 | 50.24 | 58.30 | 50.55 | 50.25 | 50.30 | 30.470 |
| PENG2022 | 0.0073 | 0.0375 | 36.94 | 80.89 | 79.28 | 80.89 | 80.89 | 80.89 | 54.12 | 3.240 |
| TSAI2020 | 0.0101 | 0.0145 | 40.90 | 96.41 | 50.03 | 49.99 | 49.75 | 50.02 | 50.10 | 5.989 |
| TSAI2022 | 0 | 0 | inf | 84.49 | 84.48 | 84.48 | 64.32 | 61.76 | 50.51 | 3.079 |
| HOU2023 | 0.0194 | 0.0268 | 45.10 | 90.84 | 90.84 | 69.22 | 87.92 | 49.55 | 50.21 | 2.090 |
| WANG2022 | 0.1337 | 0.1856 | 15.04 | 97.50 | 90.84 | 90.22 | 90.92 | 90.55 | 70.21 | 0.010 |
| DEEP3DMARK (Ours) | 0.0338 | 0.0617 | 30.84 | 98.17 | 91.06 | 98.17 | 96.84 | 98.17 | 78.90 | 0.0089 |

Table 1: Comparison of watermarked mesh quality (geometry difference: L1d, Hausdorff, SNR), reconstruction accuracy without attack (w/o attack, %), and robustness (%) under Gaussian noise (Gauss), translation (Trans), rotation (Rot), scaling (Scale), and cropping (Crop), together with runtime.

Experiment

Experiment Setup

Training Settings. Our experiments are conducted on Ubuntu 18.04 with 503 GB RAM and five Nvidia RTX 3090 GPUs. DEEP3DMARK uses GATs to build $\mathbf{E}$, $\mathbf{D}$, and $\mathbf{A}$, with channel size 64 throughout. At the first layer, we take the coordinates $(x, y, z)$ as the point features, i.e., $C_v = 3$. Both DEEP3DMARK and the baselines are evaluated with message length $N_m = 8$. During training, we set $\lambda_{enc} = 2$, $\lambda_{dec} = 1$, $\lambda_{dis} = 0.001$ under the 8-bit message setting, and $\mu = 0$, $\sigma = 0.001$, $\alpha \in [0, \pi)$, $s \in [0.1, 1)$.

Baselines. We adopt nine state-of-the-art watermarking algorithms as baselines. JIANG2017 (Jiang et al. 2017), PENG2022 (Peng, Liao, and Long 2022), TSAI2020 (Tsai 2020), TSAI2022 (Tsai and Liu 2022), and HOU2023 (Hou et al. 2023) are encrypted-domain watermarking algorithms, whose geometry difference can only be evaluated after model decryption; we evaluate the geometry difference between the original mesh and the decrypted watermarked mesh. PENG2021 (Peng, Long, and Long 2021), SPECTRAL (Al-Khafaji and Abhayaratne 2019), and WANG2022 (Wang et al. 2022) are plaintext-domain watermarking algorithms, whose geometry difference can be evaluated directly between the original and watermarked mesh. We also compare with YOO2022 (Yoo et al. 2022) in geometry difference and accuracy.

Metrics. To evaluate extracted-message accuracy, we use average bit accuracy. To evaluate geometry difference, we use the Hausdorff distance, the L1 norm of the vertex difference (L1d), and the signal-to-noise ratio (SNR).
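Returning to the losses of Eqs. (6)-(10), a minimal PyTorch sketch of how they combine in one training step is shown below; the weight values follow the Training Settings above, and reading the infinity-norm term in Eq. (7) as the maximum absolute difference is our interpretation.

```python
import torch
import torch.nn.functional as F

def watermark_loss(V, V_hat, M, M_hat_logits, disc_logits,
                   lam_enc=2.0, lam_dec=1.0, lam_dis=0.001):
    """Weighted objective of Eq. (6) for one training step."""
    diff = V_hat - V
    l_enc = diff.pow(2).mean() + diff.abs().max()                 # Eq. (7)
    l_dec = F.binary_cross_entropy_with_logits(M_hat_logits, M)   # Eq. (9)
    # Eq. (8) as written in the paper, with a small epsilon for stability
    l_dis = torch.log(1 - torch.sigmoid(disc_logits) + 1e-8).mean()
    return lam_enc * l_enc + lam_dec * l_dec + lam_dis * l_dis

def final_bits(M_hat):
    """Eq. (10): threshold the decoded message at 0.5."""
    return torch.clamp(torch.sign(M_hat - 0.5), 0.0, 1.0)
```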
Dataset. DEEP3DMARK is trained on a simplified train set derived from ModelNet40 (Wu et al. 2015) and then tested on the entire test set of ModelNet40 as well as on other datasets: ShapeNet (Chang et al. 2015), GraspNet (Fang et al. 2020), ScanNet (Dai et al. 2017), and Hands (Romero, Tzionas, and Black 2022). For all datasets, we normalize the vertex coordinates $(x, y, z)$ to $[-1, 1]$ before feeding meshes into the network unless explicitly mentioned. We obtain simplified data with CGAL (The CGAL Project 2022), which performs edge-collapse or half-edge-collapse algorithms to reduce the number of triangles by merging vertices. We generate two train sets, m500 and m2500, with $N_v = 500$ and $N_v = 2500$ vertices per mesh, respectively. For m2500, we manually filter out meshes whose $N_v$ is originally less than 2500 and those with low quality after simplification; we perform the same process for m500. As a result, we get 3508 train meshes and 879 test meshes for m2500, and 1147 train meshes and 337 test meshes for m500. The original ModelNet has 9843 training and 2468 testing meshes. We train two replicas of DEEP3DMARK on m500 and m2500, respectively. Both are further tested on the ModelNet test set to evaluate size adaptability. To evaluate effectiveness under geometry variations, the two replicas are also tested on ShapeNet, GraspNet, ScanNet, and Hands. ShapeNet contains categories absent from ModelNet, such as birdhouse, camera, and clock. ScanNet is a dataset of scanned and reconstructed real-world scenes. Hands contains meshes of human hands.

Experiment on Robustness

Settings. Our first experiment aims to (1) show that traditional methods struggle to achieve full robustness even with an 8-bit message length, and (2) show that DEEP3DMARK is robust against Gaussian, rotation, scaling, translation, and cropping attacks while maintaining relatively high quality. DEEP3DMARK is trained on the train set of m2500; DEEP3DMARK and the baselines are then evaluated on the test set of m2500. For a fair comparison, the watermarked meshes are rescaled back to their original coordinates before any attack is applied. We choose Gaussian noise ($\sigma = 0.1$), translation (random translation vector in $[0, 1000]^3$), rotation (origin as the rotation center, $\alpha \in [0, \pi/2]$), scaling ($s \in [0.1, 1)$), and cropping (cropping ratio $c = 0.1$).

Results. Table 1 shows that although traditional methods attain relatively high watermarked mesh quality, they are vulnerable to multiple attacks. PENG2021 embeds secret messages in the least significant bits of vertex coordinates and is thus vulnerable to Gaussian attacks. TSAI2022 embeds secret messages in the most significant bits of vertex coordinates, but still cannot withstand rotation and scaling attacks. Compared to traditional methods, DEEP3DMARK is robust against arbitrary attacks and achieves 10%∼50% higher accuracy under attack. Compared to DNN-based methods, DEEP3DMARK achieves 1%∼8% higher accuracy and 2× the SNR. Table 2 shows that we achieve geometry difference and accuracy similar to YOO2022.
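For reference, the attack settings listed above can be simulated directly on the vertex array. The NumPy sketch below uses a single-axis rotation and a fixed seed for brevity; both are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def attack(vertices, kind):
    """Apply one of the paper's attack settings to an (N, 3) vertex array."""
    V = vertices.copy()
    if kind == "gauss":                       # Gaussian noise, sigma = 0.1
        V += rng.normal(0.0, 0.1, V.shape)
    elif kind == "translate":                 # random translation in [0, 1000]^3
        V += rng.uniform(0.0, 1000.0, size=3)
    elif kind == "scale":                     # uniform scaling, s in [0.1, 1)
        V *= rng.uniform(0.1, 1.0)
    elif kind == "rotate":                    # alpha in [0, pi/2], about one axis
        a = rng.uniform(0.0, np.pi / 2)
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
        V = V @ R.T
    elif kind == "crop":                      # drop a fraction c = 0.1 of vertices
        keep = rng.permutation(len(V))[int(0.1 * len(V)):]
        V = V[keep]                           # the face list would need updating too
    return V
```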
We provide a visual quality example in Fig. 5 to address the concern about the SNR drop relative to traditional methods.

Figure 5: Results showing the imperceptible watermarking: (a) the L1 norm of the vertex difference (L1d), (b) the original mesh, and (c) the watermarked mesh produced by DEEP3DMARK (ours).

| Algorithm | L1 Vertex Normal | Bit Accuracy (%) |
|---|---|---|
| YOO2022 | 0.1041 | 0.9362 |
| DEEP3DMARK | 0.1143 | 0.9403 |

Table 2: Comparison with YOO2022.

Experiment on Unknown Distortions

Settings. Our second experiment shows that DEEP3DMARK is robust against unknown distortions, since a practical watermarking method must withstand a wide range of distortions. We choose Gaussian noise, cropping, reordering, ARAP (Sorkine and Alexa 2007), implicit Laplacian smoothing (Smooth) (Desbrun et al. 2023), and Draco (Google 2017) compression. For the ARAP attack, we randomly select $[0, 10]$ handle points and move all handle points along a vector of length 0.1. For implicit Laplacian smoothing, we increase the distortion strength by increasing the parameter $\lambda_I$. For Draco compression, we increase the distortion strength by decreasing the quantization bits $N_q$.

Results. Table 3 shows the bit accuracy of our model under these additional distortions. Overall, our model shows full robustness against these unknown distortions.

| Distortion | Bit Accuracy (%) |
|---|---|
| No distortion | 98.17 |
| Gaussian noise ($\sigma = 0.005$) | 98.06 |
| Gaussian noise ($\sigma = 0.01$) | 97.11 |
| Gaussian noise ($\sigma = 0.02$) | 90.47 |
| Cropping ($c = 0.1$) | 78.41 |
| Cropping ($c = 0.5$) | 75.04 |
| Cropping ($c = 0.9$) | 65.81 |
| Implicit Laplacian smooth ($\lambda_I = 1.0$) | 93.57 |
| Implicit Laplacian smooth ($\lambda_I = 5.0$) | 89.25 |
| Implicit Laplacian smooth ($\lambda_I = 10$) | 80.29 |
| Draco compression ($N_q = 15$) | 97.32 |
| Draco compression ($N_q = 10$) | 96.98 |
| Draco compression ($N_q = 5$) | 88.60 |
| Reorder | 98.17 |
| ARAP | 96.42 |

Table 3: Robustness under unknown attacks, where Gaussian noise is tested with out-of-domain parameters.

Experiment on Adaptability

Settings. Our third experiment shows that our method generalizes to different mesh sizes and unseen geometry. We train DEEP3DMARK on m500 and m2500 to obtain two replicas. Both are evaluated on the original test sets of ModelNet40, ShapeNet, ScanNet, GraspNet, and Hands. The data distributions of ShapeNet, ScanNet, GraspNet, and Hands are unseen to both replicas during training.

Results. Figure 6 shows results for the DEEP3DMARK trained on m2500. The top row shows the cover meshes, where (a-d) are simplified from the original mesh (e); the bottom row shows the watermarked meshes.

| Metric | Avg | birdhouse | camera | clock | spigot | knife | loudspeaker | mug | pistol | printer |
|---|---|---|---|---|---|---|---|---|---|---|
| m500 Hausdorff | 0.0758 | 0.0778 | 0.0723 | 0.0754 | 0.0762 | 0.0747 | 0.0690 | 0.0882 | 0.0809 | 0.0794 |
| m500 L1d | 0.0308 | 0.0349 | 0.0339 | 0.0476 | 0.0364 | 0.0355 | 0.0421 | 0.0313 | 0.0380 | 0.0333 |
| m500 SNR | 26.54 | 27.91 | 27.99 | 26.08 | 25.79 | 25.22 | 26.58 | 26.66 | 27.67 | 26.18 |
| m500 Acc | 0.8863 | 0.8698 | 0.9347 | 0.9220 | 0.7943 | 0.7160 | 0.9221 | 0.8615 | 0.8969 | 0.9390 |
| m2500 Hausdorff | 0.0549 | 0.0680 | 0.0568 | 0.0570 | 0.0474 | 0.0493 | 0.0620 | 0.0527 | 0.0562 | 0.0547 |
| m2500 L1d | 0.0248 | 0.0245 | 0.0302 | 0.0278 | 0.0281 | 0.0267 | 0.0293 | 0.0266 | 0.0254 | 0.0296 |
| m2500 SNR | 31.01 | 29.92 | 27.55 | 31.11 | 27.14 | 30.30 | 27.02 | 30.68 | 30.76 | 31.19 |
| m2500 Acc | 0.9348 | 0.9554 | 0.9800 | 0.9489 | 0.9358 | 0.8773 | 0.9682 | 0.9526 | 0.9324 | 0.9623 |

Table 4: Geometry adaptability on the ShapeNet dataset. m500 and m2500 are trained on the simplified ModelNet dataset with $N_v = 500$ and $N_v = 2500$, respectively. Here, we list the results of nine categories.
Table 5 shows statistical results under size variations; we evaluate our method on meshes with $N_v \le 100000$.

| Metric | (0, 20000) | [20000, 40000) | [40000, 60000) | [60000, 80000) | [80000, 100000) |
|---|---|---|---|---|---|
| m500 Hausdorff | 0.0721 | 0.0609 | 0.0600 | 0.0626 | 0.0666 |
| m500 L1d | 0.0408 | 0.0344 | 0.0348 | 0.0355 | 0.0304 |
| m500 SNR | 26.65 | 27.43 | 26.61 | 26.552 | 26.91 |
| m500 Acc | 0.9183 | 0.9052 | 0.8570 | 0.8717 | 0.8181 |
| m2500 Hausdorff | 0.0550 | 0.0473 | 0.0466 | 0.0502 | 0.0488 |
| m2500 L1d | 0.0281 | 0.0225 | 0.0232 | 0.0232 | 0.0222 |
| m2500 SNR | 29.83 | 31.21 | 30.18 | 30.33 | 30.02 |
| m2500 Acc | 0.9462 | 0.9034 | 0.9183 | 0.9123 | 0.8708 |

Table 5: Size adaptability on the ModelNet dataset with the number of vertices varied over $N_v \in (0, 100000]$. m500 and m2500 are trained on the simplified ModelNet dataset with $N_v = 500$ and $N_v = 2500$, respectively.

| Metric | GraspNet | Hands | ScanNet |
|---|---|---|---|
| m500 Hausdorff | 0.0817 | 0.0753 | 0.0970 |
| m500 L1d | 0.0365 | 0.0337 | 0.0378 |
| m500 SNR | 28.80 | 28.91 | 27.07 |
| m500 Acc | 0.9588 | 0.9288 | 0.8593 |
| m2500 Hausdorff | 0.0519 | 0.0515 | 0.0527 |
| m2500 L1d | 0.0254 | 0.0277 | 0.0237 |
| m2500 SNR | 30.62 | 30.73 | 30.93 |
| m2500 Acc | 0.9673 | 0.9584 | 0.9929 |

Table 6: Geometry adaptability on other datasets.

The DEEP3DMARK trained on m2500 remains effective when the mesh size is increased 40×. Compared to the replica trained on m2500, the one trained on m500 achieves lower accuracy and introduces more distortion; nonetheless, it still achieves 81.81% accuracy when the mesh size is increased 190×. Table 4 shows results under geometry variations on ShapeNet, where the DEEP3DMARK trained on m2500 achieves an average 93.48% bit accuracy while introducing only a 0.0248 L1 norm of vertex difference and a 0.0549 Hausdorff difference. These results demonstrate that simplified meshes inherit the relation $D(v_i, v_j)$ distribution from the original meshes. However, as the size of the training meshes decreases, the adaptability of the GAT decreases as well.

Figure 6: Results of the DEEP3DMARK trained on m2500 under different mesh sizes. Top (a-e): original mesh $G$ with $N_v$ varying from 500 to 35947. Bottom (f-j): the corresponding watermarked mesh $\hat{G}$ produced by DEEP3DMARK. (a-d) are simplified versions of the original mesh (e).

Conclusion

3D watermarking is a key step toward copyright protection. This paper has introduced DEEP3DMARK, which utilizes graph attention networks to embed binary messages in vertex distributions without texture assistance. Our approach takes advantage of the property that simplified meshes inherit similar relations from the original ones, specifically the offset vector between adjacent vertices. This enables training on simplified meshes while remaining effective on larger and previously unseen categories of meshes (adaptability), resulting in fewer distortions and 10%∼50% higher bit accuracy than previous methods under attack. Moreover, extensive experiments have shown that DEEP3DMARK is robust against unknown mesh attacks, such as smoothing, ARAP, and compression.

Acknowledgments

This work was supported in part by the National Key R&D Program of China under Grant 2021YFF0900300, in part by the Key Talent Programs of Guangdong Province under Grant 2021QN02X166, and in part by the Research Institute of Trustworthy Autonomous Systems under Grant C211153201. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding parties.

References

Al-Khafaji, H.; and Abhayaratne, C. 2019. Graph spectral domain blind watermarking.
In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2492–2496. IEEE.
Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; Xiao, J.; Yi, L.; and Yu, F. 2015. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago.
Choi, K.; Tatwawadi, K.; Grover, A.; Weissman, T.; and Ermon, S. 2019. Neural joint source-channel coding. In International Conference on Machine Learning, 1182–1192. PMLR.
Dai, A.; Chang, A. X.; Savva, M.; Halber, M.; Funkhouser, T.; and Nießner, M. 2017. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. In Proc. Computer Vision and Pattern Recognition (CVPR). IEEE.
Desbrun, M.; Meyer, M.; Schröder, P.; and Barr, A. H. 2023. Implicit fairing of irregular meshes using diffusion and curvature flow. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2, 149–156.
Dong, C.; Kumar, A.; and Liu, E. 2022. Think Twice Before Detecting GAN-generated Fake Images from their Spectral Domain Imprints. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7865–7874.
Fang, H.-S.; Wang, C.; Gou, M.; and Lu, C. 2020. GraspNet-1Billion: A large-scale benchmark for general object grasping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11444–11453.
Feng, Y.; Feng, Y.; You, H.; Zhao, X.; and Gao, Y. 2019. MeshNet: Mesh neural network for 3D shape representation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 8279–8286.
Google. 2017. Draco 3D Data Compression. https://google.github.io/draco/. Accessed: 2023-12-25.
Groh, F.; Wieschollek, P.; and Lensch, H. 2018. Flex-convolution. In Asian Conference on Computer Vision, 105–122. Springer.
Hermosilla, P.; Ritschel, T.; Vázquez, P.-P.; Vinacua, À.; and Ropinski, T. 2018. Monte Carlo convolution for learning on non-uniformly sampled point clouds. ACM Transactions on Graphics (TOG), 37(6): 1–12.
Hertz, A.; Hanocka, R.; Giryes, R.; and Cohen-Or, D. 2020. Deep geometric texture synthesis. arXiv preprint arXiv:2007.00074.
Hou, G.; Ou, B.; Long, M.; and Peng, F. 2023. Separable Reversible Data Hiding for Encrypted 3D Mesh Models Based on Octree Subdivision and Multi-MSB Prediction. IEEE Transactions on Multimedia.
Hou, J.-U.; Kim, D.-G.; and Lee, H.-K. 2017. Blind 3D mesh watermarking for 3D printed model by analyzing layering artifact. IEEE Transactions on Information Forensics and Security, 12(11): 2712–2725.
Hu, S.-M.; Liu, Z.-N.; Guo, M.-H.; Cai, J.-X.; Huang, J.; Mu, T.-J.; and Martin, R. R. 2022. Subdivision-based mesh convolution networks. ACM Transactions on Graphics (TOG), 41(3): 1–16.
Jiang, R.; Zhou, H.; Zhang, W.; and Yu, N. 2017. Reversible data hiding in encrypted three-dimensional mesh models. IEEE Transactions on Multimedia, 20(1): 55–67.
Kim, S.; and Chae, D.-K. 2022. ExMeshCNN: An Explainable Convolutional Neural Network Architecture for 3D Shape Analysis. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 795–803.
Lian, C.; Wang, L.; Wu, T.-H.; Liu, M.; Durán, F.; Ko, C.-C.; and Shen, D. 2019. MeshSNet: Deep multi-scale mesh feature learning for end-to-end tooth labeling on 3D dental surfaces. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 837–845. Springer.
Lindstrom, P.; et al. 1998. Fast and memory efficient polygonal simplification. In IEEE Visualization, 279–286. IEEE.
Lindstrom, P.; et al. 1999. Evaluation of memoryless simplification. IEEE Transactions on Visualization and Computer Graphics, 5(2): 98–115.
Liu, Y.; Fan, B.; Xiang, S.; and Pan, C. 2019. Relation-shape convolutional neural network for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8895–8904.
Peng, F.; Liao, T.; and Long, M. 2022. A semi-fragile reversible watermarking for authenticating 3D models in dual domains based on variable direction double modulation. IEEE Transactions on Circuits and Systems for Video Technology, 32(12): 8394–8408.
Peng, F.; Long, B.; and Long, M. 2021. A general region nesting-based semi-fragile reversible watermarking for authenticating 3D mesh models. IEEE Transactions on Circuits and Systems for Video Technology, 31(11): 4538–4553.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 652–660.
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30.
Romero, J.; Tzionas, D.; and Black, M. J. 2022. Embodied hands: Modeling and capturing hands and bodies together. arXiv preprint arXiv:2201.02610.
Shannon, C. E. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3): 379–423.
Simonovsky, M.; and Komodakis, N. 2017. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3693–3702.
Son, J.; Kim, D.; Choi, H.-Y.; Jang, H.-U.; and Choi, S. 2017. Perceptual 3D watermarking using mesh saliency. In International Conference on Information Science and Applications, 315–322. Springer.
Sorkine, O.; and Alexa, M. 2007. As-rigid-as-possible surface modeling. In Symposium on Geometry Processing, volume 4, 109–116. Citeseer.
The CGAL Project. 2022. CGAL User and Reference Manual. CGAL Editorial Board, 5.5.1 edition.
Tsai, Y.-Y. 2020. Separable reversible data hiding for encrypted three-dimensional models based on spatial subdivision and space encoding. IEEE Transactions on Multimedia, 23: 2286–2296.
Tsai, Y.-Y.; and Liu, H.-L. 2022. Integrating Coordinate Transformation and Random Sampling Into High-Capacity Reversible Data Hiding in Encrypted Polygonal Models. IEEE Transactions on Dependable and Secure Computing.
Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Wang, F.; Zhou, H.; Fang, H.; Zhang, W.; and Yu, N. 2022. Deep 3D mesh watermarking with self-adaptive robustness. Cybersecurity, 5(1): 1–14.
Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5): 1–12.
Wu, W.; Qi, Z.; and Fuxin, L. 2019. PointConv: Deep convolutional networks on 3D point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9621–9630.
Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1912–1920.
Xu, H.; Dong, M.; and Zhong, Z. 2017. Directionally convolutional networks for 3D shape segmentation. In Proceedings of the IEEE International Conference on Computer Vision, 2698–2707.
Xu, Y.; Fan, T.; Xu, M.; Zeng, L.; and Qiao, Y. 2018. SpiderCNN: Deep learning on point sets with parameterized convolutional filters. In Proceedings of the European Conference on Computer Vision (ECCV), 87–102.
Yoo, I.; Chang, H.; Luo, X.; Stava, O.; Liu, C.; Milanfar, P.; and Yang, F. 2022. Deep 3D-to-2D Watermarking: Embedding Messages in 3D Meshes and Extracting Them from 2D Renderings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10031–10040.
Zhou, H.; Chen, K.; Zhang, W.; Yao, Y.; and Yu, N. 2018. Distortion design for secure adaptive 3-D mesh steganography. IEEE Transactions on Multimedia, 21(6): 1384–1398.
Boosting Few-Shot Learning via Attentive Feature Regularization

Xingyu Zhu1,2, Shuo Wang1,2*, Jinda Lu1,2, Yanbin Hao1,2, Haifeng Liu3, Xiangnan He1,2
1Department of Electronic Engineering and Information Science, University of Science and Technology of China; 2MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China; 3Brain-Inspired Technology Co., Ltd.
{xingyuzhu, lujd}@mail.ustc.edu.cn, {shuowang.edu, xiangnanhe}@gmail.com, [email protected], [email protected]

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Few-shot learning (FSL) based on manifold regularization aims to improve the recognition of novel objects with limited training samples by mixing two samples from different categories with a blending factor. However, this mixing operation weakens the feature representation, owing to the linear interpolation and the neglect of the importance of specific channels. To solve these issues, this paper proposes attentive feature regularization (AFR), which aims to improve feature representativeness and discriminability. In our approach, we first calculate the relations between the semantic labels of different categories to pick out the related features used for regularization. Then, we design two attention-based calculations at the instance and channel levels. These calculations enable the regularization procedure to focus on two crucial aspects: feature complementarity through adaptive interpolation over related categories, and emphasis on specific feature channels. Finally, we combine these regularization strategies to significantly improve classifier performance. Empirical studies on several popular FSL benchmarks demonstrate the effectiveness of AFR, which improves the recognition accuracy of novel categories without retraining any feature extractor, especially in the 1-shot setting. Furthermore, the proposed AFR can seamlessly integrate into other FSL methods to improve classification performance.

Introduction

In recent years, convolutional neural networks (CNNs) have demonstrated remarkable capabilities on various visual classification tasks, particularly when provided with sufficient training data. However, collecting and labeling such datasets is a time-consuming and expensive procedure. As a remedy, few-shot learning (FSL) has been proposed to classify a novel object from a scarcity of labeled data (Ye et al. 2020; Peng et al. 2019; Wang et al. 2020).

The conventional solution to FSL uses a CNN trained on the base categories to directly extract global features of novel objects (Hariharan and Girshick 2017; Wang et al. 2018). It aims to yield a transferable feature representation (textures and structures) to describe a novel category. Subsequently, these features are employed to train a classifier for recognizing novel objects. Manifold regularization (Rodríguez et al. 2020; Deutsch et al. 2017; Velazquez et al. 2022) is a popular strategy to improve classification performance. These methods mix two samples (features) and their labels from randomly selected categories to generate a regularized feature.

Figure 1: The analysis of manifold regularization methods.
However, the mixing operation, with its randomness, easily weakens the representation ability (Guo, Mao, and Zhang 2019; Chou et al. 2020). This is primarily due to the direct interpolation, which neither considers the complementarity of the two features nor attends to specific feature channels (Hou, Liu, and Wang 2017; Shi, Wu, and Wang 2023; Luo, Xu, and Xu 2022; Zhu et al. 2023), which in turn distorts the distribution of prediction results. As illustrated in Figure 1, given a novel sample of "Retriever" and another randomly picked sample "Linnet", manifold regularization methods, e.g., MixUp (Zhang et al. 2018), CutMix (Yun et al. 2019), and PatchMix (Liu et al. 2021), interpolate their images and labels to train the classifier to predict both categories. Evidently, "Retriever" and "Linnet" are unrelated both visually and semantically. Consequently, the regularized features deviate from the novel feature "Retriever" (as indicated by the five yellow squares in the lower-left corner of Figure 1). This deviation increases the prediction score of "Linnet" and results in misclassification, limiting the classification results.

To address this issue arising from manifold regularization, we first incorporate semantics to select categories related to the novel categories from the base set. This idea aligns with prior work (Wang et al. 2020; Peng et al. 2019; Wang et al. 2022), where semantic knowledge not only strengthens visual features but also helps the classifier capture discriminative patterns. However, such methodologies require more prior semantic information during training, which increases model size and training time. Different from previous approaches, our method relies solely on semantic labels to select relevant base categories during the data preprocessing stage. This purposeful selection, in contrast to the random selection of manifold regularization, enables the classifier to concentrate more effectively on the novel content during training. Besides, we also exploit the feature complementarity of similar categories and the discriminability of specific feature channels, both of which provide distinctive patterns for classification (Liu et al. 2019; Shi, Wu, and Wang 2023).

Building on the above analysis, we propose two attention-based calculations, at the instance and channel levels, respectively. The instance attention adaptively leverages the collaborative components of the selected base categories, guided by their relevance, to heighten the representativeness of the novel feature. Specifically, we first analyze the semantic similarity of the selected base categories related to the novel category, then calculate attention scores between the selected base samples and the given novel sample to measure their importance. These attention scores are then used to reweight the selected samples. Instance attention exploits the collaboration between related categories through adaptive interpolation, avoiding the irrelevant components of the base categories and consequently improving the representation of the novel samples. For the channel calculations, we aim to emphasize the specific feature channels that signify discriminative patterns.
Specifically, we calculate scores from the regularized features output by the instance attention and use them as channel importance weights. These weights are applied to each channel of the features during regularization, helping the classifier identify the representative content of the novel samples. This channel attention mechanism allows more efficient and focused exploration of novel-category information within the feature channels, which enhances the discriminability of the final feature representation. The proposed procedures, defined as Attentive Feature Regularization (AFR), operate entirely on features and can easily be applied on top of existing pre-trained feature extractors. The main contributions of our method are as follows.
1. We propose instance-level attention with semantic selection to improve feature representativeness, leveraging the complementarity of the related base categories to enhance the novel categories.
2. We design channel-level attention to enhance feature discriminability by measuring the importance of different channels, which helps the classifier focus on the representative content of the novel sample.
3. Our method achieves state-of-the-art performance on three popular FSL datasets and can also improve the classifier performance of other FSL methods without retraining feature extractors.

Related Work

In this section, we first briefly introduce common solutions for FSL tasks and the corresponding regularization strategies. Subsequently, we survey applications of recent attention-based methods. Finally, we enumerate the differences between our method and related methods.

Knowledge Transfer in Few-Shot Learning

Recent advances in few-shot learning (FSL) have demonstrated promising performance by transferring knowledge from base categories to novel categories (Li et al. 2020, 2019; Wang et al. 2022; Lu et al. 2023). These methods leverage semantic knowledge to provide additional information for refining visual features or enriching supervision during classifier training. For example, the method in (Li et al. 2019) clusters hierarchical textual labels from both the base and novel categories to improve feature extractor training. Wang et al. proposed a multi-directional knowledge transfer (MDKT) method that integrates visual and textual features through a bidirectional knowledge connection. The work in (Lu et al. 2023) employs semantics to explore category correlations and hallucinate additional training samples.

Regularization in Few-Shot Learning

Recently, manifold regularization (Devries and Taylor 2017; Zhang et al. 2018; Verma et al. 2019; Yun et al. 2019; Liu et al. 2021) has been used in FSL tasks; it is based on simple mixture and mask operations and can improve classification performance. The simplest method is CutOut (Devries and Taylor 2017), which randomly masks out square regions of the input during training and improves network performance. Building on CutOut, many other manifold regularization methods have been developed, e.g., MixUp (Zhang et al. 2018; Verma et al. 2019), CutMix (Yun et al. 2019), and PatchMix (Liu et al. 2021). Specifically, MixUp mixes two samples by interpolating both the images and the labels, as sketched below. In CutMix, patches are cut and pasted among training features, with ground-truth labels mixed proportionally to the area of the patches. PatchMix is similar to CutMix and uses the mixed images for contrastive learning.
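As referenced above, here is a minimal sketch of the MixUp interpolation that these regularizers share; drawing the blending factor `lam` from a Beta(alpha, alpha) distribution follows common practice and is our assumption here.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Interpolate two samples and their one-hot labels with factor lam."""
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # mixed input (or feature)
    y = lam * y1 + (1.0 - lam) * y2   # mixed label
    return x, y
```

When the two samples come from unrelated categories, as in the "Retriever"/"Linnet" example above, the mixed feature `x` drifts away from both class manifolds, which is exactly the deviation AFR is designed to avoid.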
Attention in Few-Shot Learning

In the field of FSL, attention mechanisms (Vaswani et al. 2017a) have become widespread due to their ability to highlight the important parts of inputs by measuring similarities. This enables the network to focus on critical content for specific tasks (Hou et al. 2019; Kang et al. 2021; Chikontwe, Kim, and Park 2022). For instance, Hou et al. proposed a cross-attention method (CAM) to model the semantic relevance between class and query features, leading to adaptive localization of relevant regions and the generation of more discriminative features (Hou et al. 2019). The work in (Kang et al. 2021) computes the cross-correlation between two representations and learns to produce co-attention between them, improving classification accuracy by learning cross-correlational patterns and adapting "where to attend" with respect to the images given at test time. Moreover, the method in (Ye et al. 2020) integrates an entire Transformer module, including attention mechanisms (FEAT), to adapt the features for FSL tasks. The authors of (Lai et al. 2022) propose a transformer-based Semantic Filter (tSF), which defines additional learnable parameters to filter the useful knowledge of the whole base set for the novel category. The recently proposed CAD (Chikontwe, Kim, and Park 2022) employs self-attention operations to cross-attend support and query embeddings, effectively reweighting each instance relative to the others.

Based on this analysis of related work, our method belongs to the manifold regularization family. The methods most related to ours are the recently proposed Manifold Mixup (Verma et al. 2019) and CAM (Hou et al. 2019). Our method differs from theirs in two aspects. First, we introduce semantic knowledge to purposefully select samples for regularization and keep the label of the regularized feature identical to that of the novel feature, avoiding unrelated supervision during training. Second, we design two attention calculations to enhance collaboration and improve feature discriminability, which helps the classifier focus on the distribution of the novel categories rather than associate support and query samples at test time (Hou et al. 2019). Besides, our approach applies directly to features and has lower computational complexity.

Figure 2: The overview of attentive feature regularization (AFR), where $L_{CE}$, $L_{SC}$, and $L_{MSE}$ are the three losses.

Method

In this section, we elaborate on our attentive feature regularization (AFR). First, we briefly revisit the preliminaries of FSL tasks and give an overview of our framework. Second, we delve into the details of our semantic selection process and the attention calculations. Finally, we describe the training and inference procedures of our approach.
Preliminaries

The data for a few-shot learning task is divided into three parts: the base set $D_{base}$, the support set $D_{support}$, and the query set $D_{query}$. The base set $D_{base}$ contains large-scale labeled samples (e.g., hundreds of samples per category) used for training the feature extractor. The categories of these samples, denoted $C_{base}$, provide valuable prior knowledge for describing other samples. The support set $D_{support}$ and the query set $D_{query}$ share the same set of categories, called $C_{novel}$, which is disjoint from the base categories $C_{base}$. The goal of few-shot learning is to construct a classifier, using training samples from both the base set and the support set, that accurately classifies the samples in the query set. The training samples from $D_{support}$ cover $N$ categories randomly sampled from $C_{novel}$, with $K$ samples per category. This is known as the $N$-way-$K$-shot recognition problem.

The overview of our framework is depicted in Figure 2. First, we use semantic knowledge to select the base categories related to a given novel sample and extract features of all these samples with a pre-trained CNN. Second, we design instance attention and channel attention to regularize these features. Third, we design three losses to constrain the regularization procedure and train a classifier.

Attentive Feature Regularization

Textual knowledge uses a semantic description to express each category, providing direct relations between categories. To avoid introducing irrelevant noise into the classifier training, we calculate the relations between these descriptions before regularization. Specifically, we first embed the descriptions into features using the word2vec embedding method (Li et al. 2019). Then, given the feature $t_s$ of a support category, we calculate the relations $R_s = \{r^s_i\}_{i=1}^{|C_{base}|}$ between $t_s$ and the description features $\{t_i\}_{i \in C_{base}}$ of the base categories by cosine similarity:

$$r^s_i = \frac{\langle t_s, t_i \rangle}{\|t_s\|_2 \cdot \|t_i\|_2} \tag{1}$$

where $\langle \cdot, \cdot \rangle$ is the inner product of two vectors. After obtaining the relation scores $R_s$, we sort them and select the samples from the top-$\beta_s$ related categories, denoted $C_{\beta_s}$, for regularization. These semantically relevant features provide a more relevant content supplement for training and avoid introducing much irrelevant noise.

Figure 3: The calculation of calibration in instance attention.

Our regularization operates at the feature level. Therefore, we first represent a sample $I$ as a feature $f = \Phi(I) \in \mathbb{R}^d$ by taking the output of the pre-trained visual model $\Phi$ before the last prediction layer. The model $\Phi$ is pre-trained on the images of the base set $D_{base}$, and $d$ is the feature dimension. We now describe the attention calculations used in our approach.

Instance Attention. Given a support feature $f_s$ with its textual description feature $t_s$, we first select $\beta_s$ base categories $C_{\beta_s}$ using Eq. (1). We then compute the prototype of each selected category $C^i_{\beta_s}$ by averaging all the features of that category:

$$p_i = \frac{1}{|C^i_{\beta_s}|} \sum_{j \in C^i_{\beta_s}} f_j \tag{2}$$

The prototypes of all $C_{\beta_s}$ categories are thus $P = [p_1, p_2, \ldots, p_{|C_{\beta_s}|}]$, where $P \in \mathbb{R}^{|C_{\beta_s}| \times d}$.
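A minimal NumPy sketch of the semantic selection in Eq. (1) and the prototype construction in Eq. (2); variable names mirror the text, and `feats_by_cat`, a per-category feature store, is a hypothetical helper of ours.

```python
import numpy as np

def select_related(t_s, T_base, beta_s):
    """Eq. (1): cosine similarity, then keep the top-beta_s base categories.

    t_s: (d,) support description feature; T_base: (|C_base|, d) base features."""
    sims = T_base @ t_s / (np.linalg.norm(T_base, axis=1)
                           * np.linalg.norm(t_s) + 1e-8)
    return np.argsort(-sims)[:beta_s]

def build_prototypes(feats_by_cat, selected):
    """Eq. (2): average the visual features of each selected category."""
    return np.stack([feats_by_cat[i].mean(axis=0) for i in selected])
```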
We then design a calibration based on self-attention (Vaswani et al. 2017b) to find relevant categories that can help describe the novel category. The details are shown in Figure 3. Specifically, we first design three attention matrices $Q$, $K$, and $V$ to capture the content similarities between the novel features and the prototypes of related categories:

$$Q = f_s W_q, \quad K = P W_k, \quad V = P W_v, \quad (3)$$

where $W_q \in \mathbb{R}^{d \times d}$, $W_k \in \mathbb{R}^{d \times d}$, and $W_v \in \mathbb{R}^{d \times d}$ are the weights of the calibration calculation. We then use these similarities to calibrate the prototypes of the related categories, since the distributions of the base categories and the novel category belong to different spaces. The amplitude $A_s \in \mathbb{R}^d$ of the calibration is calculated as

$$A_s = \mathrm{softmax}\left(\frac{Q K^\top}{\sqrt{d}}\right) V. \quad (4)$$

It measures the relations between the novel category and its related base categories in the feature space. Thus, the calibrated prototypes $\hat{P}$ can be defined as

$$\hat{P} = [\hat{p}_1, \hat{p}_2, ..., \hat{p}_{|C_{\beta_s}|}] = \delta(A_s W_p) + [p_1, p_2, ..., p_{|C_{\beta_s}|}], \quad (5)$$

where $W_p \in \mathbb{R}^{d \times d}$ is the weight matrix of the calibration calculation and $\delta$ is the ReLU function. The calibrated prototypes better simulate the distribution of the novel category and improve the accuracy of the feature regularization.

Channel Attention
Channels of features have different influences on classifiers (Yue et al. 2020). To identify the important content of the channels, we design a channel attention module inspired by SE-Net (Hu, Shen, and Sun 2018). SE-Net introduces the "Squeeze-and-Excitation (SE)" block to adaptively re-calibrate channel-wise feature responses by explicitly modeling interdependencies between channels during backbone training (Hu, Shen, and Sun 2018). We introduce a similar operation into the feature analysis. Specifically, we design two fully connected (FC) layers to "squeeze and excite" the calibrated prototypes $\hat{P}$:

$$E_s = \sigma(FC_2(\delta(FC_1(\hat{P})))), \quad (6)$$

where $\sigma$ is the Sigmoid function, and we intentionally set the embedding size of $FC_1$ to be smaller than that of $FC_2$. This design enhances important content and weakens unrelated content in the features by controlling the size of $FC_1$. To further fuse the channel attention with the prototypes, we set the size of $E_s$ to be the same as $\hat{P} \in \mathbb{R}^{|C_{\beta_s}| \times d}$ by controlling $FC_2$ accordingly. In the fusion stage, we employ a residual structure to prevent vanishing gradients while improving the accuracy of the prototype representation:

$$\bar{P} = E_s \odot \hat{P} + P, \quad (7)$$

where $\odot$ is the Hadamard product. $\bar{P} \in \mathbb{R}^{|C_{\beta_s}| \times d}$ is not only close to the distribution of the novel category through calibration, but also captures the content related to the novel category through channel attention. Therefore, we sample the features of $\bar{P}$ as representations of the given novel category to enrich the training set in our few-shot learning task.
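The following PyTorch module sketches how the two attention calculations of Eqs. (3)-(7) compose: the instance-attention calibration produces $\hat{P}$, and the SE-style channel attention re-weights it into $\bar{P}$. The class name, the single-head formulation, and the reduction ratio of $FC_1$ are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PrototypeCalibration(nn.Module):
    """Sketch of the instance attention (Eqs. 3-5) followed by the
    SE-style channel attention (Eqs. 6-7). Layer names are illustrative."""

    def __init__(self, d: int, reduction: int = 4):
        super().__init__()
        self.Wq = nn.Linear(d, d, bias=False)    # Eq. (3)
        self.Wk = nn.Linear(d, d, bias=False)
        self.Wv = nn.Linear(d, d, bias=False)
        self.Wp = nn.Linear(d, d, bias=False)    # Eq. (5)
        self.fc1 = nn.Linear(d, d // reduction)  # squeeze, Eq. (6)
        self.fc2 = nn.Linear(d // reduction, d)  # excitation, Eq. (6)
        self.d = d

    def forward(self, f_s: torch.Tensor, P: torch.Tensor) -> torch.Tensor:
        # f_s: (d,) support feature; P: (beta_s, d) selected prototypes.
        Q = self.Wq(f_s).unsqueeze(0)                                  # (1, d)
        K, V = self.Wk(P), self.Wv(P)                                  # (beta_s, d)
        A_s = torch.softmax(Q @ K.T / self.d ** 0.5, dim=-1) @ V       # Eq. (4), (1, d)
        P_hat = torch.relu(self.Wp(A_s)) + P                           # Eq. (5), broadcast add
        E_s = torch.sigmoid(self.fc2(torch.relu(self.fc1(P_hat))))     # Eq. (6)
        return E_s * P_hat + P                                         # Eq. (7), Hadamard + residual

if __name__ == "__main__":
    d, beta_s = 64, 3
    module = PrototypeCalibration(d)
    P_bar = module(torch.randn(d), torch.randn(beta_s, d))
    print(P_bar.shape)  # torch.Size([3, 64])
```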
Training and Inference
Denote the novel samples and their labels in an N-way-K-shot task as $\{\{f_s^i, l_s^i\}_{s=1}^{N}\}_{i=1}^{K}$ and the fused prototypes as $\{\bar{P}_s = \{\bar{p}_s^j\}_{j=1}^{\beta_s}\}_{s=1}^{N}$. To simplify the expressions in the subsequent calculations, we combine the given features and prototypes into one set $\{H_s\}_{s=1}^{N}$ with $H_s = \{h_s^j\}_{j=1}^{K+\beta_s}$, where $H_s = [f_s^1, f_s^2, ..., f_s^K, \bar{p}_s^1, \bar{p}_s^2, ..., \bar{p}_s^{\beta_s}]$. We then design two losses to constrain the distribution of the regularized prototypes and use the cross-entropy (CE) loss to train the classifier.

First, we adopt the principles of supervised contrastive learning (Khosla et al. 2020), which aims to bring features of the same category closer together while pulling features of different categories apart. Thus, the supervised contrastive (SC) loss can be calculated as follows:

$$\mathcal{L}_{SC} = -\frac{1}{N|H_s|} \sum_{s=1}^{N} \sum_{\substack{i,j=1 \\ i \neq j}}^{|H_s|} \log \frac{\exp(\langle h_s^i, h_s^j \rangle / \tau)}{\sum_{\forall h_p \notin H_s} \exp(\langle h_s^i, h_p \rangle / \tau)}, \quad (8)$$

where $|H_s| = K + \beta_s$ is the size of $H_s$. Minimizing $\mathcal{L}_{SC}$ encourages maximizing the distances between samples from different categories while clustering samples from the same category closer together. Meanwhile, to bridge the distribution gap between the prototypes of base categories and the features of the novel category, we employ the mean squared error (MSE) operation to measure the distance between the average prototypes and the average novel features, and the loss is designed as

$$\mathcal{L}_{MSE} = \frac{1}{N} \sum_{s=1}^{N} \left\| \frac{1}{K} \sum_{i=1}^{K} f_s^i - \frac{1}{\beta_s} \sum_{j=1}^{\beta_s} \bar{p}_s^j \right\|. \quad (9)$$

Finally, we design an N-way classifier $\Gamma$ to learn the prediction distribution from the given novel features and the prototypes. In this work, the classifier $\Gamma$ is a simple network, e.g., as simple as one fully connected layer. We use the cross-entropy loss to train the classifier with the hard labels:

$$\mathcal{L}_{CE} = \frac{1}{N} \frac{1}{|H_s|} \sum_{s=1}^{N} \sum_{i=1}^{|H_s|} \mathrm{CrossEntropy}(h_s^i, l_s). \quad (10)$$

The total loss for training is defined as

$$\mathcal{L} = \mathcal{L}_{CE} + \mu_1 \mathcal{L}_{SC} + \mu_2 \mathcal{L}_{MSE}, \quad (11)$$

where $\mu_1$ and $\mu_2$ are two weighting factors. During inference, we use the trained classifier to directly predict the category for each feature in the query set.
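To make the objectives concrete, the sketch below assembles Eqs. (8)-(11) in PyTorch, reading Eq. (8) as a standard supervised contrastive loss (with the conventional negative log, consistent with the stated minimization behavior). The tensor layout, with one row of $H$ per category, and the loop-based implementation are our simplifications.

```python
import torch
import torch.nn.functional as F

def sc_loss(H: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Eq. (8) over H of shape (N, K+beta_s, d): pairs inside the same row
    (category) are positives; features from the other rows are negatives."""
    N, M, d = H.shape
    loss = torch.zeros(())
    for s in range(N):
        neg = H[torch.arange(N) != s].reshape(-1, d)        # h_p not in H_s
        denom = torch.exp(H[s] @ neg.T / tau).sum(dim=1)    # (M,)
        for i in range(M):
            for j in range(M):
                if i != j:
                    pos = torch.exp(H[s, i] @ H[s, j] / tau)
                    loss = loss - torch.log(pos / denom[i])
    return loss / (N * M)

def mse_loss(F_novel: torch.Tensor, P_bar: torch.Tensor) -> torch.Tensor:
    """Eq. (9): distance between the mean novel feature (N, K, d) and the
    mean fused prototype (N, beta_s, d), averaged over the N categories."""
    return (F_novel.mean(dim=1) - P_bar.mean(dim=1)).norm(dim=1).mean()

def total_loss(logits, labels, H, F_novel, P_bar, mu1=5.0, mu2=20.0):
    """Eq. (11): cross-entropy on classifier outputs plus the two regularizers;
    mu1 = 5 and mu2 = 20 follow the values reported in the ablation study."""
    return F.cross_entropy(logits, labels) + mu1 * sc_loss(H) + mu2 * mse_loss(F_novel, P_bar)

if __name__ == "__main__":
    N, K, beta_s, d = 5, 1, 3, 64
    H = torch.randn(N, K + beta_s, d)
    F_novel, P_bar = H[:, :K], H[:, K:]
    logits = torch.randn(N * (K + beta_s), N)               # hypothetical classifier output
    labels = torch.arange(N).repeat_interleave(K + beta_s)
    print(total_loss(logits, labels, H, F_novel, P_bar))
```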
Experiments
In this section, we present the experimental evaluation of our AFR. We begin by introducing the experimental settings. Next, we perform ablation studies to analyze the contributions of the different components of our approach. Finally, we compare the performance of our approach with other state-of-the-art (SOTA) methods. Our experiments aim to address the following research questions (RQ):
RQ1: Given a novel category, how many related categories ($C_{\beta_s}$) should be selected from the base categories?
RQ2: What are the effects of the instance attention and channel attention?
RQ3: How do the contrastive learning and the feature space closing operations influence the classifier?
RQ4: How does AFR perform compared to the state-of-the-art FSL methods?

Experimental Settings
Datasets. We evaluate our method on three benchmark datasets, i.e., Mini-ImageNet (Vinyals et al. 2016), Tiered-ImageNet (Ren et al. 2018), and Meta-Dataset (Triantafillou et al. 2019). Specifically, Mini-ImageNet consists of 100 categories and each category has 600 images. It is divided into three parts: 64 base categories for training, 16 novel categories for validation, and the remaining 20 categories for testing. Similar to Mini-ImageNet, Tiered-ImageNet consists of 779,165 images from 608 categories, where 351 base categories are used for training, 97 novel categories are used for validation, and the remaining 160 novel categories are used for testing. Meta-Dataset is a significantly larger-scale dataset that comprises multiple datasets with diverse data distributions, and we follow the usage described in (Xu et al. 2022). Specifically, feature extractor training is conducted using the base categories of Mini-ImageNet, and the other 8 image datasets are utilized for the testing process, including Omniglot (Lake, Salakhutdinov, and Tenenbaum 2015), CUB-200-2011 (Wah et al. 2011), Describable Textures (Cimpoi et al. 2014), Quick Draw (Fernandez-Fernandez et al. 2019), Fungi (Sulc et al. 2020), VGG Flower (Nilsback and Zisserman 2008), and Traffic Signs (Houben et al. 2013).

Evaluation. In our evaluation, we conduct several N-way-K-shot classification tasks. In each task, N novel categories are randomly sampled first, then K samples in each of the N categories are sampled for training, and finally 15 samples (different from the previous K samples) in each of the N categories are sampled for testing. To ensure reliable results, we sample 600 such tasks and report mean accuracies and variances over all tasks. In our experiments, N = 5. Notably, we adhere to the evaluation setting for Meta-Dataset described in (Xu et al. 2022), where the novel categories are randomly sampled from the alternate image datasets, excluding the base categories present in Mini-ImageNet.

Implementation Details. We utilize the features extracted from the pre-trained model and then apply our AFR to obtain both original and regularized features for training the classifier $\Gamma$. These features are used to train the classifier $\Gamma$ with the loss function $\mathcal{L}$ defined in Eq. (11) for a total of 1000 epochs. We employ Adam optimization (Kingma and Ba 2015) with a learning rate of 0.001 and a weight decay of 0.0001 during training.
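A minimal sketch of the episodic evaluation protocol described above (600 sampled 5-way-K-shot tasks with 15 query samples per category); the `dataset` mapping and the `run_task` callback are hypothetical interfaces of our own, not part of the paper.

```python
import random
import statistics

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Draw one N-way-K-shot task; `dataset` maps category -> list of samples
    (a hypothetical structure). Returns (support, query) lists of (x, label)."""
    classes = random.sample(list(dataset), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        pool = random.sample(dataset[c], k_shot + n_query)
        support += [(x, label) for x in pool[:k_shot]]
        query += [(x, label) for x in pool[k_shot:]]
    return support, query

def evaluate(dataset, run_task, n_tasks=600, **kw):
    """Mean accuracy and 95% confidence interval over `n_tasks` episodes;
    `run_task(support, query) -> accuracy` is supplied by the method under test."""
    accs = [run_task(*sample_episode(dataset, **kw)) for _ in range(n_tasks)]
    mean = statistics.mean(accs)
    ci95 = 1.96 * statistics.stdev(accs) / len(accs) ** 0.5
    return mean, ci95

if __name__ == "__main__":
    fake = {c: list(range(100)) for c in range(20)}   # 20 categories, 100 samples each
    dummy = lambda support, query: random.random()    # stand-in for a real method
    print(evaluate(fake, dummy, n_tasks=50, n_way=5, k_shot=1))
```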
Ablation Study
In the ablation study, we use the 64 base categories and 16 novel categories (validation set) of Mini-ImageNet with the publicly available ResNet-12 (Chen et al. 2021) to evaluate the effectiveness of the different components of attentive feature regularization (AFR). Meanwhile, we use the pre-trained word2vec (Li et al. 2019) to represent the labels with vectors. All experiments in the ablation study are conducted in the 5-way-K-shot setting, where K = 1 or K = 5. We first evaluate the category selection and then present the experiments on different attention calculations and training strategies.

The influences of semantic selection (RQ1). The semantic selection is designed for feature regularization; thus, we train the classifier $\Gamma$ with only instance attention and the $\mathcal{L}_{CE}$ loss to validate its effects. In this ablation study, we conduct experiments with different $\beta_s$ for K = 1 and K = 5, where $\beta_s$ ranges from 1 to 64 (the number of base categories in Mini-ImageNet). The results are shown in Figure 4. For comparison, we also plot the results without any operation at the 0th position ("Baseline"). First, both the introduced semantic selection and instance attention improve the performance of the classifier. Moreover, the accuracy of using instance attention with selection is better than that of utilizing the whole set of base categories (the 64th position). Second, as the number of selected categories increases, the performance of the classifier first increases and then decreases. This is because introducing too many categories also brings more noise, which makes it harder to train the classifier. Therefore, we set $\beta_s = 3$ in the remaining experiments.

[Figure 4: The accuracy (%) of the classifiers trained with different numbers of selected base categories.]

| Ins.Att. | Chanl.Att. | K = 1 | K = 5 |
| ✗ | ✗ | 64.02 ± 0.70% | 82.32 ± 0.41% |
| ✗ | ✓ | 68.74 ± 0.61% | 82.56 ± 0.46% |
| ✓ | ✗ | 68.03 ± 0.58% | 83.04 ± 0.43% |
| ✓ | ✓ | 70.68 ± 0.61% | 83.36 ± 0.45% |

Table 1: The accuracy (%) of the classifiers with different attentions, where Ins.Att. and Chanl.Att. denote the instance attention and channel attention, respectively.

The effects of different attentions (RQ2). To evaluate the effectiveness of the different attentions, we train four classifiers with or without the attention operations, using only the $\mathcal{L}_{CE}$ loss. The performance of each classifier is evaluated for K = 1 and K = 5, and the results are shown in Table 1. The results indicate that both instance and channel attention improve the classifier performance on the query samples. Compared to the classifier without any attention, the introduced instance attention and channel attention achieve nearly 6% accuracy improvements in the K = 1 experiment, respectively. More importantly, combining these attentions provides the best performance (the last row of Table 1), with over 7.5% improvement, which validates the effectiveness of our attention calculations.

The effectiveness of different losses (RQ3). In this ablation study, we train four classifiers with different loss functions, where the instance attention and the channel attention are applied in all cases. To balance the optimization of these losses, we set $\mu_1 = 5$ and $\mu_2 = 20$ empirically, following (Li et al. 2022). The performances of the four classifiers for K = 1 and K = 5 are shown in Table 2, which shows that both $\mathcal{L}_{SC}$ and $\mathcal{L}_{MSE}$ contribute to the training procedure of the classifier. Moreover, combining the two losses further improves classification performance.

| $\mathcal{L}_{SC}$ | $\mathcal{L}_{MSE}$ | K = 1 | K = 5 |
| ✗ | ✗ | 70.68 ± 0.61% | 83.36 ± 0.45% |
| ✗ | ✓ | 70.93 ± 0.66% | 83.76 ± 0.43% |
| ✓ | ✗ | 71.18 ± 0.63% | 83.70 ± 0.45% |
| ✓ | ✓ | 72.35 ± 0.63% | 84.11 ± 0.43% |

Table 2: The accuracy (%) of the classifiers with different training strategies of loss functions.

| Regularization | K = 1 | K = 5 |
| Baseline | 64.02 ± 0.70% | 82.58 ± 0.45% |
| CutMix† | 64.83 ± 0.72% | 80.89 ± 0.51% |
| Mixup† | 64.93 ± 0.69% | 81.55 ± 0.47% |
| CutOut† | 64.85 ± 0.68% | 81.53 ± 0.48% |
| AFR (only $\bar{P}_s$) | 67.58 ± 0.69% | 82.96 ± 0.46% |
| AFR ($f_s$ + $\bar{P}_s$) | 72.35 ± 0.63% | 83.74 ± 0.43% |

Table 3: The accuracy (%) of the classifiers trained with different regularization strategies. † denotes our implementation.

We also verify the effects of different regularizations in Table 3. The common regularizations, i.e., CutMix, Mixup, and CutOut, achieve slight improvements over the baseline in the 1-shot task but are harmful to accuracy in the 5-shot task. The classifier trained with the regularized features obtains over 3% improvement (AFR (only $\bar{P}_s$)). Moreover, our AFR ($f_s$ + $\bar{P}_s$) further improves the performance.

Comparisons with Other Methods (RQ4)
We compare the performance of our method with the latest methods on the Mini-ImageNet and Tiered-ImageNet datasets. Table 4 shows the results, which contain MatchingNets (Lee et al. 2019), ProtoNets (Snell, Swersky, and Zemel 2017), MixtFSL (Afrasiyabi, Lalonde, and Gagné 2021), RENet (Kang et al. 2021), DeepBDC (Xie et al. 2022), FeLMi (Roy et al. 2022), tSF (Lai et al. 2022), RankDNN (Guo et al. 2023), FRN (Wertheimer et al. 2021), BML (Zhou et al. 2021), FEAT (Ye et al. 2020), Label-Halluc (Jian and Torresani 2022), SEGA (Yang, Wang, and Chen 2022), IFSL (Yue et al. 2020), and LRDC (Yang, Liu, and Xu 2021). At the same time, we apply our approach to seven recently proposed popular FSL methods, i.e., Meta-Baseline, FRN, BML, FEAT, Label-Halluc, SEGA, and LRDC. We clearly observe that our approach consistently improves the classification performance in all settings, agnostic to the method, dataset, and pre-trained backbone. For the features extracted with various methods on Mini-ImageNet, we obtain a remarkable 6.61% accuracy improvement over the baseline ("Meta-Baseline + AFR") and the best accuracy of 74.57% with features from (Zhou et al. 2021) ("Label-Halluc + AFR") under K = 1.
Generally, our AFR outperforms the compared methods by about 2% in accuracy for K = 1, and the improvements are generally greater in the 1-shot setting than in the 5-shot setting. On Tiered-ImageNet, we gain a 4.42% improvement ("BML + AFR") and achieve the best performance of 89.59% ("LRDC + AFR") for K = 1 and K = 5, respectively.

| Method | Backbone | Mini-ImageNet K = 1 | Mini-ImageNet K = 5 | Tiered-ImageNet K = 1 | Tiered-ImageNet K = 5 |
| MatchingNets (NeurIPS16) | ResNet-12 | 63.08 ± 0.80% | 75.99 ± 0.60% | 68.50 ± 0.92% | 80.60 ± 0.71% |
| ProtoNets (NeurIPS17) | ResNet-12 | 60.37 ± 0.83% | 78.02 ± 0.57% | 65.65 ± 0.92% | 83.40 ± 0.65% |
| MixtFSL (ICCV21) | ResNet-12 | 63.98 ± 0.79% | 82.04 ± 0.49% | 70.97 ± 1.03% | 86.16 ± 0.67% |
| RENet (ICCV21) | ResNet-12 | 67.60 ± 0.44% | 82.58 ± 0.30% | 71.61 ± 0.51% | 85.28 ± 0.35% |
| DeepBDC (CVPR22) | ResNet-12 | 67.34 ± 0.43% | 84.46 ± 0.28% | 72.34 ± 0.49% | 87.31 ± 0.32% |
| FeLMi (NeurIPS22) | ResNet-12 | 67.47 ± 0.78% | 86.08 ± 0.44% | 71.63 ± 0.89% | 87.01 ± 0.55% |
| tSF (ECCV22) | ResNet-12 | 69.74 ± 0.47% | 83.91 ± 0.30% | 71.89 ± 0.50% | 85.49 ± 0.35% |
| FEAT (CVPR20) | ResNet-12 | 66.78 ± 0.20% | 82.05 ± 0.14% | 70.80 ± 0.23% | 84.79 ± 0.16% |
| FEAT + AFR | ResNet-12 | 72.57 ± 0.62% | 85.06 ± 0.42% | 71.55 ± 0.74% | 87.64 ± 0.46% |
| Meta-Baseline (ICCV21) | ResNet-12 | 63.17 ± 0.23% | 79.26 ± 0.17% | 68.62 ± 0.27% | 83.74 ± 0.18% |
| Meta-Baseline + AFR | ResNet-12 | 69.78 ± 0.61% | 84.51 ± 0.41% | 69.66 ± 0.70% | 86.29 ± 0.48% |
| FRN (CVPR21) | ResNet-12 | 66.45 ± 0.19% | 82.83 ± 0.13% | 71.16 ± 0.22% | 86.01 ± 0.15% |
| FRN + AFR | ResNet-12 | 71.66 ± 0.56% | 84.75 ± 0.46% | 71.54 ± 0.71% | 87.35 ± 0.47% |
| BML (ICCV21) | ResNet-12 | 67.04 ± 0.63% | 83.63 ± 0.29% | 68.99 ± 0.50% | 85.49 ± 0.34% |
| BML + AFR | ResNet-12 | 73.84 ± 0.60% | 86.63 ± 0.41% | 73.41 ± 0.74% | 87.44 ± 0.48% |
| Label-Halluc (AAAI22) | ResNet-12 | 68.28 ± 0.77% | 86.54 ± 0.46% | 73.34 ± 1.25% | 87.68 ± 0.83% |
| Label-Halluc + AFR | ResNet-12 | 74.57 ± 0.58% | 87.30 ± 0.37% | 73.66 ± 0.66% | 89.15 ± 0.40% |
| SEGA (WACV22) | ResNet-12 | 69.04 ± 0.26% | 79.03 ± 0.18% | 72.18 ± 0.30% | 84.28 ± 0.21% |
| SEGA + AFR | ResNet-12 | 71.14 ± 0.60% | 84.26 ± 0.42% | 72.87 ± 0.45% | 85.26 ± 0.54% |
| IFSL (NeurIPS20) | WRN-28-10 | 64.12 ± 0.44% | 80.97 ± 0.31% | 69.96 ± 0.46% | 86.19 ± 0.34% |
| tSF (ECCV22) | WRN-28-10 | 70.23 ± 0.46% | 84.55 ± 0.29% | 74.87 ± 0.49% | 88.05 ± 0.32% |
| RankDNN (AAAI23) | WRN-28-10 | 66.67 ± 0.15% | 84.79 ± 0.11% | 74.00 ± 0.15% | 88.80 ± 0.25% |
| FEAT (CVPR20) | WRN-28-10 | 65.10 ± 0.20% | 81.11 ± 0.14% | 70.41 ± 0.23% | 84.38 ± 0.16% |
| FEAT + AFR | WRN-28-10 | 71.76 ± 0.59% | 84.60 ± 0.42% | 71.74 ± 0.74% | 86.33 ± 0.53% |
| LRDC (ICLR21) | WRN-28-10 | 68.57 ± 0.55% | 82.88 ± 0.42% | 74.38† ± 0.93% | 88.12† ± 0.59% |
| LRDC + AFR | WRN-28-10 | 72.98 ± 0.62% | 86.91 ± 0.40% | 75.26 ± 0.67% | 89.59 ± 0.46% |

Table 4: The accuracies (%) of different methods on the novel categories from Mini-ImageNet (Vinyals et al. 2016) and Tiered-ImageNet (Ren et al. 2018). † denotes our implementation.

To further demonstrate the effectiveness of our AFR, we conduct evaluations on Meta-Dataset with the K = 1 setting. The results are summarized in Table 5, including SimpleShot (Wang et al. 2019), ZN (Fei et al. 2021), and TCPR (Xu et al. 2022). We can see that our AFR exhibits strong adaptability to new data domains and achieves the best classification performance across several testing datasets. Notably, even compared to the transductive setting of TCPR, our approach gains more than 3% improvements on the Traffic Signs and Describable Textures datasets.
| Method | Mini-Test | CUB | Fungi | Omni | Sign | QDraw | Flower | DTD |
| SimpleShot (arXiv2019) | 67.18% | 49.68% | 43.79% | 78.19% | 54.04% | 54.50% | 71.68% | 51.19% |
| ZN (ICCV2021) | 67.05% | 48.15% | 43.24% | 78.80% | 53.92% | 52.86% | 72.01% | 52.20% |
| TCPR (NeurIPS2022) | 69.52% | 53.83% | 46.28% | 80.88% | 56.65% | 57.31% | 75.37% | 54.38% |
| AFR | 72.98% | 54.45% | 47.93% | 81.84% | 60.12% | 58.20% | 76.11% | 57.47% |

Table 5: The accuracies (%) of different methods on Meta-Dataset (Triantafillou et al. 2019) with K = 1. Sign and DTD denote the Traffic Signs and Describable Textures datasets, respectively.

Conclusion
In this paper, we have proposed an attentive feature regularization method named AFR to tackle the challenges in few-shot learning. Specifically, (1) category selection based on semantic knowledge is employed to carefully constrain the features used for regularization, which helps avoid introducing unrelated noise into the training process; and (2) two attention calculations are designed to improve the complementarity of the features across different categories and to improve the channel discriminability of the regularized features. Extensive experiments have demonstrated the effectiveness of our proposed method, particularly in the 1-shot setting. Note that the current usage of semantic relations is superficial. In future work, we will focus on achieving a more robust feature regularization by incorporating additional techniques, such as graph convolutional networks (GCNs) and graph neural networks (GNNs), to further enhance the performance of the classifier.

Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 62202439).

References
Afrasiyabi, A.; Lalonde, J.; and Gagné, C. 2021. Mixture-based Feature Space Learning for Few-shot Image Classification. In ICCV, 9021–9031.
Chen, Y.; Liu, Z.; Xu, H.; Darrell, T.; and Wang, X. 2021. Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning. In ICCV, 9042–9051.
Chikontwe, P.; Kim, S.; and Park, S. H. 2022. CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification. In CVPR, 14534–14543.
Chou, H.; Chang, S.; Pan, J.; Wei, W.; and Juan, D. 2020. Remix: Rebalanced Mixup. In ECCV Workshops (6), volume 12540, 95–110.
Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; and Vedaldi, A. 2014. Describing Textures in the Wild. In CVPR, 3606–3613.
Deutsch, S.; Kolouri, S.; Kim, K.; Owechko, Y.; and Soatto, S. 2017. Zero Shot Learning via Multi-scale Manifold Regularization. In CVPR, 5292–5299.
Devries, T.; and Taylor, G. W. 2017. Improved Regularization of Convolutional Neural Networks with Cutout. CoRR, abs/1708.04552.
Fei, N.; Gao, Y.; Lu, Z.; and Xiang, T. 2021. Z-score Normalization, Hubness, and Few-shot Learning. In ICCV, 142–151.
Fernandez-Fernandez, R.; Victores, J. G.; Estevez, D.; and Balaguer, C. 2019. Quick, Stat!: A Statistical Analysis of the Quick, Draw! Dataset. arXiv preprint arXiv:1907.06417.
Guo, H.; Mao, Y.; and Zhang, R. 2019. MixUp as Locally Linear Out-of-Manifold Regularization. In AAAI, 3714–3722.
Guo, Q.; Haotong, G.; Wei, X.; Fu, Y.; Yu, Y.; Zhang, W.; and Ge, W. 2023. RankDNN: Learning to Rank for Few-Shot Learning. In AAAI, volume 37, 728–736.
Hariharan, B.; and Girshick, R. B. 2017. Low-Shot Visual Recognition by Shrinking and Hallucinating Features. In ICCV, 3037–3046.
Hou, R.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2019. Cross Attention Network for Few-shot Classification. In NeurIPS, 4005–4016.
Hou, S.; Liu, X.; and Wang, Z. 2017. DualNet: Learn Complementary Features for Image Recognition. In ICCV, 502–510.
Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; and Igel, C. 2013. Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark. In IJCNN, 1–8.
Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-Excitation Networks. In CVPR, 7132–7141.
Jian, Y.; and Torresani, L. 2022. Label Hallucination for Few-shot Classification. In AAAI, volume 36, 7005–7014.
Kang, D.; Kwon, H.; Min, J.; and Cho, M. 2021. Relational Embedding for Few-Shot Classification. In ICCV, 8802–8813.
Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; and Krishnan, D. 2020. Supervised Contrastive Learning. In NeurIPS.
Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In ICLR.
Lai, J.; Yang, S.; Liu, W.; Zeng, Y.; Huang, Z.; Wu, W.; Liu, J.; Gao, B.; and Wang, C. 2022. tSF: Transformer-Based Semantic Filter for Few-Shot Learning. In ECCV.
Lake, B. M.; Salakhutdinov, R.; and Tenenbaum, J. B. 2015. Human-level Concept Learning through Probabilistic Program Induction. Science, 1332–1338.
Lee, K.; Maji, S.; Ravichandran, A.; and Soatto, S. 2019. Meta-Learning With Differentiable Convex Optimization. In CVPR, 10657–10665.
Li, A.; Huang, W.; Lan, X.; Feng, J.; Li, Z.; and Wang, L. 2020. Boosting Few-Shot Learning With Adaptive Margin Loss. In CVPR, 12573–12581.
Li, A.; Luo, T.; Lu, Z.; Xiang, T.; and Wang, L. 2019. Large-Scale Few-Shot Learning: Knowledge Transfer With Class Hierarchy. In CVPR, 7212–7220.
Li, S.; Xia, X.; Ge, S.; and Liu, T. 2022. Selective-Supervised Contrastive Learning with Noisy Labels. In CVPR, 316–325.
Liu, C.; Fu, Y.; Xu, C.; Yang, S.; Li, J.; Wang, C.; and Zhang, L. 2021. Learning a Few-shot Embedding Model with Contrastive Learning. In AAAI, 8635–8643.
Liu, N.; Zhao, Q.; Zhang, N.; Cheng, X.; and Zhu, J. 2019. Pose-Guided Complementary Features Learning for Amur Tiger Re-Identification. In ICCV Workshops, 286–293.
Lu, J.; Wang, S.; Zhang, X.; Hao, Y.; and He, X. 2023. Semantic-based Selection, Synthesis, and Supervision for Few-shot Learning. In ACM Multimedia, 3569–3578.
Luo, X.; Xu, J.; and Xu, Z. 2022. Channel Importance Matters in Few-Shot Image Classification. In ICML, volume 162 of Proceedings of Machine Learning Research, 14542–14559. PMLR.
Nilsback, M.; and Zisserman, A. 2008. Automated Flower Classification over a Large Number of Classes. In ICVGIP, 722–729.
Peng, Z.; Li, Z.; Zhang, J.; Li, Y.; Qi, G.; and Tang, J. 2019. Few-Shot Image Recognition With Knowledge Transfer. In ICCV, 441–449.
Ren, M.; Triantafillou, E.; Ravi, S.; Snell, J.; Swersky, K.; Tenenbaum, J. B.; Larochelle, H.; and Zemel, R. S. 2018. Meta-Learning for Semi-Supervised Few-Shot Classification. In ICLR.
Rodríguez, P.; Laradji, I. H.; Drouin, A.; and Lacoste, A. 2020. Embedding Propagation: Smoother Manifold for Few-Shot Classification. In ECCV, 121–138.
Roy, A.; Shah, A.; Shah, K.; Dhar, P.; Cherian, A.; and Chellappa, R. 2022. FeLMi: Few shot Learning with hard Mixup. In NeurIPS.
Shi, C.; Wu, H.; and Wang, L. 2023. A Feature Complementary Attention Network Based on Adaptive Knowledge Filtering for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens., 61: 1–19.
Snell, J.; Swersky, K.; and Zemel, R. S. 2017. Prototypical Networks for Few-shot Learning. In NeurIPS, 4077–4087.
Sulc, M.; Picek, L.; Matas, J.; Jeppesen, T. S.; and Heilmann-Clausen, J. 2020. Fungi Recognition: A Practical Use Case. In WACV, 2305–2313.
Triantafillou, E.; Zhu, T.; Dumoulin, V.; Lamblin, P.; Evci, U.; Xu, K.; Goroshin, R.; Gelada, C.; Swersky, K.; Manzagol, P.-A.; et al. 2019. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. arXiv preprint arXiv:1903.03096.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017a. Attention is All you Need. In NeurIPS, 5998–6008.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017b. Attention is All you Need. In NeurIPS, 5998–6008.
Velazquez, D.; Rodríguez, P.; Gonfaus, J. M.; Roca, F. X.; and Gonzalez, J. 2022. A Closer Look at Embedding Propagation for Manifold Smoothing. The Journal of Machine Learning Research, 23: 1–27.
Verma, V.; Lamb, A.; Beckham, C.; Najafi, A.; Mitliagkas, I.; Lopez-Paz, D.; and Bengio, Y. 2019. Manifold Mixup: Better Representations by Interpolating Hidden States. In ICML, 6438–6447.
Vinyals, O.; Blundell, C.; Lillicrap, T.; Kavukcuoglu, K.; and Wierstra, D. 2016. Matching Networks for One Shot Learning. In NeurIPS, 3630–3638.
Wah, C.; Branson, S.; Welinder, P.; Perona, P.; and Belongie, S. 2011. The Caltech-UCSD Birds-200-2011 Dataset.
Wang, S.; Yue, J.; Liu, J.; Tian, Q.; and Wang, M. 2020. Large-Scale Few-Shot Learning via Multi-modal Knowledge Discovery. In ECCV, 718–734.
Wang, S.; Zhang, X.; Hao, Y.; Wang, C.; and He, X. 2022. Multi-directional Knowledge Transfer for Few-Shot Learning. In ACM Multimedia, 3993–4002.
Wang, Y.; Chao, W.-L.; Weinberger, K. Q.; and Van Der Maaten, L. 2019. SimpleShot: Revisiting Nearest-Neighbor Classification for Few-shot Learning. arXiv preprint arXiv:1911.04623.
Wang, Y.; Girshick, R. B.; Hebert, M.; and Hariharan, B. 2018. Low-Shot Learning From Imaginary Data. In CVPR, 7278–7286.
Wertheimer, D.; Tang, L.; and Hariharan, B. 2021. Few-Shot Classification With Feature Map Reconstruction Networks. In CVPR, 8012–8021.
Xie, J.; Long, F.; Lv, J.; Wang, Q.; and Li, P. 2022. Joint Distribution Matters: Deep Brownian Distance Covariance for Few-Shot Classification. In CVPR, 7962–7971.
Xu, J.; Luo, X.; Pan, X.; Li, Y.; Pei, W.; and Xu, Z. 2022. Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid. In NeurIPS, 35: 21073–21086.
Yang, F.; Wang, R.; and Chen, X. 2022. SEGA: Semantic Guided Attention on Visual Prototype for Few-shot Learning. In WACV, 1056–1066.
Yang, S.; Liu, L.; and Xu, M. 2021. Free Lunch for Few-shot Learning: Distribution Calibration. In ICLR.
Ye, H.; Hu, H.; Zhan, D.; and Sha, F. 2020. Few-Shot Learning via Embedding Adaptation With Set-to-Set Functions. In CVPR, 8805–8814.
Yue, Z.; Zhang, H.; Sun, Q.; and Hua, X. 2020. Interventional Few-Shot Learning. In NeurIPS.
Yun, S.; Han, D.; Chun, S.; Oh, S. J.; Yoo, Y.; and Choe, J. 2019. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features. In ICCV, 6022–6031.
Zhang, H.; Cissé, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2018. mixup: Beyond Empirical Risk Minimization. In ICLR.
Zhou, Z.; Qiu, X.; Xie, J.; Wu, J.; and Zhang, C. 2021. Binocular Mutual Learning for Improving Few-shot Classification. In ICCV, 8382–8391.
Zhu, X.; Zhang, R.; He, B.; Zhou, A.; Wang, D.; Zhao, B.; and Gao, P. 2023. Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement. arXiv preprint arXiv:2304.01195.
2024
866
18,702
Memory-Efficient Prompt Tuning for Incremental Histopathology Classification
Yu Zhu1,3*, Kang Li1*†, Lequan Yu2, Pheng-Ann Heng1
1Department of Computer Science and Engineering, The Chinese University of Hong Kong
2Department of Statistics and Actuarial Science, The University of Hong Kong
3Department of Mechanical Engineering, The University of Hong Kong
{yzhu, kli, pheng}@cse.cuhk.edu.hk, [email protected]
*These authors contributed equally. †Corresponding Author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Recent studies have made remarkable progress in histopathology classification. Building on current successes, contemporary works proposed to further upgrade the model towards a more generalizable and robust direction through incrementally learning from sequentially delivered domains. Unlike previous parameter isolation based approaches that usually demand massive computation resources during model updating, we present a memory-efficient prompt tuning framework to cultivate model generalization potential at economical memory cost. For each incoming domain, we reuse the existing parameters of the initial classification model and attach lightweight trainable prompts to it for customized tuning. Considering the domain heterogeneity, we perform decoupled prompt tuning, where we adopt a domain-specific prompt for each domain to independently investigate its distinctive characteristics, and one domain-invariant prompt shared across all domains to continually explore the common content embedding throughout time. All domain-specific prompts are appended to the prompt bank and isolated from further changes to prevent forgetting the distinctive features of early-seen domains, while the domain-invariant prompt is passed on and iteratively evolves by style-augmented prompt refining to improve model generalization capability over time. Specifically, we construct a graph with the existing prompts and build a style-augmented graph attention network to guide the domain-invariant prompt in exploring the overlapped latent embedding among all delivered domains for more domain-generic representations. We have extensively evaluated our framework on two histopathology tasks, i.e., breast cancer metastasis classification and epithelium-stroma tissue classification, where our approach yielded superior performance and memory efficiency over the competing methods.

Introduction
Histopathology classification is a fundamental task in cancer diagnosis. It aims to specify the malignancy and benignity of suspected tissues by microscopic examination. The resulting analysis is normally considered the gold standard in determining the presence and spread of certain cancers (Bejnordi et al. 2017). Although recent deep-learning models have achieved remarkable progress on this task, contemporary studies are not content with the achievements made so far but strive to upgrade and update model functionality toward perfection by incremental learning (Derakhshani et al. 2022; Li, Yu, and Heng 2022). One practical yet challenging direction for model upgrading is to incrementally boost its generalization potential over heterogeneous histopathology data.
Depending on the technician skills and digital scanner brands in different medical centers, the histology data sampled from multiple sites (i.e., domains) often exhibit heterogeneous appearances after hematoxylin and eosin (H&E) staining, varying from dark bluish purple to light pinkish purple (Lin et al. 2019). Domain incremental learning (DIL), i.e., a model updating paradigm that enables the model to progressively adapt to more and more heterogeneous domains as time goes by, is therefore substantial for robust histopathology classification. For any updated model, the basic requirement is to keep the existing capability unaffected, i.e., not catastrophically forgetting the previously-acquired domains. Moreover, we expect to enhance its generalization ability, i.e., to adapt well not only to the currently delivered domains but also to the unseen domains that might be encountered in the future. Particularly in the medical field, for each update, the model has no access to the early-delivered domains due to data privacy concerns (Li et al. 2020) and storage burden (Lin et al. 2019). In addition, the domain identity, i.e., the label indicating which domain one particular sample comes from, is erased as part of patient privacy during data anonymization and is unavailable for model training and testing during the entire learning lifespan (Gonzalez, Sakas, and Mukhopadhyay 2020).

The straightforward approach is to finetune the previous model with each sequentially incoming domain one by one. However, with the absence of past domains and data heterogeneity throughout time, this inevitably overrides and disrupts the parameters learned for past domains, leading to catastrophic forgetting (Li and Hoiem 2017). A promising way to address this issue is from the model-centric perspective, i.e., isolating the early-acquired parameters (e.g., the whole model) into separate storage and allocating new parameters to acquire the newly-arrived domain (Gonzalez, Sakas, and Mukhopadhyay 2020; Miao et al. 2022; Li et al. 2019). Despite their effectiveness, most of these methods are extremely memory-intensive, with increasing computation demands and memory usage over time, greatly limiting their applicability to gigapixel-sized histopathology images (e.g., 1-3 GB per slide (Zhao et al. 2019)).

Fortunately, we could borrow some insights from recent advances in prompt-based natural language processing approaches (Su et al. 2022), which pointed out that employing learnable prompt tokens as parameterized inputs can encode the necessary guidance to conditionally adapt a frozen pre-trained model to a downstream target task. Inspired by that, it would be unnecessary to completely adjust the previous model to accommodate the currently delivered domain, or to isolate the entire model parameters in separate memory units to retain early-acquired knowledge. Alternatively, it could be more memory-efficient to simply perform prompt tuning upon the initial well-trained classification model and save these lightweight prompts for future usage instead.

In this paper, we present a memory-efficient prompt tuning framework to incrementally learn from sequentially-delivered heterogeneous domains, progressively cultivating the histopathology classification model towards a more generalizable and robust direction over time.
Considering the data heterogeneity of the domains delivered at different time steps, we perform decoupled prompt tuning with two types of prompts. We employ a domain-specific prompt for each domain to independently investigate its distinctive features, while maintaining a domain-invariant prompt shared across all domains to continually explore the common content embedding over time. For each incoming domain, we freeze the initial model and train two lightweight prompts upon the existing weights for memory and computation efficiency. We learn a domain-specific prompt from scratch and learn the shared domain-invariant prompt iteratively upon the previous one via style-augmented prompt refining. Specifically, we build up a graph with the existing prompts and constrain the domain-invariant prompt to explore the co-existing and domain-agnostic representations among all seen domains via graph attention propagation. Meanwhile, we augment the style variations met in the prompt refining process to expose more domain-generic representations and further boost its generalization potential. At the end of each time step, we store all prompts in the bank at economical memory cost. The domain-specific prompts are isolated from further changes and retrieved later to prevent forgetting early-seen domains. The domain-invariant prompt is carried forward to incrementally acquire more domain-generic features and improve generalization ability.

We have extensively evaluated our framework on two histopathology classification tasks, including breast cancer metastasis classification on the Camelyon17 dataset (Bandi et al. 2018) and epithelium-stroma tissue classification on a multi-site data collection. In both tasks, our approach showed superior performance over competing methods, with better generalization on unseen domains and less forgetting of past domains. Our main contributions can be summarized as follows:
• We proposed a memory-efficient prompt tuning framework to iteratively upgrade the model towards a more generalized direction at economical memory cost.
• We performed decoupled prompt tuning with a series of domain-specific prompts and a shared domain-invariant prompt to tackle the heterogeneity of incoming domains.
• We presented style-augmented prompt refining to iteratively evolve the domain-invariant prompt over time to boost its generalization potential on unseen data.
• We have validated our approach on two histopathology image classification tasks, where our framework outperformed the other comparison methods significantly.

Related Work
Domain Incremental Learning
Considerable efforts have been devoted to domain incremental learning to progressively cultivate the model to accommodate more and more heterogeneous domains. One stream of works does not require any additional module to support model updating (Aljundi et al. 2018; Li and Hoiem 2017; Kirkpatrick et al. 2017; Zenke, Poole, and Ganguli 2017). For example, the regularization-based methods (Kirkpatrick et al. 2017; Aljundi et al. 2018) employed a loss term to penalize large changes of the parameters important to historical domains to help retain early-acquired knowledge. However, these approaches often suffered from interval forgetting when dealing with a long sequence of incremental learning tasks, and their performance still has certain improvement space (Luo et al. 2020; Mai et al. 2022). Other streams of work (e.g., replay-based methods and parameter isolation methods) sacrificed memory usage to trade for better model performance. For example, Shin et al. (Shin et al. 2017) employed an extra generative adversarial network (GAN) (≈266 MB) to memorize and replay past domain distributions to prevent forgetting past domains, while Gonzalez et al. (Gonzalez, Sakas, and Mukhopadhyay 2020) stored all previously-learned models (≈81 MB each) in a separate space and maintained an autoencoder-based domain classifier to retrieve them back when necessary. Although the above methods can effectively alleviate the forgetting of historical domains, they also result in massive memory consumption, making them less applicable to gigabyte-sized histopathology images. In contrast, we maximally reuse the existing initial model and perform decoupled prompt tuning upon it with two lightweight prompts (≈0.5 MB) for each incoming domain, greatly boosting memory efficiency.

[Figure 1: Overview of our memory-efficient prompt tuning framework. We perform decoupled prompt tuning upon the initial model with two lightweight prompts, aiming to acquire the latest domain knowledge at economical memory cost. A domain-specific prompt is employed independently for each domain to acquire its distinctive features, such as appearance; the learned domain-specific prompts are stored and isolated in the prompt bank to help alleviate the forgetting of early-acquired domains. Meanwhile, a domain-invariant prompt shared across domains progressively learns the common content over time, such as shape priors. Style-augmented prompt refining is performed upon the previous domain-invariant prompt, constraining its exploration scope within the overlapped latent embeddings of all seen domains and guiding it to learn domain-generic representations, gradually strengthening the generalization potential over time.]

Prompt Learning
Inspired by the recent progress of prompt tuning in natural language processing (Su et al. 2022), contemporary works attempted to apply it to incremental learning. Most prior works concentrated on the class incremental learning (CIL) setting, i.e., progressively learning to categorize more and more classes over time. They prevented forgetting early-acquired classes by creating a shared prompt pool for instance-wise prompt query (Wang et al. 2022b), setting general prompts and expert prompts to form complementary learning (Wang et al. 2022a), etc. However, most of them are less prepared for the domain incremental learning setting, especially the demand to generalize to unseen data. Most CIL approaches would not expect the model to correctly recognize the objects of unseen classes (i.e., never learned in training).
However, in domain incremental learning settings, it is highly desirable for a model to generalize well to unseen domains of unknown appearances for robust classification. Very recently, a DIL approach, S-Prompt (Wang, Huang, and Hong 2022), tried to independently learn the prompts across domains for a win-win game but still overlooked the generalization issue. In our work, we put extra effort into maintaining a domain-invariant prompt shared over time by style-augmented prompt refining, incrementally absorbing more domain-generic features to improve generalization.

Methodology
In domain incremental learning (DIL) settings, we assume a heterogeneous data stream $D_1, D_2, ..., D_T$ sequentially delivered from multiple sites one by one. With the arrival of the dataset $D_t$ at time step $t$, our goal is to incrementally optimize the previous model $M_{t-1}$ with $D_t$, such that the updated model $M_t$ does not catastrophically forget the past domains $D_1, D_2, ..., D_{t-1}$, while maintaining satisfying generalization ability on unseen domains. For privacy concerns in medical fields, all past domains are inaccessible and no domain identity is available. Fig. 1 overviews our framework. For each incoming domain, we reuse the initial model and perform decoupled prompt tuning upon it with two lightweight prompts to acquire new domain knowledge in a memory-efficient manner. Specifically, the domain-specific prompt (DSP) is independently learned from scratch to tackle the distinctive features, while the domain-invariant prompt (DIP) is iteratively evolved from the previous one by style-augmented prompt refining to incrementally explore domain-generic features.

Decoupled Prompt Tuning
We construct a transformer backbone (e.g., ViT (Dosovitskiy et al. 2020)) as the classification model. It consists of a basic transformer feature extractor $f_b$ that converts the input image into sequence-like high-level representations, and a classification layer $f_\phi$ that maps the representation to the final prediction $\hat{y}$. At time step $t$, with the arrival of the current domain $D_t$, we load pre-trained weights into the basic feature extractor following prior works (Wang et al. 2022a,b) and freeze them. Upon it, we perform decoupled prompt tuning with two lightweight trainable prompts, i.e., one domain-invariant prompt $p_I^{(t)}$ and one domain-specific prompt $p_s^t$, to acquire the current domain. To avoid any confusion, we use the superscript $(t)$ to denote the shared domain-invariant prompt learned at the t-th time step, while using the superscript $t$ to indicate the t-th domain-specific prompt.

The domain-invariant prompt and the domain-specific prompt can be inserted as additional inputs of any multi-head self-attention (MSA) layer in the basic transformer feature extractor. Take the i-th MSA layer as an example. Before passing the previous MSA layer outputs $h_{i-1} \in \mathbb{R}^{l \times m}$ to it, we keep the query $h_{i-1}^q$ and append the domain-invariant prompt $p_I^{(t)} \in \mathbb{R}^{l \times m}$ to its key $h_{i-1}^k$ and value $h_{i-1}^v$ to guide it to explore the domain-shared representations:

$$h_i = f_{MSA}^{(i)}\left(h_{i-1}^q, [p_I^k; h_{i-1}^k], [p_I^v; h_{i-1}^v]\right), \quad (1)$$

where $f_{MSA}^{(i)}$ and $h_i$ denote the i-th MSA layer and its output tuned with the domain-invariant prompt, respectively. $p_I^k \in \mathbb{R}^{l/2 \times m}$ and $p_I^v \in \mathbb{R}^{l/2 \times m}$ are split from $p_I^{(t)}$ to maintain the same sequence length before and after the MSA layer.
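The snippet below sketches the prompt insertion of Eq. (1), with torch's built-in `nn.MultiheadAttention` standing in for one MSA layer of the ViT backbone: splitting the prompt into key and value halves keeps the output sequence length unchanged. The shapes and the stand-in module are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def prompted_msa(msa: nn.MultiheadAttention,
                 h: torch.Tensor,
                 prompt: torch.Tensor) -> torch.Tensor:
    """Eq. (1)/(2): prepend the key/value halves of a trainable prompt to the
    keys and values of one MSA layer while leaving the queries untouched.
    h: previous layer output of shape (seq_len, batch, m)."""
    p_k, p_v = prompt.chunk(2, dim=0)                    # two (l/2, batch, m) halves
    k = torch.cat([p_k, h], dim=0)                       # prompt-augmented keys
    v = torch.cat([p_v, h], dim=0)                       # prompt-augmented values
    out, _ = msa(query=h, key=k, value=v, need_weights=False)
    return out                                            # same length as the input h

if __name__ == "__main__":
    l, m = 196, 768
    msa = nn.MultiheadAttention(embed_dim=m, num_heads=12)
    h = torch.randn(l, 1, m)                              # (seq, batch, dim) layout
    prompt = torch.randn(8, 1, m)                         # lightweight trainable prompt
    print(prompted_msa(msa, h, prompt).shape)             # torch.Size([196, 1, 768])
```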
The domain-specific prompt $p_s^t \in \mathbb{R}^{l \times m}$ can be attached in a similar way to learn the distinctive features:

$$h_j = f_{MSA}^{(j)}\left(h_{j-1}^q, [p_s^k; h_{j-1}^k], [p_s^v; h_{j-1}^v]\right), \quad (2)$$

where $f_{MSA}^{(j)}$ and $h_j$ denote the j-th MSA layer and its outputs, respectively, and $p_s^k, p_s^v \in \mathbb{R}^{l/2 \times m}$ are split from $p_s^t$.

As the domain identity is not available during inference, we additionally equip each domain-specific prompt with a distinguishable key value $k_t$ to help pair each test image with a matching domain-specific prompt. We decompose each image $x_i \in D_t$ into the amplitude spectrum $\mathcal{A}(x_i)$ and phase spectrum $\mathcal{C}(x_i)$ in the frequency space by the fast Fourier transform $\Phi_{FFT}$. Since the amplitude captures the low-level statistics (e.g., style, appearance) while the phase extracts the high-level features (e.g., content, shape) (Jiang, Wang, and Dou 2022; Liu et al. 2021), we implement the key value $k_t$ as the average amplitude spectrum of the images in $D_t$:

$$k_t = \frac{1}{N_t} \sum_{i=1}^{N_t} \mathcal{A}(x_i), \quad (3)$$

where $N_t$ denotes the total number of training samples in $D_t$.

In the first time step (t = 1), we simultaneously optimize the domain-specific prompt $p_s^1$, the domain-invariant prompt $p_I^{(1)}$, and the classification layer $f_\phi$ upon the frozen basic feature extractor $f_b$ with the training samples $(x, y) \in D_1$:

$$\min_{f_\phi, p_I^{(1)}, p_s^1} \mathcal{L}_{ce}\left(f_\phi\left(f_b\left(x; p_I^{(1)}, p_s^1\right)\right), y\right), \quad (4)$$

where $\mathcal{L}_{ce}$ denotes the cross-entropy loss. Before moving to the next time step, we store all prompts and the associated keys in the prompt bank $P_1 = \{p_I^{(1)}, (p_s^1, k_1)\}$. The domain-specific prompt and its key value are isolated from further changes, while the domain-invariant prompt is passed to the next time step to iteratively evolve.

For each subsequent time step t (t > 1), we keep the classification model (including $f_\phi$ and $f_b$) frozen, and optimize the domain-specific prompt $p_s^t$ and the domain-invariant prompt $p_I^{(t)}$ asynchronously in separate steps. We first learn an independent domain-specific prompt $p_s^t$ from scratch with the old domain-invariant prompt $p_I^{(t-1)}$ fixed:

$$\min_{p_s^t} \mathcal{L}_{ce}\left(f_\phi\left(f_b\left(x; p_I^{(t-1)}, p_s^t\right)\right), y\right), \quad (5)$$

where $(x, y) \in D_t$. Then we update the domain-invariant prompt by style-augmented prompt refining, which is thoroughly described in the following subsection.
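A minimal sketch of the key computation in Eq. (3): the key of a domain-specific prompt is simply the average FFT amplitude spectrum over that domain's training images. The function names and batch layout are illustrative.

```python
import torch

def amplitude(x: torch.Tensor) -> torch.Tensor:
    """Amplitude spectrum A(x) of an image batch (B, C, H, W) via the 2D FFT."""
    return torch.fft.fft2(x, dim=(-2, -1)).abs()

def domain_key(images: torch.Tensor) -> torch.Tensor:
    """Eq. (3): the key k_t is the average amplitude spectrum over the domain."""
    return amplitude(images).mean(dim=0)    # (C, H, W)

if __name__ == "__main__":
    imgs = torch.rand(16, 3, 224, 224)      # stand-in for the training images of D_t
    k_t = domain_key(imgs)
    print(k_t.shape)                         # torch.Size([3, 224, 224])
```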
Style-augmented Prompt Refining
A straightforward way to update the domain-invariant prompt is to finetune the previous one $p_I^{(t-1)}$ along with the t-th domain-specific prompt $p_s^t$. However, this would easily make the latest domain-invariant prompt incompatible with the early domain-specific prompts recorded in the prompt bank. To address this issue, we build a graph with all existing prompts and feed it into a graph attention network (GAT) (Veličković et al. 2017) to guide the domain-invariant prompt to explore the co-existing and generic features.

GAT setup. We flatten all existing prompts into long vectors $P^t = \{p_I^{(t-1)}, p_s^1, ..., p_s^t\}$ and take them as the nodes of the graph. The graph attention network consists of a learnable linear transformation $W \in \mathbb{R}^{L \times L}$, where $L = l \times m$, and a trainable single-layer feed-forward neural network $a$ that produces the attention coefficients $e$ between two nodes. Particularly, for the node of the domain-invariant prompt that we are most concerned with, the attention coefficients for its i-th neighbor, $e_{IS}^i$, and for itself, $e_{II}$, are computed as

$$e_{IS}^i = a\left(W p_I^{(t-1)}, W p_s^i\right), \quad e_{II} = a\left(W p_I^{(t-1)}, W p_I^{(t-1)}\right), \quad (6)$$

which indicates the correlation and relevance between the domain-invariant prompt and each prompt in the bank. We normalize the above coefficients as $\alpha_{IS}^i$ and $\alpha_{II}$, and use them to adjust the participation of each node in the knowledge aggregation to the domain-invariant prompt:

$$\alpha_{IS}^i = \frac{\exp(e_{IS}^i)}{\exp(e_{II}) + \sum_{j=1}^{t} \exp(e_{IS}^j)}, \quad \alpha_{II} = \frac{\exp(e_{II})}{\exp(e_{II}) + \sum_{j=1}^{t} \exp(e_{IS}^j)}. \quad (7)$$

The output for the domain-invariant prompt is then

$$p_I^{(t)} = f_{GAT}(P^t) = \sum_{i=1}^{t} \alpha_{IS}^i W p_s^i + \alpha_{II} W p_I^{(t-1)}, \quad (8)$$

where $f_{GAT}$ denotes the graph attention network. By simply reshaping $p_I^{(t)}$ back into the original prompt size, we obtain the updated domain-invariant prompt.
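The module below sketches a single-head version of the prompt graph attention in Eqs. (6)-(8): a shared linear map $W$ and a one-layer scorer $a$ yield coefficients between the flattened domain-invariant prompt and every stored domain-specific prompt, and the softmax-normalized coefficients weight the aggregation. Multi-head details of the original GAT are omitted, so this is a simplified reading rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class PromptGAT(nn.Module):
    """Minimal single-head sketch of the prompt graph attention (Eqs. 6-8)."""

    def __init__(self, L: int):
        super().__init__()
        self.W = nn.Linear(L, L, bias=False)   # shared linear transformation W
        self.a = nn.Linear(2 * L, 1)           # scorer a(Wp_I, Wp_i), Eq. (6)

    def forward(self, p_inv: torch.Tensor, P_spec: torch.Tensor) -> torch.Tensor:
        # p_inv: (L,) flattened DIP; P_spec: (t, L) flattened DSPs in the bank.
        w_inv, w_spec = self.W(p_inv), self.W(P_spec)
        nodes = torch.cat([w_inv.unsqueeze(0), w_spec], dim=0)          # (t+1, L)
        e = self.a(torch.cat([w_inv.expand_as(nodes), nodes], dim=1))   # Eq. (6), (t+1, 1)
        alpha = torch.softmax(e.squeeze(1), dim=0)                      # Eq. (7)
        return (alpha.unsqueeze(1) * nodes).sum(dim=0)                  # Eq. (8), (L,)

if __name__ == "__main__":
    L, t = 32, 4
    gat = PromptGAT(L)
    p_new = gat(torch.randn(L), torch.randn(t, L))
    print(p_new.shape)   # torch.Size([32]); reshape back to (l, m) afterwards
```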
Style-augmented GAT training. We augment the style diversity met in GAT training to further improve the generalization potential of the domain-invariant prompt. As aforementioned, for each image, the amplitude spectrum reflects the low-level features like style or appearance, while the phase spectrum presents the high-level content like shape. Given an image-label pair of the current domain $(x_i, y_i) \in D_t$, we reserve the phase spectrum $\mathcal{C}(x_i)$ to keep its semantic content, but substitute its amplitude $\mathcal{A}(x_i)$ with a new one $\mathcal{A}'(x_i)$ to modulate its style. As shown in Fig. 2, rather than blindly guessing feasible appearances, we use a set of random scalars $\{\lambda_i^1, ..., \lambda_i^t\}$ to interpolate the average amplitudes of past domains (i.e., the keys in the prompt bank) and generate the new amplitude $\mathcal{A}'(x_i)$ as

$$\mathcal{A}'(x_i) = \lambda_i^1 k_1 + \lambda_i^2 k_2 + \cdots + \lambda_i^t k_t, \quad (9)$$

where $\sum_{j=1}^{t} \lambda_i^j = 1$. We then perform the inverse fast Fourier transform $\Phi_{FFT}^{-1}$ to remap the phase $\mathcal{C}(x_i)$ and amplitude $\mathcal{A}'(x_i)$ into the image space and generate the style-augmented image $x_i'$ as follows:

$$x_i' = \Phi_{FFT}^{-1}\left(\mathcal{A}'(x_i), \mathcal{C}(x_i)\right). \quad (10)$$

We pair the style-augmented image $x_i'$ with its original label $y_i$ to form a set $D_t^{sa} = \{(x_i', y_i), i \in [1, N_t]\}$, which is used for GAT training along with the current domain $D_t$. For any data $(x, y) \in D_t \cup D_t^{sa}$, we select the most compatible domain-specific prompt $p_s^{j^*}$ by similarity ranking:

$$j^* = \arg\max_j \gamma\left(\mathcal{A}(x), k_j\right), \quad (11)$$

where $j \in [1, t]$ and $\gamma$ denotes the cosine similarity. Then, we force the GAT to produce a domain-invariant prompt that can satisfyingly tackle images of any augmented style and work smoothly with any domain-specific prompt in the bank, with the following objective:

$$\min_{f_{GAT}} \mathcal{L}_{ce}\left(f_\phi\left(f_b\left(x; f_{GAT}(P^t), p_s^{j^*}\right)\right), y\right). \quad (12)$$

The overall training scheme is presented in Algorithm 1. During inference, we use the latest domain-invariant prompt for all test data and pair each test sample with its most compatible domain-specific prompt by Eq. (11).
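The following sketch mirrors Eqs. (9)-(10): the phase of an image is kept while its amplitude is replaced by a random convex combination of the stored domain keys. Sampling the mixing weights from a Dirichlet distribution is our own choice for enforcing $\sum_j \lambda_i^j = 1$; the paper only requires the scalars to sum to one.

```python
import torch

def style_augment(x: torch.Tensor, keys: list) -> torch.Tensor:
    """Eqs. (9)-(10): keep the phase spectrum of image x (C, H, W) and swap
    its amplitude for a random convex mix of the domain keys k_1..k_t."""
    spec = torch.fft.fft2(x, dim=(-2, -1))
    phase = torch.angle(spec)                             # content C(x)
    lam = torch.distributions.Dirichlet(torch.ones(len(keys))).sample()
    amp_new = sum(l * k for l, k in zip(lam, keys))       # Eq. (9), weights sum to 1
    spec_new = amp_new * torch.exp(1j * phase)            # recombine amplitude and phase
    return torch.fft.ifft2(spec_new, dim=(-2, -1)).real   # Eq. (10)

if __name__ == "__main__":
    x = torch.rand(3, 224, 224)
    keys = [torch.rand(3, 224, 224) for _ in range(3)]    # average amplitudes k_1..k_t
    print(style_augment(x, keys).shape)                   # torch.Size([3, 224, 224])
```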
Experiment
Dataset and Experiment Settings
Breast cancer metastasis classification. We adopted the Camelyon17 dataset (Bandi et al. 2018), which provides labels for the presence or absence of breast cancer. The data was collected from 5 medical centers with different stains. We closely followed the domain split of prior works (Jiang, Wang, and Dou 2022) and took the samples of the same center as one domain. All domains are sequentially delivered one by one in ascending order. We set the total number of time steps to 4, where Domain 4 is the currently arrived one, Domains 1-3 are previously delivered, and Domain 5 remains unseen to the model.

Epithelium-stroma tissue classification. We utilized four public datasets, including 615 images from VGH (Beck et al. 2011) (Domain 1), 671 images from NKI (Beck et al. 2011) (Domain 2), 1296 patches from IHC (Linder et al. 2012) (Domain 3), and 26,437 patches from NCH (Kather et al. 2019) (Domain 4). Each of them comes from a different institution with different H&E stains. Here, we set the total number of time steps to 3, where Domain 3 is the currently arrived one, Domains 1 and 2 are previously delivered, and Domain 4 remains unseen during model training.

Implementation details. We adopted ViT-B/16 (Dosovitskiy et al. 2020) as our feature extractor $f_b$. We employ the Adam optimizer with a learning rate of 7.5e-4 in the first time step and a learning rate of 1e-4 for the subsequent time steps.

[Figure 2: Illustration of generating style-augmented data.]

Algorithm 1: Training Procedures
Output: The model $f_\phi(f_b(\cdot))$ and prompt bank $P$.
while incrementally learning from t = 1 to T do
    if t == 1 then
        Load pre-trained weights into $f_b$ and freeze it.
        Optimize $p_I^{(1)}$, $p_s^1$, $f_\phi$ with $D_1$ by Eq. (4).
        Calculate the key $k_1$ for $p_s^1$ by Eq. (3).
        Store all prompts in the bank $P_1$.
    else
        Freeze $f_b$, $f_\phi$ and train $p_s^t$ with $D_t$ by Eq. (5).
        Generate style-augmented data $D_t^{sa}$ by Eq. (9).
        Update $p_I^{(t)}$ given $p_s^t$ by GAT as in Eq. (12).
        Compute the key $k_t$ by Eq. (3) and append $[k_t, p_s^t]$ to the prompt bank $P_t$.
        Overwrite the DIP with $p_I^{(t)}$ in the prompt bank $P_t$.
    end
    Pass $f_b$, $f_\phi$ and $P_t$ to the t+1 step.
end
Return $f_\phi(f_b(\cdot))$, $P \leftarrow P_T$.

Evaluation metrics. We employed classification accuracy (Acc) as the base evaluation metric. We first measure the model performance at the last incremental learning step on all domains, including previous domains, the current domain, and unseen domains, to extensively evaluate the ability to alleviate forgetting and to generalize. We further employ three more metrics to comprehensively evaluate the overall performance over the entire incremental learning span. Backward transfer (BWT) evaluates the model stability, i.e., the ability to alleviate catastrophic forgetting, and is computed as

$$BWT = \frac{2}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} (R_{i,j} - R_{j,j}),$$

where $R_{i,j}$ denotes the Acc of the model trained sequentially from the 1st domain to the i-th domain and tested on the j-th domain, and $N$ denotes the total number of training domains. Incremental learning (IL) provides an overall measurement on all seen domains and measures both the stability and plasticity of the model:

$$IL = \frac{2}{N(N+1)} \sum_{i=1}^{N} \sum_{j=1}^{i} R_{i,j}.$$

Forward transfer on unseen domains (FTU) measures the generalization ability of the model:

$$FTU = \frac{2}{N(N-1)} \sum_{i=1}^{N} \sum_{j=i+1}^{N} R_{i,j}.$$
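For reference, the small helper below computes the three metrics from the accuracy matrix $R$ exactly as defined above; it is a utility of our own, not code from the paper.

```python
import numpy as np

def dil_metrics(R: np.ndarray):
    """BWT, IL, FTU from the accuracy matrix R, where R[i, j] is the accuracy
    on domain j of the model trained through domain i (0-indexed)."""
    N = R.shape[0]
    bwt = 2 * sum(R[i, j] - R[j, j] for i in range(1, N) for j in range(i)) / (N * (N - 1))
    il = 2 * sum(R[i, j] for i in range(N) for j in range(i + 1)) / (N * (N + 1))
    ftu = 2 * sum(R[i, j] for i in range(N) for j in range(i + 1, N)) / (N * (N - 1))
    return bwt, il, ftu

if __name__ == "__main__":
    R = np.array([[0.95, 0.60, 0.55],     # hypothetical accuracies after step 1
                  [0.90, 0.94, 0.62],     # ... after step 2
                  [0.88, 0.91, 0.93]])    # ... after step 3
    print(dil_metrics(R))
```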
We presented the results of epithelium-stroma tissue classification in Table 2. Our framework outperformed the competing approaches on most of the evaluation metrics (7 out of 8). For the first delivered domain (Domain 1), which commonly suffers most from catastrophic forgetting, our framework achieved 5.32% higher accuracy than S-Prompt and 1.67% higher than DualPrompt, demonstrating the effectiveness of our decoupled prompt tuning. Regarding generalization ability, our approach yielded 2.42% increases in the unseen domain (Domain 4) and 1.77% increases in FTU over the competing methods.

Methods | Acc D1 ↑ | Acc D2 ↑ | Acc D3 ↑ | Acc D4 ↑ | Avg ↑ | IL ↑ | BWT ↑ | FTU ↑
Individual Training | 94.09 (±0.56) | 91.84 (±0.61) | 93.42 (±0.83) | 96.50 (±0.69) | 93.96 (±0.60) | 78.79 (±0.64) | -28.87 (±0.35) | 67.74 (±2.19)
Joint Training (Upper bound) | 94.33 (±0.48) | 92.61 (±0.65) | 93.67 (±0.71) | 85.82 (±1.86) | 91.61 (±0.84) | 93.56 (±0.52) | 1.20 (±0.04) | 74.91 (±0.93)
Sequential Finetune | 62.77 (±1.19) | 70.14 (±2.06) | 93.58 (±0.95) | 69.06 (±1.42) | 73.89 (±1.27) | 82.84 (±1.44) | -20.86 (±1.08) | 66.02 (±1.79)
LwF | 77.54 (±1.60) | 66.32 (±2.47) | 93.52 (±1.01) | 73.61 (±1.99) | 77.75 (±1.32) | 85.12 (±1.17) | -15.82 (±0.91) | 73.93 (±1.85)
EWC | 71.22 (±2.04) | 77.90 (±1.70) | 93.61 (±0.84) | 68.12 (±2.66) | 77.71 (±1.82) | 87.85 (±1.79) | -15.01 (±0.38) | 72.32 (±1.88)
SI | 66.58 (±0.47) | 72.51 (±0.21) | 93.55 (±1.02) | 77.33 (±1.55) | 77.50 (±0.68) | 82.64 (±0.64) | -21.99 (±0.82) | 71.51 (±0.73)
DGR | 80.54 (±1.30) | 82.35 (±1.77) | 93.67 (±0.69) | 86.72 (±1.48) | 85.81 (±0.74) | 88.80 (±0.60) | -9.72 (±0.18) | 78.29 (±1.95)
Orc-MML | 85.61 (±0.93) | 84.93 (±0.75) | 83.76 (±0.67) | 72.58 (±0.99) | 81.72 (±0.72) | 88.15 (±0.71) | -6.82 (±0.05) | 69.19 (±1.26)
S-Prompt | 85.29 (±0.83) | 88.42 (±1.94) | 94.33 (±0.75) | 73.03 (±1.76) | 85.27 (±0.94) | 89.61 (±1.04) | -4.19 (±0.06) | 74.62 (±1.01)
DualPrompt | 88.94 (±0.58) | 88.05 (±0.47) | 93.63 (±0.17) | 83.16 (±0.51) | 88.45 (±0.33) | 91.14 (±0.77) | -4.34 (±0.03) | 75.11 (±0.42)
Ours | 90.61 (±1.00) | 88.47 (±0.39) | 93.84 (±1.03) | 89.14 (±0.97) | 90.52 (±0.55) | 92.17 (±0.49) | -2.19 (±0.05) | 80.06 (±0.84)

Table 2: The comparison results of epithelium-stroma classification in the final time step (the 2nd-6th columns) and over the entire domain incremental learning process (the last three columns). We have highlighted the best DIL results in bold.

Analysis of the Key Components
The tradeoff between model performance and memory efficiency We evaluated the memory efficiency with the metric of model size efficiency (MS) (Díaz-Rodríguez et al. 2018), which measures the additional storage used at time step t compared to the usage in the first time step, computed as
MS = \min\left(1, \frac{1}{N} \sum_{i=1}^{N} \frac{\theta_1}{\theta_i}\right),
where θ_1 and θ_i denote the memory spaces allocated to store all necessary modules for the next round of incremental learning in the 1st and i-th time steps, respectively. We also calculated the absolute value of the average additional memory storage (AAMS) over time as
AAMS = \frac{1}{N} \sum_{i=1}^{N} |\theta_i - \theta_1|.
As presented in Table 3, the sequential finetune (Seq-FT) method and the regularization-based approaches barely require any additional module to support learning in the next time step. However, these methods still leave large room for improvement in alleviating model forgetting (see Acc and BWT) and enhancing model generalization (see FTU). The replay-based approaches and parameter isolation approaches, such as Orc-MML (Orc-M), normally consume extra memory space to trade for model performance; a short sketch of how the two memory metrics are computed follows.
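A minimal sketch of the two memory metrics (again our own illustration, not the authors' code):

import numpy as np

def memory_metrics(theta):
    # theta[i]: memory (in MB) allocated at time step i+1 to store all
    # modules needed for the next round of incremental learning
    theta = np.asarray(theta, dtype=float)
    ms = min(1.0, float(np.mean(theta[0] / theta)))   # model size efficiency
    aams = float(np.mean(np.abs(theta - theta[0])))   # avg additional storage
    return ms, aams

For instance, a prompt-based method that only appends a small prompt per step keeps θ_i ≈ θ_1, so MS stays close to 1 and AAMS close to 0, as the numbers in Table 3 reflect.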
Among them, the prompt-based approaches, i.e., S-Prompt (S-P), DualPrompt (Dual-P), and ours, are the top-3 memory-efficient ones. With limited additional memory space (around 0.57 MB), our approach brings significant performance gains of 4.30% in average Acc, 1.84% in BWT, and 1.53% in FTU over prior prompt-based approaches, suggesting that it is the most desirable approach when considering the trade-off between model accuracy and memory consumption. Regarding training efficiency, our method used 0.6 h longer training time than other prompt-based methods on average, which is generally affordable in most cases.

Methods | A-Acc ↑ | BWT ↑ | FTU ↑ | AAMS ↓ | MS ↑
Seq-FT | 64.91 | -37.13 | 63.69 | 0 | 1
LwF | 76.12 | -23.44 | 69.18 | 0 | 1
EWC | 71.12 | -29.32 | 61.02 | 0 | 1
SI | 74.03 | -24.15 | 62.51 | 0 | 1
DGR | 82.52 | -15.67 | 70.64 | 399.61 | 0.56
Orc-M | 84.13 | -6.44 | 75.45 | 460.50 | 0.52
S-P | 87.31 | -3.46 | 80.64 | 0.23 | 0.99
Dual-P | 86.46 | -7.60 | 77.57 | 0.57 | 0.99
Ours | 91.61 | -1.62 | 82.17 | 0.57 | 0.99

Table 3: Analysis of model accuracy and memory efficiency. Here, we report the average Acc over all domains (A-Acc) in the last time step, and employ model size efficiency (MS) and the average additional memory storage (AAMS) [MB] to measure memory efficiency.

The effectiveness of decoupled prompt tuning We visualized the output feature embeddings after performing decoupled prompt tuning in the breast cancer classification task via t-SNE in Fig. 3. For the embeddings of the same category (e.g., Tumor or Normal), the features within the same domain are grouped into a cluster and well separated from other domains, suggesting that the learned domain-specific prompts effectively capture the domain-distinctive characteristics. For the embeddings within the same domain, there is a clear decision boundary between the features of normal tissues and tumor tissues, indicating that our model can well distinguish them from each other.

Figure 3: The t-SNE visualization of the feature embeddings after applying decoupled prompt tuning.

The study of key operations in style-augmented prompt refining To extensively investigate the effectiveness of style-augmented prompt refining, we experimented with several settings, including (a) using DIP individually (first row of Table 4), (b) using DSP individually (second row), (c) using both DIP and DSP but updating DIP via fine-tuning (third row), and (d) using both DIP and DSP and refining DIP via GAT with the current domain D_t only (fourth row), and compared them with ours, i.e., using both DIP and DSP and refining DIP via GAT with the style-augmented training data D_t^{sa} ∪ D_t (the last row). We reported the results on breast cancer classification in Table 4. Compared to simply finetuning DIP, refining with GAT explores more highly correlated and domain-generic representations across domains, leading to 4.10% increases in average Acc and 2.55% increases in FTU. Further refining with the style-augmented data not only keeps the updated DIP compatible with early-recorded DSPs but also lets the model be prepared early for unseen styles during inference, thus bringing further improvements of 1.27% and 2.13% in average Acc and FTU, respectively.

DIP | DSP | GAT | SA | A-Acc ↑ | BWT ↑ | FTU ↑
✓ | – | – | – | 85.94 | -8.86 | 77.35
– | ✓ | – | – | 86.31 | -7.29 | 75.82
✓ | ✓ | – | – | 86.24 | -7.16 | 77.49
✓ | ✓ | ✓ | – | 90.34 | -3.19 | 80.04
✓ | ✓ | ✓ | ✓ | 91.61 | -1.62 | 82.17

Table 4: Analysis of the key operations in style-augmented prompt refining.
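The style-augmentation operation itself (Fig. 2) admits a compact implementation. The sketch below is our own hedged reconstruction, not the authors' code: it assumes the common amplitude-interpolation recipe suggested by the figure (the amplitude spectrum mainly carries style, the phase spectrum mainly carries content), and the function and argument names are hypothetical:

import numpy as np

def style_augment(img, amp_proto, lam):
    # img: (H, W, C) float image; amp_proto: stored amplitude spectrum of a
    # reference domain, same shape; lam: mixing ratio in [0, 1]
    fft = np.fft.fft2(img, axes=(0, 1))
    amp, phase = np.abs(fft), np.angle(fft)         # style / content split
    amp_mix = (1.0 - lam) * amp + lam * amp_proto   # interpolate the style
    fft_mix = amp_mix * np.exp(1j * phase)          # keep the original phase
    return np.real(np.fft.ifft2(fft_mix, axes=(0, 1)))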
Conclusion
We presented a memory-efficient prompt tuning framework to incrementally evolve the histology classification model in a more generalizable and robust direction. For each incoming domain, we perform decoupled prompt tuning upon the initial classification model with two lightweight prompts, efficiently acquiring the latest domain knowledge without huge memory costs. We customize a domain-specific prompt to tackle the distinctive characteristics of each domain, while maintaining a domain-invariant prompt shared across all domains to progressively explore the common content embedding. We additionally conduct style-augmented prompt refining on the domain-invariant prompt to continually investigate domain-generic representations across domains and cultivate its generalization potential. All prompts are stored in a prompt bank, where the domain-specific prompts are isolated from further changes to prevent catastrophic forgetting of past domains, while the domain-invariant prompt is passed on to the next time step to continually evolve. We extensively evaluated our framework on two histology classification tasks, where our approach outperformed the comparison methods with higher accuracy and more satisfying memory efficiency.

Acknowledgements
The work described in this paper was supported in part by the following grants: the Research Grants Council of the Hong Kong SAR, China (Project No. T45-401/22-N), the Hong Kong Innovation and Technology Fund (Project No. MHP/085/21), and the National Natural Science Fund (62201483).

References
Aljundi, R.; Babiloni, F.; Elhoseiny, M.; Rohrbach, M.; and Tuytelaars, T. 2018. Memory aware synapses: Learning what (not) to forget. In ECCV, 139-154.
Bandi, P.; Geessink, O.; Manson, Q.; Van Dijk, M.; Balkenhol, M.; Hermsen, M.; et al. 2018. From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge. IEEE Transactions on Medical Imaging, 38(2): 550-560.
Beck, A. H.; Sangoi, A. R.; Leung, S.; Marinelli, R. J.; Nielsen, T. O.; Van De Vijver, M. J.; et al. 2011. Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Science Translational Medicine, 3(108): 108ra113.
Bejnordi, B. E.; Veta, M.; Van Diest, P. J.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; et al. 2017. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA, 318(22): 2199-2210.
Derakhshani, M. M.; Najdenkoska, I.; van Sonsbeek, T.; Zhen, X.; Mahapatra, D.; Worring, M.; et al. 2022. LifeLonger: A Benchmark for Continual Disease Classification. In MICCAI, 314-324. Springer.
Díaz-Rodríguez, N.; Lomonaco, V.; Filliat, D.; and Maltoni, D. 2018. Don't forget, there is more than forgetting: new metrics for Continual Learning. arXiv preprint arXiv:1810.13166.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Gonzalez, C.; Sakas, G.; and Mukhopadhyay, A. 2020. What is Wrong with Continual Learning in Medical Image Segmentation? arXiv preprint arXiv:2010.11008.
Jiang, M.; Wang, Z.; and Dou, Q. 2022. HarmoFL: Harmonizing local and global drifts in federated learning on heterogeneous medical images. In AAAI, volume 36, 1087-1095.
Kather, J. N.; Krisam, J.; Charoentong, P.; Luedde, T.; Herpel, E.; Weis, C.-A.; et al. 2019.
Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Medicine, 16(1): e1002730.
Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. PNAS, 114(13): 3521-3526.
Li, K.; Yu, L.; and Heng, P.-A. 2022. Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening. IEEE Transactions on Medical Imaging.
Li, X.; Jiang, M.; Zhang, X.; Kamp, M.; and Dou, Q. 2020. FedBN: Federated Learning on Non-IID Features via Local Batch Normalization. In ICLR.
Li, X.; Zhou, Y.; Wu, T.; Socher, R.; and Xiong, C. 2019. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In ICML, 3925-3934. PMLR.
Li, Z.; and Hoiem, D. 2017. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12): 2935-2947.
Lin, H.; Chen, H.; Graham, S.; Dou, Q.; Rajpoot, N.; and Heng, P.-A. 2019. Fast ScanNet: Fast and dense analysis of multi-gigapixel whole-slide images for cancer metastasis detection. IEEE Transactions on Medical Imaging, 38(8): 1948-1958.
Linder, N.; Konsti, J.; Turkki, R.; Rahtu, E.; Lundin, M.; Nordling, S.; et al. 2012. Identification of tumor epithelium and stroma in tissue microarrays using texture analysis. Diagnostic Pathology, 7: 1-11.
Liu, Q.; Chen, C.; Qin, J.; Dou, Q.; and Heng, P.-A. 2021. FedDG: Federated domain generalization on medical image segmentation via episodic learning in continuous frequency space. In CVPR, 1013-1023.
Luo, Y.; Yin, L.; Bai, W.; and Mao, K. 2020. An appraisal of incremental learning methods. Entropy, 22(11): 1190.
Mai, Z.; Li, R.; Jeong, J.; Quispe, D.; Kim, H.; and Sanner, S. 2022. Online continual learning in image classification: An empirical survey. Neurocomputing, 469: 28-51.
Miao, Z.; Wang, Z.; Chen, W.; and Qiu, Q. 2022. Continual learning with filter atom swapping. In ICLR.
Shin, H.; Lee, J. K.; Kim, J.; and Kim, J. 2017. Continual learning with deep generative replay. NeurIPS, 30.
Su, Y.; Wang, X.; Qin, Y.; Chan, C.-M.; Lin, Y.; Wang, H.; et al. 2022. On transferability of prompt tuning for natural language processing. In NAACL, 3949-3969.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Wang, Y.; Huang, Z.; and Hong, X. 2022. S-Prompts learning with pre-trained transformers: An Occam's razor for domain incremental learning. NeurIPS, 35: 5682-5695.
Wang, Z.; Zhang, Z.; Ebrahimi, S.; Sun, R.; Zhang, H.; Lee, C.-Y.; et al. 2022a. DualPrompt: Complementary prompting for rehearsal-free continual learning. In ECCV, 631-648. Springer.
Wang, Z.; Zhang, Z.; Lee, C.-Y.; Zhang, H.; Sun, R.; Ren, X.; et al. 2022b. Learning to prompt for continual learning. In CVPR, 139-149.
Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. In ICML, 3987-3995. PMLR.
Zhao, Z.; Lin, H.; Chen, H.; and Heng, P.-A. 2019. PFA-ScanNet: Pyramidal feature aggregation with synergistic learning for breast cancer metastasis analysis. In MICCAI, 586-594. Springer.
SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection
Yun Zhu1, Le Hui2, Yaqi Shen1, Jin Xie1*
1 PCA Lab, School of Computer Science and Engineering, Nanjing University of Science and Technology, China
2 Shaanxi Key Laboratory of Information Acquisition and Processing, Northwestern Polytechnical University, China
[email protected], [email protected], [email protected], [email protected]
*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Current 3D object detection methods for indoor scenes mainly follow the voting-and-grouping strategy to generate proposals. However, most methods utilize instance-agnostic groupings, such as ball query, leading to inconsistent semantic information and inaccurate regression of the proposals. To this end, we propose a novel superpoint grouping network for indoor anchor-free one-stage 3D object detection. Specifically, we first adopt an unsupervised manner to partition raw point clouds into superpoints, i.e., areas with semantic consistency and spatial similarity. Then, we design a geometry-aware voting module that adapts to the centerness in anchor-free detection by constraining the spatial relationship between superpoints and object centers. Next, we present a superpoint-based grouping module to explore the consistent representation within proposals. This module includes a superpoint attention layer to learn feature interaction between neighboring superpoints, and a superpoint-voxel fusion layer to propagate the superpoint-level information to the voxel level. Finally, we employ an effective multiple matching to capitalize on the dynamic receptive fields of superpoint-based proposals during training. Experimental results demonstrate that our method achieves state-of-the-art performance on the ScanNet V2, SUN RGB-D, and S3DIS datasets for indoor one-stage 3D object detection. Source code is available at https://github.com/zyrant/SPGroup3D.

Introduction
As one of the basic tasks of 3D scene understanding, the goal of 3D object detection is to estimate the oriented 3D bounding boxes and semantic labels of objects in point clouds. It has been used in many application scenarios, such as autonomous driving, augmented reality, and robotics. Since indoor scenarios have more complex geometry and occlusion than outdoor scenarios (Chen et al. 2022; He et al. 2022; Wu et al. 2023; Feng et al. 2023), it remains challenging to cluster the scenes and generate accurate proposals.

Figure 1: Different types of grouping strategy: (a) semantic-agnostic grouping, which produces mis-semantic votes; (b) semantic-aware grouping (Wang et al. 2022a), which produces mis-instance votes; (c) our superpoint-based grouping. The proposal corresponding to the table is shown in red, and the two chairs in blue and green, respectively; the legend distinguishes vote and seed points for the table and the two chairs. For superpoint-based grouping, we represent different superpoints with different colors, and the features within the same superpoint are grouped.

Previous state-of-the-art 3D object detection methods mainly follow the bottom-up paradigm, involving two key components: voting and grouping. These components are used to learn the point-wise center offsets and to aggregate the points that vote to an instance-agnostic local region, respectively. Conventional voting tends to push all proposals close to the object centers. This may not be suitable for FCOS-like (Tian et al.
2019; Rukhovich, Vorontsova, and Konushin 2022) detection methods, since the distances from the proposals to the object centers play a crucial role in determining the quality of proposals. Besides, the commonly used grouping modules can be categorized into two types: semantic-agnostic and semantic-aware. The former, such as VoteNet and its subsequent works (Qi et al. 2019; Cheng et al. 2021; Liang, An, and Ma 2022; Sun et al. 2022), groups the scenes based on the distance between coordinates. However, this approach fails in clustering indoor scenes when instances are close but belong to different categories, as shown in Fig. 1(a). The latter, such as CAGroup3D (Wang et al. 2022a), which additionally considers semantics, still suffers from a similar issue when instances of the same category are close to each other, as shown in Fig. 1(b). In summary, conventional voting and instance-agnostic grouping introduce noise and outliers into the proposals, leading to performance bottlenecks in bottom-up methods.

In this paper, we propose a superpoint grouping network for indoor anchor-free one-stage 3D object detection (SPGroup3D) to solve these problems. Specifically, we first use a sparse convolution-based backbone to obtain the voxels, which include coordinates and features. Then, we introduce a geometry-aware voting module to adapt to the concept of centerness in anchor-free detection methods (Tian et al. 2019; Rukhovich, Vorontsova, and Konushin 2022). For each voxel, by merging the seed parts and the vote parts, we ensure the relative geometric relationships between proposals and object centers, making it easier for the model to filter out low-quality proposals. Consequently, we address the grouping problem by introducing superpoints (Landrieu and Simonovsky 2018). As shown in Fig. 1(c), a superpoint is a natural instance-aware local unit, consisting of adjacent points sharing similar semantics and spatiality. Based on superpoints, we construct a superpoint-based grouping module. Superpoint-based grouping includes superpoint attention and superpoint-voxel fusion to facilitate superpoint-to-superpoint and superpoint-to-voxel information interactions, respectively. Superpoint attention is proposed to enhance the interaction between non-overlapping superpoints, adaptively exploring relevant superpoint features within their neighborhoods. The following superpoint-voxel fusion, based on sparse convolution, is used for the interaction of voxels and superpoints. Finally, we employ multiple matching to assess the discrepancy between each superpoint-based proposal and the ground truth, and to select positive samples during training. Extensive experiments indicate that our method achieves state-of-the-art performance on the indoor one-stage 3D object detection task on three datasets, in terms of mAP@0.25 on ScanNet V2 (+1.1), SUN RGB-D (+1.2), and S3DIS (+2.5). The main contributions are listed as follows:
• We introduce a geometry-aware voting module, which preserves the relative geometry of superpoints in coordinate space, to adapt to anchor-free detection.
• We design a superpoint-based grouping module including superpoint attention and superpoint-voxel fusion to enable superpoint-to-superpoint and superpoint-to-voxel feature interactions, respectively.
• We present a multiple matching strategy, which can discriminate between positive and negative samples of superpoint-based proposals during training.
Related Work
Top-down 3D object detection. Top-down models are primarily applied in outdoor autonomous driving scenarios. VoxelNet (Zhou and Tuzel 2018) is a pioneering work that partitions the raw point clouds into voxels and realizes end-to-end 3D object detection in this field. PV-RCNN (Shi et al. 2020) argues that voxel quantization may lead to information loss and proposes a second stage for incorporating fine-grained features at the point level. For indoor 3D object detection, FCAF3D (Rukhovich, Vorontsova, and Konushin 2022) introduces the first anchor-free model with fully sparse convolution and further proposes an improved version, TR3D (Rukhovich, Vorontsova, and Konushin 2023).

Bottom-up 3D object detection. Bottom-up methods, inspired by PointNet (Qi et al. 2017a) and its variant (Qi et al. 2017b), have gained wide attention for their ability to predict 3D bounding boxes from point clouds. VoteNet (Qi et al. 2019) adopts deep Hough voting to group features and generate proposals. Building upon this framework, MLCVNet (Xie et al. 2020) adopts multi-level context information to explore relationships between objects. H3DNet (Zhang et al. 2020) extends the key-point prediction of centers in VoteNet to include the centers of bounding boxes, surfaces, and edges. VENet (Xie et al. 2021) designs a vote-weighting module to improve the voting process. BRNet (Cheng et al. 2021) proposes using clustered centers to predict the angles and spatial positions of entire objects. RBGNet (Wang et al. 2022b) introduces a ray-based feature grouping module to capture points on the object surface. CAGroup3D (Wang et al. 2022a) leverages the powerful expressive capability of sparse convolution to enhance feature extraction and proposes a two-stage method. However, these methods essentially construct instance-agnostic groupings to generate proposals, which inevitably leads to the presence of multiple instance features within a single proposal.

Superpoints. Recently, superpoints have been introduced into other 3D point cloud tasks (Hui et al. 2023; Shen et al. 2023; Tang, Hui, and Xie 2022). The concept of superpoints was initially proposed by SPG (Landrieu and Simonovsky 2018) and applied to 3D semantic segmentation. SSP (Landrieu and Boussaha 2019) proposes a learnable strategy for over-segmenting point clouds into superpoints, while SPNet (Hui et al. 2021) further extends superpoint over-segmentation to an end-to-end approach. In the domain of instance segmentation, SSTNet (Liang et al. 2021) constructs a superpoint tree network to aggregate superpoints with the same semantic information. GraphCut (Hui et al. 2022) proposes a bilateral graph attention mechanism to generate precise instances using a superpoint graph cutting network. SPFormer (Sun et al. 2023) utilizes learnable queries to predict instance segmentation from superpoint features in a top-down pipeline. Although these methods have explored a variety of uses for superpoints, they have not directly enhanced the representation of superpoints nor made them more suitable for anchor-free detection. We employ a superpoint-based grouping to enhance the expression of superpoints and design a geometry-aware voting specifically for anchor-free detection to make it easier for the model to filter out low-quality proposals.

Method
Overview
The overall architecture of our method is depicted in Fig. 2. The input point cloud typically consists of N points, represented as $\{p_i\}_{i=1}^{N}$ with $p_i \in \mathbb{R}^6$. Each point has coordinates (x, y, z) and colors (r, g, b).
Following the previous method (Wang et al. 2022a), we first extract M high-resolution non-empty seed voxels from a sparse 3D backbone network (Choy, Gwak, and Savarese 2019). Next, these voxels are passed through a geometry-aware voting module and several superpoint-based grouping modules to generate proposals. Finally, multiple matching is employed to choose positive proposals during training, and 3D non-maximum suppression (NMS) is applied to remove redundant proposals during inference.

Figure 2: Framework of the superpoint grouping network for indoor 3D object detection (SPGroup3D). Given the input, we first extract seed voxels through the backbone. Subsequently, we construct geometry-aware voting to preserve the relative positions of the proposals and the object centers. Following this, based on superpoint-based grouping, we iteratively optimize the feature representations of the superpoints. Finally, multiple matching is employed to select positive samples during training, and 3D NMS is applied to eliminate redundant proposals at inference time.

Geometry-Aware Voting
In anchor-free detection, we usually assign higher scores/centernesses to the proposals/superpoints that are close to object centers and lower scores to proposals that are far from object centers, and discard the proposals with low scores. However, traditional voting (Qi et al. 2019) tends to push proposals close to the object centers, resulting in uniformly high scores, which makes it difficult to filter out low-quality proposals. In this paper, we present a geometry-aware voting to solve this problem. Geometry-aware voting preserves the relative position information from the proposals to the object centers in geometric space, enabling the model to assign low scores to relatively distant proposals and thus making it easier to filter out low-quality proposals in post-processing.

Specifically, we are given M high-resolution non-empty seed voxels $\{v_i\}_{i=1}^{M}$ from the backbone, where $v_i = [v_i^c; v_i^f]$ with $v_i^c \in \mathbb{R}^3$ and $v_i^f \in \mathbb{R}^C$. The next key step is to generate proposals/superpoints. The most straightforward way to group voxels into superpoints is to cluster $\{v_i\}_{i=1}^{M}$ using offline pre-computed superpoints. However, as shown in Fig. 3(b) and (f), only using $\{v_i\}_{i=1}^{M}$ leads to generated superpoints that mainly lie on the surfaces of objects, far away from the instance centers, making accurate regression difficult (Qi et al. 2019). Therefore, similar to VoteNet (Qi et al. 2019), we set up a voting branch to learn the offsets of the voxels to the corresponding bounding box centers. Each seed voxel yields a coordinate offset $\Delta v_i^c \in \mathbb{R}^3$ and a feature offset $\Delta v_i^f \in \mathbb{R}^C$. The vote voxel $o_i$ is generated by adding the offsets as follows:

$\{o_i \mid o_i = [v_i^c + \Delta v_i^c;\; v_i^f + \Delta v_i^f]\}_{i=1}^{M}$   (1)

where the coordinate and feature of $o_i$ are denoted by $o_i^c \in \mathbb{R}^3$ and $o_i^f \in \mathbb{R}^C$, respectively.
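As a concrete illustration of this voting branch, and of the seed-vote merge formalized in Eq. (2) below, consider the following minimal PyTorch-style sketch. It is our own reconstruction under the paper's notation, not the released implementation, and the module and head names are hypothetical:

import torch
import torch.nn as nn

class GeometryAwareVoting(nn.Module):
    """Predicts per-voxel coordinate/feature offsets (Eq. 1) and keeps
    the seed voxels alongside the vote voxels (the merge of Eq. 2)."""
    def __init__(self, c):
        super().__init__()
        # small head producing a 3-D coordinate offset and a C-dim feature offset
        self.offset_head = nn.Sequential(
            nn.Linear(c, c), nn.ReLU(inplace=True), nn.Linear(c, 3 + c))

    def forward(self, coords, feats):
        # coords: (M, 3) seed-voxel coordinates, feats: (M, C) seed features
        delta = self.offset_head(feats)
        vote_coords = coords + delta[:, :3]                       # v_i^c + Δv_i^c
        vote_feats = feats + delta[:, 3:]                         # v_i^f + Δv_i^f
        merged_coords = torch.cat([coords, vote_coords], dim=0)   # (2M, 3)
        merged_feats = torch.cat([feats, vote_feats], dim=0)      # (2M, C)
        return merged_coords, merged_feats

The coordinate offsets (delta[:, :3]) are the quantities supervised by the smooth-ℓ1 loss described next.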
The predicted offset $\Delta v_i^c$ is explicitly supervised by a smooth-ℓ1 loss with the ground-truth displacement from the coordinate of seed voxel $v_i$ to its corresponding bounding box center. As depicted in Fig. 3(c) and (g), with this traditional voting the generated superpoints primarily concentrate at the object centers, weakening the geometry representation and the relative position to object centers (Cheng et al. 2021). This interference poses challenges to the post-processing of anchor-free detection methods (Tian et al. 2019; Rukhovich, Vorontsova, and Konushin 2022), since the superpoints belonging to the current object tend to gather around the object center. Hence, unlike VoteNet, which directly utilizes the vote voxels, we further merge in the seed voxels. This merge processing can be formulated as follows:

$\{f_i \mid f_i = [(v_i^c, o_i^c);\; (v_i^f, o_i^f)]\}_{i=1}^{2M}$   (2)

where $f_i$ denotes the i-th voxel of $\{f_i\}_{i=1}^{2M}$. This processing ensures that the generated superpoints remain within their corresponding objects while preserving their original relative positions to the object centers, as illustrated in Fig. 3(d) and (h).

Figure 3: The visualization of the locations of generated superpoints. The red points represent superpoints. To better represent the geometry of the object, we use gray points to indicate the voxels. (a) and (e) show the input point clouds. (b) and (f) show superpoints without voting. (c) and (g) show superpoints with traditional voting. (d) and (h) show superpoints with geometry-aware voting.

Superpoint-based Grouping
To realize a consistent instance representation within the same proposals, we propose superpoint-based grouping to aggregate voxels into superpoints, consisting of superpoint attention and superpoint-voxel fusion. In our setting, we iterate the superpoint-based grouping multiple times, i.e., 3. For each iteration, the output of superpoint attention is regarded as the corresponding output, whereas the output of superpoint-voxel fusion serves as the input for the next iteration of superpoint-based grouping.

Superpoint Attention. Superpoints are non-overlapping local regions of the input voxels, which can become a bottleneck for model perception, preventing dense prediction tasks like object detection from understanding the instances. In this paper, inspired by the attention mechanism (Vaswani et al. 2017), we propose a superpoint attention mechanism to enable feature interaction within neighboring local regions. The superpoint attention mechanism facilitates information propagation and integration, leading to a better understanding of the semantic and structural aspects of the instances. Specifically, given the $\{f_i\}_{i=1}^{2M}$ obtained from the former module, we first group them into L initial superpoints $\{s_i\}_{i=1}^{L}$, where $s_i = [s_i^c; s_i^f]$ with $s_i^c \in \mathbb{R}^3$ and $s_i^f \in \mathbb{R}^C$, using the pre-computed superpoints. Subsequently, to accelerate convergence and reduce complexity, we apply the k-nearest-neighbours (k-NN) algorithm in coordinate space to obtain the k nearest neighbours of each superpoint. The attention operation is then performed only within the k nearest neighbours of each superpoint.
These processes can be formulated as follows:

$\{s_i\}_{i=1}^{L} = \mathrm{Scatter}(\{f_i\}_{i=1}^{2M}), \quad \{n_{ij}\}_{j=1}^{k} = \mathrm{knn}(s_i)$   (3)

where Scatter and $\{n_{ij}\}_{j=1}^{k}$ represent the Scatter function (https://github.com/rusty1s/pytorch_scatter) and the k nearest superpoints of $s_i$, respectively. Each superpoint in $\{n_{ij}\}_{j=1}^{k}$ is represented by a coordinate $n_{ij}^c \in \mathbb{R}^3$ and a feature $n_{ij}^f \in \mathbb{R}^C$. Inspired by (Hui et al. 2022), we also explicitly calculate the similarity in both coordinate and feature space instead of embedding the coordinates into the feature space. Different from their method, which computes similarities between graphs for graph cutting to realize instance segmentation, our algorithm aims to facilitate interaction between superpoints to capture more contextual information. The specific steps are as follows:

$\{w_{ij}^c\}_{j=1}^{k} = \mathrm{MLP}(s_i^c - \{n_{ij}^c\}_{j=1}^{k})$   (4)

$\{w_{ij}^f\}_{j=1}^{k} = \mathrm{MLP}(s_i^f - \{n_{ij}^f\}_{j=1}^{k})$   (5)

where $\{w_{ij}^c\}_{j=1}^{k}$ and $\{w_{ij}^f\}_{j=1}^{k}$ represent the weights in coordinate space and feature space of the k nearest superpoints, respectively, and MLP indicates a fully-connected layer. Note that, to enable the above subtraction, we replicate $s_i^c$ and $s_i^f$ k times. The fusion weights are obtained by multiplying the coordinate-space weights and the feature-space weights, followed by a softmax function for normalization. The fusion weight $w_{ij}$ for the i-th superpoint and its j-th nearest superpoint is therefore

$w_{ij} = \frac{\exp(w_{ij}^c \times w_{ij}^f)}{\sum_{j=1}^{k} \exp(w_{ij}^c \times w_{ij}^f)}$   (6)

Finally, these fusion weights are multiplied with the superpoint features of the k nearest neighbours to obtain the updated superpoint features, while the coordinates of the superpoints remain unchanged:

$a_i^f = \sum_{j=1}^{k} \big(w_{ij} \times \mathrm{MLP}(n_{ij}^f)\big), \quad a_i^c = s_i^c$   (7)

where $a_i^f$ and $a_i^c$ denote the feature and coordinate of the i-th updated superpoint in $\{a_i\}_{i=1}^{L}$, respectively. In addition to the above steps, similar to the original attention algorithm, we also incorporate residual and normalization operations at the end.

Superpoint-Voxel Fusion. Going from voxels to superpoints is a quantization process, inevitably leading to a loss of fine-grained features. Therefore, we propose superpoint-voxel fusion based on sparse convolution to achieve the interaction of coarse-grained superpoints and fine-grained voxels. First, we broadcast the superpoint features to match the size of the merged voxels $\{f_i\}_{i=1}^{2M}$ using the superpoint assignments. The superpoint features after broadcasting are denoted as $\{h_i^f\}_{i=1}^{2M}$. Then, we concatenate the features from $\{f_i\}_{i=1}^{2M}$ and $\{h_i\}_{i=1}^{2M}$ to obtain the initial fusion voxel features $\{g_i^f\}_{i=1}^{2M}$. These steps can be formulated as:

$\{h_i^f\}_{i=1}^{2M} = \mathrm{Broadcast}(\{a_i^f\}_{i=1}^{L})$   (8)

$\{g_i^f\}_{i=1}^{2M} = \mathrm{Concat}(\{f_i^f\}_{i=1}^{2M};\; \{h_i^f\}_{i=1}^{2M})$   (9)

where $g_i^f \in \mathbb{R}^{2C}$ is the feature of one of the fusion voxels $\{g_i\}_{i=1}^{2M}$. The coordinates of the fusion voxels are the same as those of the merged voxels. Next, based on the coordinates, we voxelize the fusion voxels again with a voxel size equal to the resolution of the output voxels from the backbone. The re-voxelized fusion voxels pass through an SPFFN, consisting of a sparse convolution layer, a normalization layer, and an activation layer, to obtain the final output. This process can be formulated as follows:

$\{g_i'\}_{i=1}^{M} = \mathrm{SPFFN}(\mathrm{Voxelize}(\{g_i\}_{i=1}^{2M}))$   (10)

where $g_i'$ denotes one of the updated fusion voxels.
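To make the grouping and attention computation of Eqs. (3)-(7) concrete, the following minimal sketch builds on torch_scatter. It is our own illustration under the paper's notation, not the released code: using the mean as the scatter reduction and a brute-force k-NN are our assumptions, and mlp1/mlp2/mlp3 stand for the MLPs of Eqs. (4), (5), and (7):

import torch
from torch_scatter import scatter_mean

def superpoint_attention(coords, feats, sp_idx, mlp1, mlp2, mlp3, k=8):
    # coords: (2M, 3), feats: (2M, C), sp_idx: (2M,) superpoint ids
    # Eq. (3): pool voxels sharing a superpoint id into L superpoints
    sp_c = scatter_mean(coords, sp_idx, dim=0)             # (L, 3)
    sp_f = scatter_mean(feats, sp_idx, dim=0)              # (L, C)
    # k-NN among superpoints in coordinate space
    nn_idx = torch.cdist(sp_c, sp_c).topk(k, largest=False).indices
    n_c, n_f = sp_c[nn_idx], sp_f[nn_idx]                  # (L, k, 3), (L, k, C)
    # Eqs. (4)-(5): weights in coordinate and feature space
    w_c = mlp1(sp_c.unsqueeze(1) - n_c)                    # (L, k, C)
    w_f = mlp2(sp_f.unsqueeze(1) - n_f)                    # (L, k, C)
    # Eq. (6): normalized fusion weights over the k neighbours
    w = torch.softmax(w_c * w_f, dim=1)
    # Eq. (7): weighted aggregation; coordinates stay unchanged
    return sp_c, (w * mlp3(n_f)).sum(dim=1)                # (L, 3), (L, C)

The companion superpoint-voxel fusion (Eqs. 8-10) would then broadcast the returned features back to the voxels with the same ids, e.g. feats_broadcast = out_f[sp_idx].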
Finally, $\{g_i'\}_{i=1}^{M}$ is mapped back to the input level via the sparse tensor mapping matrix, yielding the refined voxels $\{f_i\}_{i=1}^{2M}$ for the next iteration.

Multiple Matching and Loss Function
Previous anchor-free indoor 3D detection methods (Rukhovich, Vorontsova, and Konushin 2022; Wang et al. 2022a) rely on constructing proposals from regular neighbourhoods, i.e., voxels. If we take the receptive field as a criterion for the importance of a proposal, proposals within the same feature level are treated equally. Based on this pattern, these methods can directly set positive and negative samples by thresholding the distance between the location of a proposal and the center of a bounding box. However, this approach is not suitable for proposals generated by superpoints, because the proposals in our method have dynamic receptive fields. We must therefore improve the matching strategy to select multiple positive samples for each ground truth. Here, to keep the matching criterion consistent with the training objective, inspired by DETR (Carion et al. 2020), we directly adopt the training loss as the cost function for matching and consider the costs of both classification and regression simultaneously. It is worth mentioning that our approach does not require Hungarian matching and assigns multiple samples to each ground truth, which differs from the bipartite matching in DETR. $\mathrm{Cost}_{ik}$ evaluates the similarity between the i-th proposal and the k-th ground truth and is defined as:

$\mathrm{Cost}_{ik} = -\lambda_{cls}\,\mathrm{cost}_{ik}^{cls} - \lambda_{reg}\,\mathrm{cost}_{ik}^{reg}$   (11)

where $\mathrm{cost}_{ik}^{cls}$ and $\mathrm{cost}_{ik}^{reg}$ represent the focal cost function (Carion et al. 2020) and the DIoU cost function (Zheng et al. 2020), respectively, and $\lambda_{cls}$ and $\lambda_{reg}$ are the corresponding coefficients. Finally, we directly select the top-r (i.e., r = 18) proposals with the minimum cost as positive samples for each ground truth, while the rest are considered negative. In our experiments, only the proposals within the bounding box are considered, and both $\lambda_{cls}$ and $\lambda_{reg}$ are set to 1.

After assignment, our method is trained from scratch with a voting loss $L_{vote}$, a centerness loss $L_{cntr}$, a bounding box estimation loss $L_{box}$, and a classification loss $L_{cls}$:

$L = \beta_{vote} L_{vote} + \beta_{cntr} L_{cntr} + \beta_{box} L_{box} + \beta_{cls} L_{cls}$   (12)

$L_{vote}$ is a smooth-ℓ1 loss for predicting the center offset of each voxel. For proposal generation, $L_{cntr}$, $L_{box}$, and $L_{cls}$ use the cross-entropy loss, the DIoU loss (Zheng et al. 2020), and the focal loss (Lin et al. 2017) to optimize object centerness, bounding box prediction, and classification, respectively. $\beta_{vote}$, $\beta_{cntr}$, $\beta_{box}$, and $\beta_{cls}$ are the corresponding coefficients and are set to 1.
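The assignment itself reduces to a per-ground-truth top-r selection on a cost matrix. The following is a minimal sketch of Eq. (11) and the top-r rule (our own reconstruction, not the released code; the tensor shapes and names are assumptions):

import torch

def multiple_matching(cost_cls, cost_reg, inside_mask, r=18,
                      lam_cls=1.0, lam_reg=1.0):
    # cost_cls, cost_reg: (P, G) focal / DIoU costs for P proposals and
    # G ground-truth boxes; inside_mask: (P, G) True where a proposal
    # lies inside a box; returns a (P, G) boolean positive-sample mask
    cost = -lam_cls * cost_cls - lam_reg * cost_reg        # Eq. (11)
    cost = cost.masked_fill(~inside_mask, float('inf'))    # restrict to boxes
    idx = cost.topk(min(r, cost.shape[0]), dim=0, largest=False).indices
    pos = torch.zeros(cost.shape, dtype=torch.bool, device=cost.device)
    pos.scatter_(0, idx, True)                             # top-r per ground truth
    return pos & inside_mask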
Experiments
Datasets and Evaluation Metric
Our SPGroup3D is evaluated on three challenging indoor 3D scene datasets, i.e., ScanNet V2 (Dai et al. 2017), SUN RGB-D (Song, Lichtenberg, and Xiao 2015), and S3DIS (Armeni et al. 2016). For all datasets, we follow the standard data splits adopted in (Qi et al. 2019) and (Gwak, Choy, and Savarese 2020).

ScanNet V2. ScanNet is a richly annotated dataset providing a comprehensive collection of 3D indoor scans, with reconstructed indoor scenes and bounding boxes for 18 object categories. The dataset is divided into 1,201 training samples, with the remaining 312 used for validation.

SUN RGB-D. SUN RGB-D is a widely recognized dataset designed for 3D object detection in indoor environments. It is divided into approximately 5,000 training and 5,000 validation samples. Each sample is annotated with oriented 3D bounding boxes and per-point semantic labels, covering 37 object categories. Following the approach in (Qi et al. 2019), we select 10 categories.

S3DIS. S3DIS is a comprehensive 3D indoor dataset comprising 3D scans from 272 rooms across 6 buildings, with annotations for both 3D instances and semantic categories. We follow the standard division, where Area 5 is used for validation while the remaining areas form the training subset.

We evaluate all experimental results with a standard evaluation protocol (Qi et al. 2019; Rukhovich, Vorontsova, and Konushin 2022), which uses mean average precision (mAP) with different IoU thresholds, i.e., 0.25 and 0.5.

Implementation Details
We set the voxel size to 0.02 m for all datasets. For the backbone, we use the same backbone introduced in (Wang et al. 2022a) as the voxel feature extractor, and the voxel size of the high-resolution output is 0.04 m. For the superpoint-based grouping, we set the iteration number to 3 and the neighbour number to 8. Moreover, following the setting in FCAF3D (Rukhovich, Vorontsova, and Konushin 2022), we set the number of positive samples to 18 in multiple matching. In our experiments, we train the model in an end-to-end manner using the MMDetection3D framework (Contributors 2020). Following the approach in (Rukhovich, Vorontsova, and Konushin 2022, 2023), we employ the AdamW optimizer (Kingma and Ba 2014) with batch size, initial learning rate, and weight decay set to 4, 0.001, and 0.0001, respectively. Training is performed for 15 epochs on each dataset, with the learning rate decayed by a factor of 10 at the 9th and 12th epochs. The experiments are conducted on four NVIDIA RTX 3090 GPUs.

Methods | Presented at | ScanNet V2 mAP@0.25 | ScanNet V2 mAP@0.5 | SUN RGB-D mAP@0.25 | SUN RGB-D mAP@0.5 | S3DIS mAP@0.25 | S3DIS mAP@0.5
VoteNet (Qi et al. 2019) | ICCV'19 | 58.6 | 33.5 | 57.7 | – | – | –
3D-MPA (Engelmann et al. 2020) | CVPR'20 | 64.2 | 49.2 | – | – | – | –
HGNet (Chen et al. 2020) | CVPR'20 | 61.3 | 34.4 | 61.6 | – | – | –
MLCVNet (Xie et al. 2020) | CVPR'20 | 64.5 | 41.4 | 59.8 | – | – | –
GSDN (Gwak et al. 2020) | ECCV'20 | 62.8 | 34.8 | – | – | 47.8 | 25.1
H3DNet (Zhang et al. 2020) | ECCV'20 | 67.2 | 48.1 | 60.1 | 39.0 | – | –
BRNet (Cheng et al. 2021) | CVPR'21 | 66.1 | 50.9 | 61.1 | 43.7 | – | –
3DETR (Misra et al. 2021) | ICCV'21 | 65.0 | 47.0 | 59.1 | 32.7 | – | –
VENet (Xie et al. 2021) | ICCV'21 | 67.7 | – | 62.5 | 39.2 | – | –
GroupFree (Liu et al. 2021) | ICCV'21 | 69.1 (68.6) | 52.8 (51.8) | 63.0 (62.6) | 45.2 (44.4) | – | –
RBGNet (Wang et al. 2022b) | CVPR'22 | 70.6 (69.6) | 55.2 (54.7) | 64.1 (63.6) | 47.2 (46.3) | – | –
HyperDet3D (Zheng et al. 2022) | CVPR'22 | 70.9 | 57.2 | 63.5 | 47.3 | – | –
FCAF3D (Rukhovich et al. 2022) | ECCV'22 | 71.5 (70.7) | 57.3 (56.0) | 64.2 (63.8) | 48.9 (48.2) | 66.7 (64.9) | 45.9 (43.8)
CAGroup3D* (Wang et al. 2022a) | NeurIPS'22 | 73.2 | 57.1 | – | – | – | –
SPGroup3D (ours) | – | 74.3 (73.5) | 59.6 (58.3) | 65.4 (64.8) | 47.1 (46.4) | 69.2 (67.7) | 47.2 (43.6)

Table 1: 3D detection results on the validation sets of ScanNet V2, SUN RGB-D, and S3DIS. The main comparison is based on the best results over multiple experiments for each method, with the average over 25 trials given in brackets. Since we only focus on one-stage methods in this paper, we report the one-stage result from the ablation study of the original paper (Wang et al. 2022a) for a fair comparison, dubbed "CAGroup3D*".
We follow the evaluation scheme from (Liu et al. 2021), which involves training the models five times and testing each trained model five times. The reported results include the best and average performance across all runs.

Benchmarking Results
We compare our method with recent state-of-the-art 3D detection methods on the ScanNet V2 (Dai et al. 2017), SUN RGB-D (Song, Lichtenberg, and Xiao 2015), and S3DIS (Armeni et al. 2016) benchmarks. As indicated in Tab. 1, SPGroup3D outperforms the previous state-of-the-art methods in almost all metrics. In terms of mAP@0.25, our method achieves improvements of 1.1, 1.2, and 2.5 over the previous state-of-the-art methods on ScanNet, SUN RGB-D, and S3DIS, respectively. Regarding mAP@0.5, our method shows improvements of 2.3 and 1.3 on ScanNet and S3DIS. Visualizations of 3D object detection with predicted bounding boxes on ScanNet V2 are shown in Fig. 5.

Figure 5: Qualitative results on the validation set of ScanNet V2. Different classes are indicated by bounding boxes in different colors.

Ablation Study
We conduct extensive ablation studies on the validation set of ScanNet V2 to analyze the individual components of our proposed method.

Figure 4: Comparison of performance results for different iteration numbers (a) and neighbour numbers (b).

Effect of different components of SPGroup3D. We first ablate the effects of different components of SPGroup3D. As seen in Tab. 2, the base model (1st row) is the fully sparse convolutional VoteNet-style (Qi et al. 2019; Wang et al. 2022a) model. Comparing the 1st and 2nd rows, we introduce geometry-aware voting (dubbed "Geometry"), eliminate superpoint-based grouping and multiple matching, and directly group voxels into superpoints. The results of this variant are significantly improved, from 68.2 to 71.3 in mAP@0.25 and from 53.2 to 57.0 in mAP@0.5. Comparing the 2nd and 3rd rows, by adding several superpoint-based groupings (dubbed "Group"), the performance further improves, i.e., from 71.3 to 72.7 in mAP@0.25 and from 57.0 to 57.9 in mAP@0.5. In conjunction with multiple matching (dubbed "Match"), the 4th row shows that mAP@0.25 increases from 72.7 to 73.5 and mAP@0.5 from 57.9 to 58.3. These experiments demonstrate the effectiveness of our geometry-aware voting, superpoint-based grouping, and multiple matching.

Geometry | Group | Match | mAP@0.25 | mAP@0.5
– | – | – | 68.2 | 53.2
✓ | – | – | 71.3 | 57.0
✓ | ✓ | – | 72.7 | 57.9
✓ | ✓ | ✓ | 73.5 | 58.3

Table 2: Ablation study of key components including geometry-aware voting (Geometry), superpoint-based grouping (Group), and multiple matching (Match).

Settings | mAP@0.25 | mAP@0.5
Semantic-agnostic | 68.2 | 53.2
Semantic-aware | 73.2 | 57.1
SPGroup3D (ours) | 73.5 | 58.3

Table 3: Comparison with other grouping-based strategies.

Effect of superpoint-based grouping. In the superpoint-based grouping, there are two hyperparameters: the iteration number (iter) and the neighbour number (k). According to Fig. 4, the model is insensitive to changes in these hyperparameters. Fig. 4(a) shows that the best results are obtained with an iteration number of 3 (73.5 mAP@0.25 and 58.3 mAP@0.5). Therefore, in our experiments, we set the number of iterations to 3.
We choose several values for the number of neighbours: 1, 4, 8, and 12, as shown in Fig. 4(b). The results in mAP@0.25 and mAP@0.5 are 72.2 and 57.5 (k=1), 73.3 and 57.7 (k=4), 73.5 and 58.3 (k=8), and 73.1 and 58.6 (k=12), respectively. To strike a balance between performance and efficiency, this paper chooses 8 as the number of neighbours. To further demonstrate the effect of superpoint-based grouping, we compare it with other grouping strategies. In Tab. 3, "semantic-agnostic" and "semantic-aware" correspond to the VoteNet-style model and the one-stage CAGroup3D (Wang et al. 2022a), respectively. Our method achieves more reliable detection results (73.5 mAP@0.25, 58.3 mAP@0.5) than these methods. This shows that the instance-aware proposals produced by superpoint-based grouping lead to better detection.

Effect of geometry-aware voting. In our geometry-aware voting, we need to preserve the features both before and after voting. Here we study its impact. Using only the features before voting achieves 72.0 mAP@0.25 and 55.1 mAP@0.5. The variant with only the features after voting obtains 70.9 mAP@0.25 and 56.2 mAP@0.5, while using both achieves 73.5 mAP@0.25 and 58.3 mAP@0.5. By utilizing both features, our model achieves a clear improvement, showing that preserving the geometric distribution of superpoints in coordinate space is beneficial for the anchor-free method. Additionally, we argue that our model is robust to incorrectly partitioned superpoints, which usually appear at the edges of objects and are more likely to generate low-quality proposals. With geometry-aware voting, the position of superpoints relative to the object centers remains unchanged, allowing us to easily use centerness scores to filter out low-quality proposals during post-processing.

Conclusion
In this paper, we proposed a novel end-to-end one-stage method, SPGroup3D, for indoor 3D object detection. SPGroup3D first utilizes geometry-aware voting to refine the positions of superpoints and then employs superpoint-based grouping to group the bottom-up latent features of voxels into superpoints. During the training phase, multiple matching is used to select positive superpoint-based proposals. Extensive experiments on the ScanNet V2, SUN RGB-D, and S3DIS benchmarks demonstrate that the proposed method achieves state-of-the-art performance on the indoor one-stage 3D object detection task.

Acknowledgments
This work was supported by the National Science Fund of China (Grant Nos. 62276144, 62306238) and the Fundamental Research Funds for the Central Universities.

References
Armeni, I.; Sener, O.; Zamir, A. R.; Jiang, H.; Brilakis, I.; Fischer, M.; and Savarese, S. 2016. 3D Semantic Parsing of Large-Scale Indoor Spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1534-1543.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-End Object Detection with Transformers. In Proceedings of the European Conference on Computer Vision, 213-229.
Chen, C.; Chen, Z.; Zhang, J.; and Tao, D. 2022. SASA: Semantics-Augmented Set Abstraction for Point-Based 3D Object Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, 221-229.
Chen, J.; Lei, B.; Song, Q.; Ying, H.; Chen, D. Z.; and Wu, J. 2020.
A Hierarchical Graph Network for 3D Object Detection on Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 392-401.
Cheng, B.; Sheng, L.; Shi, S.; Yang, M.; and Xu, D. 2021. Back-tracing Representative Points for Voting-based 3D Object Detection in Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8963-8972.
Choy, C.; Gwak, J.; and Savarese, S. 2019. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3075-3084.
Contributors, M. 2020. MMDetection3D: OpenMMLab next-generation platform for general 3D object detection. https://github.com/open-mmlab/mmdetection3d.
Dai, A.; Chang, A. X.; Savva, M.; Halber, M.; Funkhouser, T.; and Nießner, M. 2017. ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5828-5839.
Engelmann, F.; Bokeloh, M.; Fathi, A.; Leibe, B.; and Nießner, M. 2020. 3D-MPA: Multi-Proposal Aggregation for 3D Semantic Instance Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9031-9040.
Feng, X.; Du, H.; Fan, H.; Duan, Y.; and Liu, Y. 2023. SEFormer: Structure Embedding Transformer for 3D Object Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, 632-640.
Gwak, J.; Choy, C.; and Savarese, S. 2020. Generative Sparse Detection Networks for 3D Single-Shot Object Detection. In Proceedings of the European Conference on Computer Vision, 297-313.
He, Q.; Wang, Z.; Zeng, H.; Zeng, Y.; and Liu, Y. 2022. SVGA-Net: Sparse Voxel-Graph Attention Network for 3D Object Detection from Point Clouds. In Proceedings of the AAAI Conference on Artificial Intelligence, 870-878.
Hui, L.; Tang, L.; Dai, Y.; Xie, J.; and Yang, J. 2023. Efficient LiDAR Point Cloud Oversegmentation Network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 18003-18012.
Hui, L.; Tang, L.; Shen, Y.; Xie, J.; and Yang, J. 2022. Learning Superpoint Graph Cut for 3D Instance Segmentation. Advances in Neural Information Processing Systems, 36804-36817.
Hui, L.; Yuan, J.; Cheng, M.; Xie, J.; Zhang, X.; and Yang, J. 2021. Superpoint Network for Point Cloud Oversegmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5510-5519.
Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
Landrieu, L.; and Boussaha, M. 2019. Point Cloud Oversegmentation with Graph-Structured Deep Metric Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7440-7449.
Landrieu, L.; and Simonovsky, M. 2018. Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4558-4567.
Liang, J.; An, P.; and Ma, J. 2022. Distribution Aware VoteNet for 3D Object Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, 1583-1591.
Liang, Z.; Li, Z.; Xu, S.; Tan, M.; and Jia, K. 2021. Instance Segmentation in 3D Scenes using Semantic Superpoint Tree Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2783-2792.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal Loss for Dense Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2980-2988.
Liu, Z.; Zhang, Z.; Cao, Y.; Hu, H.; and Tong, X. 2021. Group-Free 3D Object Detection via Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2949-2958.
Misra, I.; Girdhar, R.; and Joulin, A. 2021. An End-to-End Transformer Model for 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2906-2917.
Qi, C. R.; Litany, O.; He, K.; and Guibas, L. J. 2019. Deep Hough Voting for 3D Object Detection in Point Clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9277-9286.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 652-660.
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Advances in Neural Information Processing Systems, 4490-4499.
Rukhovich, A.; Vorontsova, A.; and Konushin, A. 2022. FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection. In Proceedings of the European Conference on Computer Vision, 477-493.
Rukhovich, D.; Vorontsova, A.; and Konushin, A. 2023. TR3D: Towards Real-Time Indoor 3D Object Detection. arXiv preprint arXiv:2302.02858.
Shen, Y.; Hui, L.; Xie, J.; and Yang, J. 2023. Self-Supervised 3D Scene Flow Estimation Guided by Superpoints. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5271-5280.
Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; and Li, H. 2020. PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10529-10538.
Song, S.; Lichtenberg, S. P.; and Xiao, J. 2015. SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 567-576.
Sun, J.; Fang, H.-S.; Zhu, X.; Li, J.; and Lu, C. 2022. Correlation Field for Boosting 3D Object Detection in Structured Scenes. In Proceedings of the AAAI Conference on Artificial Intelligence, 2298-2306.
Sun, J.; Qing, C.; Tan, J.; and Xu, X. 2023. Superpoint Transformer for 3D Scene Instance Segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2393-2401.
Tang, L.; Hui, L.; and Xie, J. 2022. Learning Inter-Superpoint Affinity for Weakly Supervised 3D Instance Segmentation. In Proceedings of the Asian Conference on Computer Vision, 1282-1297.
Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. FCOS: Fully Convolutional One-Stage Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9627-9636.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention Is All You Need. Advances in Neural Information Processing Systems, 6000-6010.
Wang, H.; Dong, S.; Shi, S.; Li, A.; Li, J.; Li, Z.; Wang, L.; et al. 2022a. CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds. Advances in Neural Information Processing Systems, 29975-29988.
Wang, H.; Shi, S.; Yang, Z.; Fang, R.; Qian, Q.; Li, H.; Schiele, B.; and Wang, L. 2022b. RBGNet: Ray-Based Grouping for 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1110-1119.
Wu, H.; Wen, C.; Li, W.; Li, X.; Yang, R.; and Wang, C. 2023.
Transformation-Equivariant 3D Object Detection for Autonomous Driving. In Proceedings of the AAAI Conference on Artificial Intelligence, 2795-2802.
Xie, Q.; Lai, Y.-K.; Wu, J.; Wang, Z.; Lu, D.; Wei, M.; and Wang, J. 2021. VENet: Voting Enhancement Network for 3D Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3712-3721.
Xie, Q.; Lai, Y.-K.; Wu, J.; Wang, Z.; Zhang, Y.; Xu, K.; and Wang, J. 2020. MLCVNet: Multi-Level Context VoteNet for 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10447-10456.
Zhang, Z.; Sun, B.; Yang, H.; and Huang, Q. 2020. H3DNet: 3D Object Detection Using Hybrid Geometric Primitives. In Proceedings of the European Conference on Computer Vision, 311-329.
Zheng, Y.; Duan, Y.; Lu, J.; Zhou, J.; and Tian, Q. 2022. HyperDet3D: Learning a Scene-Conditioned 3D Object Detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5585-5594.
Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; and Ren, D. 2020. Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression. In Proceedings of the AAAI Conference on Artificial Intelligence, 12993-13000.
Zhou, Y.; and Tuzel, O. 2018. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4490-4499.
SEIT: Structural Enhancement for Unsupervised Image Translation in Frequency Domain
Zhifeng Zhu1, Yaochen Li1*, Yifan Li1, Jinhuo Yang1, Peijun Chen1, Yuehu Liu2
1School of Software Engineering, Xi'an Jiaotong University
2Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University
[email protected], [email protected], {3121358033, jinhuo, 3123358029}@stu.xjtu.edu.cn, [email protected]
*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
For the task of unsupervised image translation, transforming the image style while preserving the original structure remains challenging. In this paper, we propose an unsupervised image translation method with structural enhancement in the frequency domain, named SEIT. Specifically, a frequency dynamic adaptive (FDA) module is designed for image style transformation; it transfers the image style well while maintaining the overall structure by decoupling image content and style in the frequency domain. Moreover, a wavelet-based structure enhancement (WSE) module is proposed to improve the intermediate translation results by matching high-frequency information, thus enriching the structural details. Furthermore, a multi-scale network architecture is designed to extract domain-specific information using image-independent encoders for both the source and target domains. Extensive experimental results demonstrate the effectiveness of the proposed method.

Introduction
Unsupervised image translation is a challenging task that aims to transform images from a source domain to a target domain without paired training data. In recent years, the development of generative adversarial networks (GANs) (Goodfellow et al. 2020) has led to significant advances in computer vision tasks such as image defogging (Fu et al. 2021) and image deraining (Chen et al. 2022). GAN-based methods have also demonstrated success in source-to-target domain translation through adversarial training of generators and discriminators. However, the resulting translated images may exhibit structural distortions and image artifacts.
To address these issues, researchers have refined GAN-based techniques for image translation. For example, GCGAN (Fu et al. 2019) ensures consistency between the input image and the corresponding output by preserving geometric transformations such as flipping or rotation. In LPTN (Liang, Zeng, and Zhang 2021), Liang et al. design a framework based on Laplacian decomposition and reconstruction to maintain the image's structural information. VSAIT (Theiss et al. 2022) addresses semantic inversion issues by learning an inverse mapping in a high-dimensional vector space based on vector symbolic architectures, ensuring consistency of source-domain content. Although these methods succeed in maintaining image structure, the generated results still lack satisfactory style effects.

Figure 1: The SSIM vs. FID of different methods on the Day → Night task. The method closest to the top left corner is the best and the red circle represents our proposed SEIT.

To preserve both structure and style during image translation, we propose an unsupervised image translation method with structural enhancement in the frequency domain, named SEIT. The method is based on the GAN framework and consists of a generator and a discriminator.
To improve the style of the translation results, we leverage both the source and target domain images as input to the generator, as in previous work (Jiang et al. 2020), to extract more style information during image translation. In this framework, the content features of the source domain image should be fully preserved, and the style features of the target domain image should be fully transferred to the translation result. To meet these requirements, a frequency dynamic adaptive (FDA) module for style conversion is proposed based on the discrete Fourier transform, which fully converts the target domain style while maintaining the source domain image content. We also propose a wavelet-based structure enhancement (WSE) module to further improve image detail quality during translation. With these designs, our model maintains both structure and style well, as demonstrated in Figure 1, resulting in superior metrics compared with existing methods.
The contributions of our work are summarized as follows:
• A novel frequency dynamic adaptive (FDA) module for style conversion is proposed that decouples image content and style in the frequency domain using the discrete Fourier transform. The content and style information obtained after decoupling are independent of each other, which facilitates the subsequent style translation. During translation, the FDA module fully captures the style information of the target domain image while minimizing the loss of source domain image content.
• A wavelet-based structure enhancement (WSE) module is proposed to enrich the structural details. Through the discrete wavelet transform, the image's style and high-frequency structural information can be separated. The high-frequency features of the intermediate translation results are then enhanced using the high-frequency information of the source domain image. The module enhances image details without compromising the obtained style information.
• A multi-scale architecture is designed that is independent of both the source and target domains, which minimizes information loss when encoding and decoding images and extracts domain-specific information during image decoding. Qualitative and quantitative experiments on three different tasks demonstrate that the proposed method achieves state-of-the-art results.

Related Work
Unsupervised Image-to-image Translation
Unsupervised image-to-image translation based on GANs can be divided into one-sided and two-sided methods. Zhu et al. (2017) first propose the two-sided method CycleGAN, which introduces the cycle-consistency loss to ensure structural consistency during translation. DRIT (Lee et al. 2018) and MUNIT (Huang et al. 2018) decompose the latent space into a content-shared space and a domain-specific style space, thus enabling multi-modal image translation. In NICEGAN, Chen et al. (2020) reconsider the role of the discriminator and reuse it as part of the generator. Although the cycle-consistency constraint is effective, the underlying two-sided assumption is too strong: perfect bidirectional reconstruction is difficult to achieve when images of one domain carry extra information compared to those of the other.
The one-sided methods aim to ensure that relationships present in the input are reflected in a similar way in the output. Benaim et al. (2017) propose DistanceGAN, which constrains the distances between pairs of input images to be preserved between the corresponding outputs. Park et al. (2020) introduce contrastive learning to constrain the image structure by maximizing the mutual information between corresponding positions of the original and generated images. The one-sided methods mainly focus on designing various reconstruction losses between the source and generated images. However, the content loss in the flow of the network is ignored, so these methods are difficult to apply to road scenes for generating images with complete content structures and good details.

Frequency Domain Decomposition
In recent years, many researchers have attempted to combine traditional frequency-domain processing with deep learning-based image processing. In the field of arbitrary style transfer, Yoo et al. (2019) propose a wavelet pooling strategy to approximate average pooling and its mirror operation, upsample pooling, reducing the loss of information when propagating image features through the neural network and improving image quality. Zou et al. (2021) design a wavelet transform module to help restore clear high-frequency texture features in image restoration tasks. In the field of unsupervised image translation, Liang et al. (2021) apply Laplacian pyramids to decompose, transform, and reconstruct images, achieving a lightweight network that improves the efficiency of image transformation. Inspired by these methods, we design a wavelet-based structure enhancement module to maintain texture information and enhance image structure in unsupervised image translation.

Approach
Our framework adopts a one-sided GAN-based architecture consisting of a generator and a discriminator. The generator is described in detail next, while the discriminator adopts the same structure as that in (Jiang et al. 2020).

Overall Architecture of the Generator
Figure 2 shows the overall architecture of the generator. Given an image x ∈ ℝ^{3×h×w} in the source domain and y ∈ ℝ^{3×h×w} in the target domain, we first feed them to separate encoders to extract multi-scale features. The extracted features are then fed into the proposed FDA and WSE modules at different layers to fully transfer the style information while maintaining the overall content, and to further enhance the structural details. Finally, the image is reconstructed in a multi-scale manner.
In the feature extraction stage, a Conv Block consisting of a 3 × 3 convolution, instance normalization, and a ReLU activation is applied to extract image features, and bilinear interpolation is used for downsampling. Equipped with these domain-independent encoders, domain-specific multi-scale features are extracted for further reconstruction. In the image reconstruction stage, the multi-scale source and target domain features are fed into the FDA and WSE modules to obtain the intermediate features of the translation. The features of the deeper layers are upsampled first and then concatenated with the features of the next shallower layer along the channel dimension, so the image is reconstructed in a coarse-to-fine manner.

Figure 2: Overall architecture of the generator. The symbols ⓒ and "up" represent the channel-wise concatenation and upsampling operations.
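The feature-extraction stage described above can be sketched in PyTorch as follows; the channel widths and the number of scales are illustrative assumptions, since the paper does not state them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """3x3 convolution + instance normalization + ReLU, as described above."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.InstanceNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.norm(self.conv(x)))

class MultiScaleEncoder(nn.Module):
    """Extracts features at several scales; bilinear interpolation downsamples."""
    def __init__(self, channels=(3, 64, 128, 256)):
        super().__init__()
        self.blocks = nn.ModuleList(
            ConvBlock(c_in, c_out) for c_in, c_out in zip(channels[:-1], channels[1:])
        )

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
            # Halve the spatial resolution before extracting the next scale.
            x = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                              align_corners=False)
        return feats  # feats[-1] is the coarsest scale

# Separate, weight-independent encoders, one per domain.
enc_src, enc_tgt = MultiScaleEncoder(), MultiScaleEncoder()
src_feats = enc_src(torch.randn(1, 3, 256, 256))
```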
Frequency Dynamic Adaptive for Style Conversion
We propose a frequency dynamic adaptive (FDA) module for style conversion that fully captures the style information of the target domain image while minimizing the loss of source domain image content. Figure 3 shows the proposed FDA module, where the amplitude and phase of the source and target domain features are obtained by the discrete Fourier transform.

Figure 3: Frequency dynamic adaptive (FDA) style conversion module. The symbols ⊕ and ⊗ represent element-wise addition and element-wise multiplication.

For a single-channel feature f(h, w) of size M × N in the spatial domain, the two-dimensional discrete Fourier transform can be expressed as

F(u, v) = \sum_{h=0}^{M-1} \sum_{w=0}^{N-1} f(h, w) \, e^{-j 2\pi (uh/M + vw/N)}    (1)

where F(u, v) is the frequency-domain representation of f(h, w). The amplitude and phase of the feature are then obtained as

A = |F(u, v)| = \left[ R^2(u, v) + I^2(u, v) \right]^{1/2}    (2)

P = \Phi(u, v) = \arctan\left( \frac{I(u, v)}{R(u, v)} \right)    (3)

where R(u, v) and I(u, v) denote the real and imaginary parts of F(u, v). The amplitude A contains the style information of the feature, and the phase P contains its content information. For a multi-channel feature, each channel is processed independently following the above procedure.
After obtaining the amplitude and phase of the feature, we introduce the local style extraction network (LSNet) and the global style extraction network (GSNet), shown in Figure 3, to extract style information at different scales from the amplitude of the target feature. The LSNet extracts the local style modulation parameter W: it first upsamples the input features with a 3 × 3 convolution and then reduces the feature channel dimension with a 1 × 1 convolution to obtain the multi-channel style information. The GSNet extracts the global style modulation parameter B and differs from the LSNet only in its last layer, an adaptive average pooling layer. Adaptive style conversion is then performed as

A_{\hat{F}_s} = W \left( \frac{A_{F_s} - \mu(A_{F_s})}{\sigma(A_{F_s})} \right) + B    (4)

where F_s is the source feature, A_{F_s} denotes its amplitude, and µ(A_{F_s}) and σ(A_{F_s}) are the mean and standard deviation of A_{F_s}. A_{\hat{F}_s} represents the result of the adaptive style conversion. After obtaining the stylized amplitude, the Fourier representation is updated by recombining A_{\hat{F}_s} with the phase P_s of the source feature:

\hat{F}(u, v) = A_{\hat{F}_s} \, e^{j P_s}    (5)

Finally, the style conversion result \hat{f}(h, w) of each channel is recovered using the inverse discrete Fourier transform:

\hat{f}(h, w) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \hat{F}(u, v) \, e^{j 2\pi (uh/M + vw/N)}    (6)

The multi-channel result F_r is formed by combining the single-channel results. Since the FDA module only processes the amplitude of the source feature without changing the phase information, the content of the source feature is retained to the maximum extent.
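A minimal PyTorch sketch of Eqs. (1)-(6) follows: torch.fft supplies the per-channel DFT, and the LSNet and GSNet branches are collapsed to single layers for brevity, so their depths and widths are assumptions rather than the paper's design:

```python
import torch
import torch.nn as nn

class FDA(nn.Module):
    """Sketch of frequency dynamic adaptive style conversion (Eqs. 1-6).
    The source amplitude is re-normalized with parameters W and B predicted
    from the target amplitude; the source phase is left untouched."""
    def __init__(self, ch):
        super().__init__()
        self.lsnet = nn.Conv2d(ch, ch, kernel_size=3, padding=1)  # stand-in for LSNet (W)
        self.gsnet = nn.Sequential(nn.Conv2d(ch, ch, kernel_size=1),
                                   nn.AdaptiveAvgPool2d(1))       # stand-in for GSNet (B)

    def forward(self, f_src, f_tgt):
        F_src = torch.fft.fft2(f_src)              # Eq. (1): per-channel 2-D DFT
        F_tgt = torch.fft.fft2(f_tgt)
        amp_s, pha_s = F_src.abs(), F_src.angle()  # Eqs. (2)-(3): amplitude and phase
        amp_t = F_tgt.abs()
        W = self.lsnet(amp_t)                      # local style modulation map
        B = self.gsnet(amp_t)                      # global style modulation vector
        mu = amp_s.mean(dim=(2, 3), keepdim=True)
        sigma = amp_s.std(dim=(2, 3), keepdim=True) + 1e-6
        amp_hat = W * (amp_s - mu) / sigma + B     # Eq. (4): adaptive style conversion
        F_hat = torch.polar(amp_hat, pha_s)        # Eq. (5): recombine with source phase
        return torch.fft.ifft2(F_hat).real         # Eq. (6): inverse DFT

fda = FDA(64)
f_r = fda(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```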
Wavelet-based Structure Enhancement
After the adaptive style conversion, the wavelet-based structure enhancement (WSE) module is designed at the feature level to enrich the structural details. The image features are decomposed into four frequency sub-bands LL, LH, HL, and HH after one discrete wavelet transform (DWT). The LL sub-band contains the overall image style and retains the global content information; the LH sub-band contains the high-frequency information in the vertical direction; the HL sub-band contains the high-frequency edge information in the horizontal direction; and the HH sub-band contains the high-frequency information in the diagonal direction. The Haar wavelet is used for the discrete wavelet transform because it is sufficient for portraying image information at different frequencies.

Figure 4: Wavelet-based structure enhancement (WSE). The symbols ⊕ and ⊗ represent element-wise addition and element-wise multiplication.

The process of the WSE module is shown in Figure 4. The source domain image feature F_s ∈ ℝ^{c×h×w} and the style conversion feature F_r ∈ ℝ^{c×h×w} are decomposed into four wavelet frequency sub-bands by the DWT:

\mathrm{DWT}(F_s) = \{F_s^{LL}, F_s^{LH}, F_s^{HL}, F_s^{HH}\}    (7)

\mathrm{DWT}(F_r) = \{F_r^{LL}, F_r^{LH}, F_r^{HL}, F_r^{HH}\}    (8)

where the four terms on the right-hand sides denote the frequency sub-bands of F_s and F_r, respectively. Since F_r^{LL} contains the style information while the other three sub-bands contain structural information, we enhance the three high-frequency sub-bands of F_r with the corresponding high-frequency sub-bands of F_s. Letting ⊙ denote the concatenation operation, the enhancement process is defined as:

\hat{F}_r^{LH} = F_r^{LH} + F_s^{LH} \cdot \mathrm{Sig}(\mathrm{Conv}(\odot(F_r^{LH}, F_s^{LH})))
\hat{F}_r^{HL} = F_r^{HL} + F_s^{HL} \cdot \mathrm{Sig}(\mathrm{Conv}(\odot(F_r^{HL}, F_s^{HL})))
\hat{F}_r^{HH} = F_r^{HH} + F_s^{HH} \cdot \mathrm{Sig}(\mathrm{Conv}(\odot(F_r^{HH}, F_s^{HH})))    (9)

After obtaining the enhanced high-frequency sub-bands, the image features are reconstructed by the inverse discrete wavelet transform (IDWT), as in Eq. (10):

\hat{F}_r = \mathrm{IDWT}(F_r^{LL}, \hat{F}_r^{LH}, \hat{F}_r^{HL}, \hat{F}_r^{HH})    (10)

Note that the low-frequency sub-band F_r^{LL}, which contains the stylized information of F_r, is not changed in the enhancement process. Therefore, the module can enhance the image details without compromising the obtained style information.
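A minimal sketch of Eqs. (7)-(10), assuming a hand-rolled one-level Haar transform (sub-band sign conventions vary between implementations) and single-convolution gates:

```python
import torch
import torch.nn as nn

def haar_dwt(x):
    """One-level Haar DWT: (LL, LH, HL, HH) sub-bands at half resolution."""
    a, b = x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2]
    c, d = x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt."""
    a, b = (ll + lh + hl + hh) / 2, (ll + lh - hl - hh) / 2
    c, d = (ll - lh + hl - hh) / 2, (ll - lh - hl + hh) / 2
    n, ch, h, w = ll.shape
    out = ll.new_zeros(n, ch, h * 2, w * 2)
    out[:, :, 0::2, 0::2], out[:, :, 0::2, 1::2] = a, b
    out[:, :, 1::2, 0::2], out[:, :, 1::2, 1::2] = c, d
    return out

class WSE(nn.Module):
    """Sketch of Eqs. (9)-(10): gate the source high-frequency sub-bands into
    the stylized feature; the low-frequency (style) sub-band stays untouched."""
    def __init__(self, ch):
        super().__init__()
        self.gates = nn.ModuleList(
            nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1) for _ in range(3))

    def forward(self, f_src, f_r):
        _, *s_high = haar_dwt(f_src)       # source LH, HL, HH
        r_ll, *r_high = haar_dwt(f_r)      # stylized LL kept as-is
        enhanced = [r + s * torch.sigmoid(g(torch.cat([r, s], dim=1)))  # Eq. (9)
                    for g, r, s in zip(self.gates, r_high, s_high)]
        return haar_idwt(r_ll, *enhanced)  # Eq. (10)

wse = WSE(64)
out = wse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```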
Loss Function
The loss functions of our method comprise the adversarial loss L_adv, the perceptual loss L_p, and the feature matching loss L_fm.
Adversarial Loss. The adversarial loss guides the generator to translate the style to the target domain:

L_{adv} = \mathbb{E}_{x \sim P_{data}(X)}[D(x)] + \mathbb{E}_{x \sim P_{data}(X),\, y \sim P_{data}(Y)}[1 - D(G(x, y))]    (11)

Perceptual Loss. The perceptual loss (Johnson, Alahi, and Fei-Fei 2016) makes the content of the translation result similar to the input source image:

L_p = \mathbb{E}_{x \sim P_{data}(X),\, y \sim P_{data}(Y)}\left[ \alpha_j \| \Phi_j(G(x, y)) - \Phi_j(x) \|_1 \right]    (12)

where Φ denotes the pre-trained VGG (Simonyan and Zisserman 2014) network, Φ_j(x) denotes the feature after the activation function of the j-th layer, and α_j is the weight of the j-th layer. We use five layers of intermediate features from the pre-trained VGG network, namely relu1_1, relu2_1, relu3_1, relu4_1, and relu5_1, with corresponding weights 1/32, 1/16, 1/8, 1/4, and 1.
Feature Matching Loss. The feature matching loss matches the intermediate-layer features of the multi-scale discriminator:

L_{fm} = \mathbb{E}_{x \sim P_{data}(X),\, y \sim P_{data}(Y)}\left[ \| D_i(G(x, y)) - D_i(y) \|_1 \right]    (13)

where D_i(·) is the output of the discriminator's i-th layer.
Total Loss. The total loss is a combination of the above:

L_{total} = \lambda_{adv} L_{adv} + \lambda_p L_p + \lambda_{fm} L_{fm}    (14)

where λ_adv, λ_p, and λ_fm are the weights of the adversarial, perceptual, and feature matching losses, respectively.

Experiments
We conduct experiments on cross-time, cross-weather, and cross-dataset translation tasks. The detailed experimental setup is as follows.

Experimental Setup
Datasets. The datasets we use include SYNTHIA (Ros et al. 2016), GTA5 (Richter et al. 2016), Cityscapes (Cordts et al. 2015), and BDD (Yu et al. 2020). We conduct the Day → Night translation on SYNTHIA, the Sunny → Cloudy translation on BDD, and the cross-dataset translation between GTA5 and Cityscapes.
Evaluation Metrics. SSIM (Wang et al. 2004) measures the structural similarity of the translation result to the source domain image, combining three types of information: brightness, structure, and contrast. Its value is closer to 1 when the two structures are more similar. FSIM (Zhang et al. 2011) also measures structural similarity to the source image, combining the phase congruency and gradient magnitude of the image; again, values closer to 1 indicate more similar structures. FID (Heusel et al. 2017) measures the similarity between the distribution of the target domain and that of the translation results; the lower the FID, the closer the two distributions.
Training Details. All experiments are conducted on a single RTX 3090 GPU with a batch size of 1. We use the Adam optimizer with β1 = 0.5 and β2 = 0.999. The initial learning rate is set to 0.0002 with a step decay schedule that halves the learning rate every 5 epochs, and the model is trained for 100 epochs. Following previous work, the loss weights in Eq. (14) are set to λ_adv = 1.0, λ_p = 2.0, and λ_fm = 1.0.
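Translated into PyTorch, the optimizer and schedule above look roughly as follows; the placeholder module stands in for the SEIT generator:

```python
import torch

# Placeholder standing in for the SEIT generator defined earlier.
generator = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Adam with beta1=0.5, beta2=0.999; lr starts at 2e-4 and halves every 5 epochs.
opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.5)

for epoch in range(100):
    # ... one pass over the training set (batch size 1) goes here ...
    sched.step()  # step decay applied once per epoch
```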
Baselines. We compare with GAN-based image translation methods including MUNIT (Huang et al. 2018), LPTN (Liang, Zeng, and Zhang 2021), CUT (Park et al. 2020), TSIT (Jiang et al. 2020), F-Sesim (Zheng, Cham, and Cai 2021), and VSAIT (Theiss et al. 2022).

Qualitative Comparisons
The visual results of our method on the three tasks are shown in Figures 5-7. For each task, we compare two visual examples with zoomed-in local details to evaluate each method's performance in preserving image details and transferring style. As shown in Figures 5-7, MUNIT (Huang et al. 2018), CUT (Park et al. 2020), TSIT (Jiang et al. 2020), and F-Sesim (Zheng, Cham, and Cai 2021) suffer from structural distortions and artifacts. Although LPTN (Liang, Zeng, and Zhang 2021) and VSAIT (Theiss et al. 2022) retain the structural features better (see Figures 5 and 6), they have limited style transfer ability. VSAIT (Theiss et al. 2022) produces a satisfactory result when the source and target domains share similar styles (see Figure 7), but it still lacks style transfer ability. In contrast, our method faithfully preserves the structural details of the source image and fully captures the style of the target domain, outperforming the other methods in both overall quality and zoomed-in details.

Quantitative Comparisons
The quantitative results of our method on the three tasks are shown in Table 1. Our method achieves the best performance among all compared methods on all tasks, demonstrating its ability to generate images with good structure and style. Notably, our method shows a significant improvement in SSIM (Wang et al. 2004) and FSIM (Zhang et al. 2011) while maintaining a clear advantage in FID (Heusel et al. 2017) on the GTA5 → Cityscapes task.

Table 1: Quantitative comparisons on three tasks. Bolded represents the best and underline indicates the second best.

Method    SYNTHIA Day→Night        BDD Sunny→Cloudy         GTA5→Cityscapes
          SSIM↑  FSIM↑  FID↓       SSIM↑  FSIM↑  FID↓       SSIM↑  FSIM↑  FID↓
MUNIT     0.64   0.81   50.45      0.88   0.93   41.69      0.63   0.79   105.15
LPTN      0.83   0.88   65.34      0.92   0.94   46.37      0.77   0.89   99.65
CUT       0.58   0.77   61.74      0.81   0.84   58.33      0.55   0.70   98.08
TSIT      0.71   0.80   56.16      0.89   0.90   44.24      0.70   0.82   98.61
F-Sesim   0.46   0.84   49.14      0.79   0.91   42.55      0.55   0.74   83.21
VSAIT     0.80   0.90   48.68      0.89   0.96   42.50      0.58   0.86   82.77
Ours      0.87   0.91   46.15      0.94   0.96   40.74      0.93   0.95   82.55

Figure 5: Comparison of existing methods on the SYNTHIA Day → Night task.

The above comparison indicates the effectiveness of our method in preserving the content structure and transferring the style of the input images.

Ablation Study
Qualitative Comparisons. To verify the effectiveness of our method, ablation experiments are conducted on both the single-scale and the multi-scale architectures. The qualitative results are shown in Figure 8. The result of the single-scale architecture in the yellow box (see Figure 8(a)) fails to restore the wires next to the streetlights in the original image, and the style is chaotic. In the red box, the translation result fails to restore the pedestrians and loses the structural information. These problems are caused by severe information loss in encoding and decoding. In Figures 8(b) and (c), the visual effects of these two parts are significantly improved, and compared with them the result in Figure 8(d) is better in both stylization and structure maintenance, which demonstrates that our modules improve performance even on the single-scale architecture. In the multi-scale results, as shown in Figure 8(e), the structure of the translation results is more complete and clearer than in the single-scale results, which is attributed to the multi-scale feature extraction capability; however, the stylization effect is still unsatisfactory. In Figure 8(f), adding the FDA module improves both the global content and the stylization effect compared with the plain multi-scale method, and in Figure 8(g) the detailed structure is enhanced by adding the WSE module. When all of our proposed modules are applied, the result in Figure 8(h) is the best in terms of both style effect and structure maintenance, which proves the effectiveness of our method.
Quantitative Comparisons. The quantitative results of the ablation experiments are shown in Table 2. The results of the single-scale architecture are the worst, and SSIM (Wang et al. 2004) and FSIM (Zhang et al. 2011) are improved by adding FDA or WSE to the network; the best single-scale results are obtained by adding both modules. The baseline results of the multi-scale architecture outperform the best single-scale results in terms of SSIM and FID (Heusel et al. 2017), which demonstrates the effectiveness of the proposed multi-scale architecture. The quantitative results are further improved after adding FDA or WSE.
The best results are achieved when all modules are applied, which is consistent with the qualitative experimental results.

Table 2: Quantitative comparison results of ablation studies. S-S represents single-scale and M-S represents multi-scale.

S-S  M-S  FDA  WSE    SSIM↑  FSIM↑  FID↓
 ✓                    0.54   0.80   54.70
 ✓         ✓          0.67   0.85   49.50
 ✓              ✓     0.64   0.84   48.26
 ✓         ✓    ✓     0.70   0.86   47.46
      ✓               0.78   0.85   46.68
      ✓    ✓          0.80   0.87   46.44
      ✓         ✓     0.83   0.88   46.99
      ✓    ✓    ✓     0.87   0.91   46.15

Figure 6: Comparison of existing methods on the BDD Sunny → Cloudy task.
Figure 7: Comparison of existing methods on the GTA5 → Cityscapes task.
Figure 8: Qualitative comparison results of ablation studies. (a) Single-scale. (b) Single-scale+FDA. (c) Single-scale+WSE. (d) Single-scale+FDA+WSE. (e) Multi-scale. (f) Multi-scale+FDA. (g) Multi-scale+WSE. (h) Multi-scale+FDA+WSE.

Conclusion and Future Work
In this work, we introduce an unsupervised image translation method with structural enhancement in the frequency domain named SEIT. It is built on the GAN framework with the proposed FDA and WSE modules. We take advantage of image features in the frequency domain to preserve the source content during style conversion and to further enhance the structural details. The multi-scale architecture minimizes information loss during translation, and domain-specific features are extracted using image-independent encoders for the source and target domains. Our method outperforms existing methods both qualitatively and quantitatively. Thanks to the effective decoupling of content and style, our method can be extended to multi-modal image translation tasks to explore its performance on more domains.

Acknowledgements
This work was supported by the National Key Research and Development Project of New Generation Artificial Intelligence of China under Grant 2018AAA0102504, and the Key R&D Plan of Shaanxi Province under Grant 2022GY-080.

References
Benaim, S.; and Wolf, L. 2017. One-sided unsupervised domain mapping. Advances in Neural Information Processing Systems, 30.
Chen, R.; Huang, W.; Huang, B.; Sun, F.; and Fang, B. 2020. Reusing discriminators for encoding: Towards unsupervised image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8168–8177.
Chen, X.; Pan, J.; Jiang, K.; Li, Y.; Huang, Y.; Kong, C.; Dai, L.; and Fan, Z. 2022. Unpaired deep image deraining using dual contrastive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2017–2026.
Cordts, M.; Omran, M.; Ramos, S.; Scharwächter, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2015. The Cityscapes dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshop on the Future of Datasets in Vision, volume 2.
Fu, H.; Gong, M.; Wang, C.; Batmanghelich, K.; Zhang, K.; and Tao, D. 2019. Geometry-consistent generative adversarial networks for one-sided unsupervised domain mapping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2427–2436.
Fu, M.; Liu, H.; Yu, Y.; Chen, J.; and Wang, K. 2021. DW-GAN: A discrete wavelet transform GAN for nonhomogeneous dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 203–212.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144.
Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30.
Huang, X.; Liu, M.-Y.; Belongie, S.; and Kautz, J. 2018. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision, 172–189.
Jiang, L.; Zhang, C.; Huang, M.; Liu, C.; Shi, J.; and Loy, C. C. 2020. TSIT: A simple and versatile framework for image-to-image translation. In Proceedings of the European Conference on Computer Vision, 206–222. Springer.
Johnson, J.; Alahi, A.; and Fei-Fei, L. 2016. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, 694–711. Springer.
Lee, H.-Y.; Tseng, H.-Y.; Huang, J.-B.; Singh, M.; and Yang, M.-H. 2018. Diverse image-to-image translation via disentangled representations. In Proceedings of the European Conference on Computer Vision, 35–51.
Liang, J.; Zeng, H.; and Zhang, L. 2021. High-resolution photorealistic image translation in real-time: A Laplacian pyramid translation network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9392–9400.
Park, T.; Efros, A. A.; Zhang, R.; and Zhu, J.-Y. 2020. Contrastive learning for unpaired image-to-image translation. In Proceedings of the European Conference on Computer Vision, 319–345. Springer.
Richter, S. R.; Vineet, V.; Roth, S.; and Koltun, V. 2016. Playing for data: Ground truth from computer games. In Proceedings of the European Conference on Computer Vision, 102–118. Springer.
Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; and Lopez, A. M. 2016. The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3234–3243.
Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Theiss, J.; Leverett, J.; Kim, D.; and Prakash, A. 2022. Unpaired image translation via vector symbolic architectures. In Proceedings of the European Conference on Computer Vision, 17–32. Springer.
Wang, Z.; Bovik, A.; Sheikh, H.; and Simoncelli, E. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600–612.
Yoo, J.; Uh, Y.; Chun, S.; Kang, B.; and Ha, J.-W. 2019. Photorealistic style transfer via wavelet transforms. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9036–9045.
Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; and Darrell, T. 2020. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2636–2645.
Zhang, L.; Zhang, L.; Mou, X.; and Zhang, D. 2011. FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8): 2378–2386.
Zheng, C.; Cham, T.-J.; and Cai, J. 2021. The spatially-correlative loss for various image translation tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16407–16417.
Zhu, J.-Y.; Park, T.; Isola, P.; and Efros, A. A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2223–2232.
Zou, W.; Jiang, M.; Zhang, Y.; Chen, L.; Lu, Z.; and Wu, Y. 2021. SDWNet: A straight dilated network with wavelet transformation for image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1895–1904.
2024
869
18,705
Image Safeguarding: Reasoning with Conditional Vision Language Model and Obfuscating Unsafe Content Counterfactually
Mazal Bethany1,2,*, Brandon Wherry1,2,*, Nishant Vishwamitra1, Peyman Najafirad1,2,†
1University of Texas at San Antonio
2Secure AI and Autonomy Lab
{mazal.bethany, brandon.wherry, nishant.vishwamitra, peyman.najafirad}@utsa.edu
*These authors contributed equally.
†Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Social media platforms are being increasingly used by malicious actors to share unsafe content, such as images depicting sexual activity, cyberbullying, and self-harm. Consequently, major platforms use artificial intelligence (AI) and human moderation to obfuscate such images to make them safer. Two critical needs for obfuscating unsafe images are that an accurate rationale for obfuscating image regions must be provided, and that the sensitive regions should be obfuscated (e.g., by blurring) for users' safety. This process involves addressing two key problems: (1) the reason for obfuscating unsafe images demands that the platform provide an accurate rationale grounded in unsafe image-specific attributes, and (2) the unsafe regions in the image must be minimally obfuscated while still depicting the safe regions. In this work, we address these key issues by first designing a vision language model (VLM) conditioned on pre-trained unsafe image classifiers to provide an accurate rationale grounded in unsafe image attributes. We then propose a counterfactual explanation algorithm that minimally identifies and obfuscates unsafe regions for safe viewing: it uses the attribution matrix of an unsafe image classifier to guide segmentation toward a more optimal subregion segmentation, followed by an informed greedy search that uses the attribution scores to determine the minimum number of subregions required to change the classifier's output. Extensive experiments on uncurated data from social networks emphasize the efficacy of our proposed method. We make our code available at: https://github.com/SecureAIAutonomyLab/ConditionalVLM

Introduction
Social media is increasingly misused by bad actors to share sexually explicit, cyberbullying, and self-harm content (Hendricks 2021; Chelmis and Yao 2019; Adler and Chenoa Cooper 2022). However, social media platforms are required by law to safeguard their users against such images (Exon 1996), as well as to provide a rationale for why such images are flagged (Cabral et al. 2021), for the purpose of transparency. In response, major platforms have deployed AI- and human-based content moderation techniques to flag and obfuscate (i.e., make safer by blurring sensitive regions) such images (Bethany et al. 2023). This process involves obfuscating (e.g., by blurring or blocking) unsafe regions in the image (Li et al. 2017), along with generating a rationale that backs up the decision to obfuscate the flagged images (Meta 2022).
The image obfuscation process faces two critical problems regarding how much of the unsafe image is obfuscated and why: First, the decision to deem an image unsafe and obfuscate it demands a rationale. For example, Instagram moderators are required to provide a legal rationale (Bronstein 2021; Are 2020) to back up their decision (Tenbarge 2023).
Existing visual reasoning methods (Li et al. 2022, 2023; Dai et al. 2023) are severely limited for unsafe images such as sexually explicit, cyberbullying, and self-harm images, since they cannot provide a rationale grounded in attributes that are specific to such images, such as rude hand gestures in cyberbullying images (Vishwamitra et al. 2021) or sensitive body parts in sexually explicit images (Binder 2019). Second, the unsafe image needs minimal obfuscation while still depicting the safe regions for evidence collection and investigation (Billy Perrigo 2019). For instance, human moderators need to determine the age of the person in the image (e.g., in child sexual abuse material (CSAM) investigations), look for identifiers (e.g., tattoos, scars, and unique birthmarks), and determine location information (e.g., landmarks, geographical features, and recognizable surroundings). Current segmentation techniques (Chandrasekaran et al. 2021; Vermeire et al. 2022; Bethany et al. 2023) cannot minimally identify the unsafe regions and consequently impede investigations that pertinently need full details of the remaining safe regions.
In this work, we take the first step towards addressing a pertinent but overlooked problem of the image moderation process on social media platforms. Our main objective is to first identify and minimally obfuscate the sensitive regions in an unsafe image, such that the safe regions are unaltered to aid an investigation, and then provide an accurate rationale for doing so that is grounded in unsafe image attributes (e.g., private body parts, rude gestures, or hateful symbols). We address this problem in two steps: (1) we develop a novel unsafe image rationale generation method called ConditionalVLM (i.e., conditional vision language model) that leverages state-of-the-art large language model (LLM)-based vision language models (Fang et al. 2023) to perform an in-depth conditional inspection and generate an accurate rationale grounded in unsafe image attributes; and (2) we minimally obfuscate only the sensitive regions by computing the classifier attribution matrix with a FullGrad-based model (Srinivas and Fleuret 2019) and using it to guide Bayesian superpixel segmentation (Uziel, Ronen, and Freifeld 2019), yielding a more informed and optimal dynamic subregion segmentation via the attribution score of each subregion. Finally, we use a discrete optimization technique, an informed greedy search over the attribution scores, to determine the minimum number of subregions required to modify the classifier's output.
Our work has profound implications for the safety of social media content moderators, by greatly reducing their need to view unsafe content (Steiger et al. 2021); for social media users who are minors or sensitive to such content (Hargrave and Livingstone 2009); and for law enforcement agents who need to examine such images as part of their investigations (Krause 2009). We make the following contributions:
• We develop ConditionalVLM, a visual reasoning model that generates accurate rationales for unsafe images by leveraging state-of-the-art VLMs conditioned on pre-trained unsafe image classifiers.
• We develop a novel unsafe image content obfuscation algorithm that minimally obfuscates only the unsafe regions while keeping the rest of the image unaltered for investigations.
• Evaluations show that our work categorizes the three categories of unsafe social media images with an accuracy of 93.9%, and minimally segments only the unsafe regions with an accuracy of 81.8%.

Related Works
Safeguarding Images
Social media platforms are frequently misused for sharing various forms of unsafe content, including sexually explicit images (Ashurst and McAlinden 2015; Sanchez et al. 2019), non-consensual intimate images (NCII) (Lenhart, Ybarra, and Price-Feeney 2016), and child sexual abuse material (CSAM) (Sanchez et al. 2019). These platforms also contribute to the spread of cyberbullying (Vishwamitra et al. 2021) and self-harm images, which pose significant risks (John et al. 2018). The traditional blurring approach in image moderation has wide-ranging implications. Over a million global moderators face mental health risks from viewing such content (bbc 2021; reu 2021). Additionally, minors require image safeguarding to shield them from exposure to harmful content, while law enforcement agents need crime scene images analyzed with minimal obfuscation to preserve crucial investigative details.

Vision-Language Models
Pre-trained models in computer vision (CV) and natural language processing (NLP) have led to the development of large-scale Vision-Language Models (VLMs). Methods like CLIP (Radford et al. 2021) and BEIT-3 (Wang et al. 2023) integrate image-text pairs, with CLIP using contrastive training and BEIT-3 employing multiway transformers for masked modeling. Modular approaches also exist, leveraging established models for image and text interpretation. However, these models face challenges in effectively coordinating visual and textual features. For instance, Flamingo (Alayrac et al. 2022) and BLIP-2 (Li et al. 2023) address this by adding cross-attention layers or querying transformers, while LENS (Berrios et al. 2023) develops visual vocabularies without additional training. A common limitation is the lack of conditioning capability, which is crucial for domain-specific attributes (Ramesh et al. 2022).

Image Segmentation and Counterfactual Explanation for Obfuscation
Another type of explanation that is growing in popularity due to its ability to address several of these issues is the counterfactual explanation (Wachter, Mittelstadt, and Russell 2017). A counterfactual explanation takes the form: a decision y was produced because variable X had values (v1, v2, ...) associated with it; if X instead had values (v1', v2', ...) and all other variables had remained constant, score y' would have been produced. Works such as BEN (Chandrasekaran et al. 2021), SEDC (Vermeire et al. 2022), and CSRA (Bethany et al. 2023) have explored region-based counterfactual visual explanations. However, existing approaches face two key challenges: (1) suboptimal subregion boundaries, which cause excessive parts of the image to be identified as causing a decision, and (2) a high time complexity of 2^K when searching for a counterfactual in an image with K regions. BEN and SEDC segment an input image into K static subregions without any prior knowledge of the classifier, resulting in an uninformed search strategy for finding counterfactual examples.
While CSRA does use prior knowledge of the classifier to inform the search for the counterfactual example, BEN, SEDC, and CSRA do not jointly optimize the subregion boundaries and minimize the number of subregions, which is particularly important for obfuscation applications where preserving as much context as possible is preferred.

Method
Figure 1 illustrates the architecture of our proposed approach, which consists of two modules. The first module proposes a conditional vision language model designed for image reasoning. The model classifies images as safe or unsafe by understanding the interactions or activities of the entities within the image, using its comprehension of visual features and linguistic annotations. In the second module, counterfactual visual explanations are proposed to precisely identify the sub-object regions of the image contributing to its unsafe classification for obfuscation.

Figure 1: Overview of the proposed architecture. The initial module utilizes ConditionalVLM for classifying images as safe or unsafe, while the subsequent module proposes counterfactual visual explanations to identify and obfuscate the unsafe regions within the image.

Conditional Vision-Language Model
We introduce a framework that synergistically combines the strengths of large language models (LLMs) with the specific requirements of large image encoders. Additionally, it provides more explicit control over the visual features being reasoned about. The ConditionalVLM architecture is anchored by three pivotal components, as depicted in Figure 1:
A Large Pre-trained Image Encoder takes an image X as input and outputs a visual embedding representation of the image, Z = g(X). We use the state-of-the-art pre-trained vision transformer ViT-g/14 from EVA (Fang et al. 2023).
A Conditional Image Instruction-guided Transformer (CIIT) employs contrastive language-image pre-training to encode visual data in congruence with a specific language prompt. Additionally, we condition this language prompt using pre-trained unsafe image classifiers. This allows the model to match and parse the unsafe visual embedding effectively, while also providing more explicit control over unsafe visual features (Ramesh et al. 2022). The CIIT utilizes a pre-trained Q-Former model (Li et al. 2023), conditioned on image classifiers as a control code c for unsafe image content such as sexually explicit, cyberbullying, and self-harm material. It comprises:
• A prior p(I | c) that produces the CIIT instruct prompt I conditioned on the control code c.
• A transformer decoder p(L | I, c) that produces the contrastive embedding L conditioned on the instruct prompt I and the control code c.
The transformer decoder allows us to invert images given their CIIT instruct prompt, while the prior allows us to learn a generative model of the image embeddings themselves.
Taking the product of these two components yields a generative model p(L | c) of the embedding L given the control c:

p(L \mid c) = p(L, I \mid c) = p(L \mid I, c) \, p(I \mid c)    (1)

The control code c provides a point of control over the CIIT generation process. The distribution can be decomposed using the chain rule of probability and trained with a loss that takes the control code into account:

p(L \mid c) = \prod_{i=1}^{n} p(L_i \mid L_{<i}, c)    (2)

We train the model with parameters θ to minimize the negative log-likelihood over a dataset D = {X_1, ..., X_n}:

\mathcal{L}(D) = - \sum_{k=1}^{|D|} \log p_\theta(L_i^k \mid L_{<i}, c^k)    (3)

A Pre-trained Large Language Model Decoder takes a text embedding L as input and outputs linguistic sentences derived from the embedding, Text = LLM(L). We choose Vicuna (Vic 2023) as our LLM decoder, which is built upon LLaMA (Touvron et al. 2023) and can perform a wide range of complex linguistic tasks.
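Because Eq. (2) factorizes autoregressively, training under Eq. (3) reduces to token-level cross-entropy. Below is a minimal sketch, where the decoder producing the logits is assumed to already receive the control code c (and instruction I) as conditioning context:

```python
import torch
import torch.nn.functional as F

def conditional_nll(logits, targets):
    """Negative log-likelihood of Eq. (3) for one batch.
    logits:  (B, n, V) next-token scores from the decoder p(L_i | L_<i, c),
             assumed to already be conditioned on control code c / instruction I.
    targets: (B, n) token indices of the contrastive embedding sequence L."""
    # cross_entropy expects (B, V, n), hence the transpose; the sum realizes
    # the outer sum over examples in Eq. (3).
    return F.cross_entropy(logits.transpose(1, 2), targets, reduction="sum")

# Toy check: batch of 2 sequences of length 4 over a vocabulary of 10.
logits = torch.randn(2, 4, 10, requires_grad=True)
targets = torch.randint(0, 10, (2, 4))
conditional_nll(logits, targets).backward()
```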
Counterfactual Subobject Explanations for Obfuscation
To connect region attribution to a counterfactual subobject region explanation of an image, relative to a given machine learning predictive model, we propose a two-phase approach, illustrated in Figure 1. We first partition the image into non-intersecting subobject regions and measure the attribution value of each region using gradient attribution maps (Sections 3.1 and 3.2). A counterfactual analysis of alternate versions of the image then follows, using a greedy search algorithm over the regions with the highest attribution values (Section 3.3).
Subobject Region Partitioning using Adaptive Segmentation. We represent a given image X as a non-intersecting set of K regions given by {z_1, z_2, ..., z_K}. The boundaries of these regions are defined by clustering algorithms that use color and spatial information and are called superpixels. An image must be segmented into meaningful subobject regions to allow a counterfactual analysis of the image by the binary predictive model f(X) → {0, 1}; these regions serve as the features analyzed in the counterfactual analysis. To maximize the efficiency of a counterfactual analysis, we require an adaptive segmentation method: many segmentation methods wastefully assign many segments to uninformative regions while not segmenting detailed regions enough. Such a method should respect pixel connectivity and spatial coherence and should use an adaptive number of regions. K-means-based clustering methods are a fast and simple basis for leading segmentation methods; however, Gaussian mixture models (GMMs) may be better suited for adaptive segmentation, since we need to capture the heterogeneity in the pixel distributions of various types of images.
Let N = h · w be the number of pixels in an image X with c color channels. The values attributed to the pixels in X can be denoted as X_i = (l_i, c_i) ∈ ℝ^5, where l_i ∈ ℝ^2 is the (x, y) coordinate location and c_i ∈ ℝ^3 is the RGB color information. Superpixel clustering methods with spatial coherence aim to partition (X_i)_{i=1}^N into K disjoint groups. Let Z represent the K-region segmentation, z_i the label assigned to X_i, and j the label of some arbitrary cluster. Where N(X; µ_j, Σ_j) is a Gaussian PDF with mean µ_j and an n × n covariance matrix Σ_j, the PDF of a GMM with K components is

p(X; (\mu_j, \Sigma_j, \lambda_j)_{j=1}^{K}) = \sum_{j=1}^{K} \lambda_j \, \mathcal{N}(X \mid \mu_j, \Sigma_j)

The mixing coefficients λ_j in the PDF of a GMM form a convex combination, where

\sum_{j=1}^{K} \lambda_j = 1, \quad \lambda_j \geq 0 \;\; \forall j

and this allows for a globally optimal clustering. Given a Gaussian distribution j with θ_j = (µ_j, Σ_j), a Bayesian GMM has random variables (θ_j)_{j=1}^K and (λ_j)_{j=1}^K drawn from a prior distribution p((θ_j, λ_j)_{j=1}^K). Assuming independence, the prior distribution can be factorized as

p((\theta_j, \lambda_j)_{j=1}^{K}) = p((\lambda_j)_{j=1}^{K}) \prod_{j=1}^{K} p(\theta_j)

Using a Normal-Inverse Wishart (NIW) prior for p(θ_j) and a Dirichlet distribution for p((λ_j)_{j=1}^K) gives posterior distributions of the same form as the priors, with updates available in closed form. Bayesian GMM inference to compute Z can then be done by Gibbs sampling, alternating between

p((\theta_j, \lambda_j)_{j=1}^{K} \mid Z, (X_i)_{i=1}^{N}) \quad \text{and} \quad p(Z \mid (\theta_j, \lambda_j)_{j=1}^{K}, (X_i)_{i=1}^{N})

Subobject Region Attribution Value. We start by creating the FullGrad (Srinivas and Fleuret 2019) attribution map for image feature attribution. Given an image X and the FullGrad feature map L[u, v] of width u and height v for the model prediction, the goal of the visual attention model is to identify the discriminative regions of the image that significantly influence the class prediction score of the predictive model, using the L[u, v] pixel attribution values. The FullGrad attribution map is generated by propagating an image through a CNN, obtaining the output score before the softmax layer, and then computing the gradients with respect to the input (input-gradients) and the biases at each layer (bias-gradients). These gradients are then combined, with each bias-gradient reshaped to match the input dimensionality and all gradients summed to form the FullGrad attribution map.
FullGrad Definition: Consider a CNN model f, with x denoting the input, b the biases at each layer, and c_k the channels of layer k. Given an output of interest f(x) and a post-processing operator ψ(·), the FullGrad attribution map L_{FullGrad} is defined as:

L_{FullGrad} = \psi(\nabla_x f(x) \odot x) + \sum_{k \in K} \sum_{c \in c_k} \psi(f^b(x)_c)

To facilitate an efficient sampling of regions in the counterfactual analysis, we utilize the FullGrad attribution map.
Definition 1 (Subobject Region Attribution Score): Using the attribution map of model f(X) and the subobject regions {z_1, z_2, ..., z_K} created by adaptive segmentation for the input image X, we define the subobject region attribution scores {s_1, s_2, ..., s_K} as follows:

s_k = \frac{1}{n \cdot m} \sum^{n} \sum^{m} L_{FullGrad}(F, X)[i, j], \quad X[i, j] \in z_k
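Definition 1 is a per-region mean of the attribution map. A minimal NumPy sketch follows, assuming the attribution map and segmentation label map have already been computed (e.g., by FullGrad and BASS, respectively):

```python
import numpy as np

def region_attribution_scores(attr_map, seg_labels):
    """Definition 1: mean FullGrad attribution inside each superpixel region.
    attr_map:   (H, W) saliency map; seg_labels: (H, W) integer region labels
    from the adaptive segmentation. Returns region ids and scores, both
    sorted by descending score (the heuristic used by the greedy search)."""
    ids = np.unique(seg_labels)
    scores = np.array([attr_map[seg_labels == k].mean() for k in ids])
    order = np.argsort(-scores)
    return ids[order], scores[order]

# Toy example: a 4x4 map split into a left and a right region.
attr = np.arange(16, dtype=float).reshape(4, 4)
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
print(region_attribution_scores(attr, labels))  # region 1 ranks first
```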
Although feature attributions highlight features that are significant in terms of how they affect the model's prediction, they do not indicate that altering those features would result in a different desired outcome.
Definition 2 (Subobject Region Confidence Reduction): Given a model Y = f(X) that takes an image X with subobject regions X = [z_0, z_1, ..., z_n]^T and outputs a probability distribution Y, the confidence reduction cr_k of subobject region z_k (k ∈ [1, n]) towards Y is the change in the output caused by masking the k-th subobject region of X while the image is still classified as the same class:

cr_k = f(X) - f(X \circ \mathrm{Mask}(z_k))

In Section 3.3, we present our greedy region search algorithm, which uses the subobject region attribution scores as heuristics and employs the confidence level for causal obfuscation using counterfactual subobject region explanations.
Counterfactual Generation Using Informed Subobject Region Search. The previous sections lead us to the minimum region masking problem. This can be computationally expensive to solve, as it requires the masking and analysis of 2^K different combinations of the regions Z of X from Section 3.1. Rather than solving the problem directly, we find an approximate solution using a greedy region search. Given a predictive model f : X → {0, 1} and an input x ∈ X, a counterfactual explanation is an x′ with f(x′) ≠ f(x) that minimizes the distance d(x, x′); that is, among all inputs for which the model returns a prediction different from f(x), we seek the one closest to x. Our greedy region search first sorts the K regions in descending order of the average attribution of each region, calculated as in Definition 1. The search then considers a growing subset of regions: it begins with the top region by average attribution, iteratively expands to the top two regions, and so on, until an x′ is found such that f(x′) ≠ f(x).
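A minimal sketch of this informed greedy search follows; masking is implemented here by zeroing pixels (blurring would work identically), and the classifier is assumed to be any callable returning a label:

```python
import numpy as np

def greedy_counterfactual(image, predict, seg_labels, ranked_ids, max_depth=10):
    """Informed greedy search for x' with f(x') != f(x).
    image:      (H, W, C) array; predict: callable image -> class label.
    seg_labels: (H, W) region label map; ranked_ids: region ids in descending
    attribution order (the search heuristic from the previous sketch)."""
    y = predict(image)
    masked = image.copy()
    used = []
    for k in ranked_ids[:max_depth]:
        masked[seg_labels == k] = 0.0      # obfuscate the next-highest region
        used.append(k)
        if predict(masked) != y:           # decision flipped: counterfactual found
            return masked, used
    return None, used                      # no counterfactual within the budget

# Toy usage: a "classifier" that fires when mean intensity exceeds 0.5.
predict = lambda im: int(im.mean() > 0.5)
img = np.ones((4, 4, 1))
labels = np.repeat(np.arange(4), 4).reshape(4, 4)  # one region per row
cf, used = greedy_counterfactual(img, predict, labels, ranked_ids=[0, 1, 2, 3])
```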
To demonstrate our ConditionalVLM framework, we modify the InstructBLIPVicuna-13b architecture to include a CIIT, which we call ConditionalBLIP. All experiments were carried out on a DGX 8x A100 GPU, with 80GB of VRAM each. We fine-tuned a ResNet-50 classifier available in Pytorch (Paszke et al. 2019) using pre-trained model weights trained from the ImageNet dataset (Deng et al. 2009). The NSFW, cyberbullying and self-harm datasets were each divided into train, validation, and test sets, with 80% being allocated to the train set, and 10% each allocated to validation and test sets. We trained the models for 50 epochs and selected the models that have the highest classification accuracies on the validation sets. These models achieved accuracies of 98.9%, 91.9% and 97.6% respectively on the test set in our experiments. We use these classifiers as the control code for the CIIT in ConditionalBLIP. Counterfactual Subobject Explanations for Obfuscation. To test different segmentation methods, we experimented with SLIC (Achanta et al. 2010), Felzenszwalb (Felzenszwalb and Huttenlocher 2004), and Compact Watershed (Neubert and Protzel 2014) segmentation methods implemented in the scikit-image library (van der Walt et al. 2014), Segment Anything Model (SAM) (Kirillov et al. 2023), and Bayesian Adaptive Superpixel Segmentation (Uziel, Ronen, and Freifeld 2019). For our experiments, we selected the following parameters for each segmentation method: for SLIC, we chose the number of segments to be 25 and compactness equal to 1; for Felzenszwalb we selected the scale to be 500, Data Model Human Evaluation ConditionalBLIP 94.5 SE InstructBLIP 80.0 mPLUG 73.5 OFA-Large 13.5 ConditionalBLIP 99.5 CB InstructBLIP 23.0 mPLUG 16.0 OFA-Large 10.0 ConditionalBLIP 99.0 SH InstructBLIP 47.0 mPLUG 51.5 OFA-Large 17.0 Table 1: Quantitative results of VLM on coarse-grained image descriptions. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 778 Data Questions InstructBLIP ConditionalBLIP Does the image contain any sexual content or nudity? 61.1 99.2 SE Does the image contain explicit sexual content? 37.9 99.9 Does the image contain potentially offensive gestures or symbols? 59.1 80.0 Does the image contain any weapons? 99.2 94.0 CB Is there any aggressive behavior shown in the image? 56.4 98.3 Is there any text in the image that might be considered hate speech? 57.2 95.2 Are there any depictions of self-harm in the image? 27.4 98.9 SH Are there visible signs of self-harm through physical trauma? 74.2 99.2 Does the image contain symbols associated with self-destructive behavior? 30.2 81.4 Table 2: Quantitative results of VLM on fine-grained moderator questions. sigma to be 0.5, and a minimum component size of 200; for Compact Watershed, we chose the number of markers to be 25 and the compactness parameter to be 0.001. We used the following attribution map methods in our experiments: (Grad-CAM (Selvaraju et al. 2017), XGradCAM (Fu et al. 2020), Grad-CAM ++ (Chattopadhay et al. 2018), FullGrad (Srinivas and Fleuret 2019), and AblationCAM (Ramaswamy et al. 2020)). For the implementation of the attribution map methods, we use the Pytorch Grad-CAM library (Gildenblat and contributors 2021). Evaluation Metrics ConditionalVLM. We evaluate VLM’s ability to investigate three different unsafe image categories in two phases. 
Evaluation Metrics
ConditionalVLM. We evaluate the VLM's ability to investigate three different unsafe image categories in two phases. In the first phase, we conduct a coarse-grained evaluation by having human evaluators determine, based on the image descriptions produced by the VLM, whether a moderator would be able to understand which dataset of unsafe images an image belongs to. In this evaluation, a team of three human evaluators who were involved in this research were asked to judge whether the descriptions produced by the VLM for the questions "What is happening in the image?" and "What are the people doing?" were sufficient to accurately categorize the image into the correct unsafe image dataset. The final labels were assigned by majority voting. In the second phase, we conduct a fine-grained evaluation by having human evaluators assess the responses of the VLM to curated moderator questions about an unsafe image. These fine-grained questions ask about specific attributes of images relating to the unsafe image categories. In this evaluation, the same team of evaluators was asked to determine whether the answers produced by the VLM correctly answered these curated questions.

Counterfactual Subobject Explanations for Obfuscation. We investigate the ability of CSE to generate a successful counterfactual explanation for an unsafe image X that satisfies two requirements: (1) the generated counterfactual example X' must be a convincing representation of another class, i.e., it must have a softmax score greater than a threshold T on another class, and (2) the counterfactual example X' must be found by searching N or fewer different regions. Since there are 2^K different combinations of regions to analyze in X with K regions, we limit the search space to a certain number of regions in our evaluation. In our experiments on unsafe images, we set the softmax score threshold T to 0.5 and the region threshold to 10.
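The two requirements translate directly into a small check, sketched below under stated assumptions: `model` returns softmax scores, and `regions_searched` is the number of regions the greedy search expanded. All names are illustrative, not the authors' code.

```python
import numpy as np

def cse_success(model, counterfactual, orig_class, regions_searched,
                T=0.5, max_regions=10):
    """Check the two CSE success requirements stated above: (1) the
    obfuscated image scores above T on some other class, and (2) it was
    found within the region search budget."""
    probs = np.asarray(model(counterfactual))          # softmax over classes
    best_other = np.max(np.delete(probs, orig_class))  # best non-original score
    return best_other > T and regions_searched <= max_regions
```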
Results and Discussion

ConditionalVLM. The results of the coarse-grained evaluation of the VLM are shown in Table 1. In this table, we present the accuracy of four image-to-text models, including our model, ConditionalBLIP, focusing on their ability to identify unsafe attributes in images based on generic questions. In this experiment, a total of 2,000 unsafe image samples from each category of unsafe image datasets were tested. The results show that ConditionalBLIP significantly outperforms the other state-of-the-art models in identifying the unsafe attributes of unsafe images simply from generic questions about the image, with an average correct identification accuracy of 98% across the three datasets. Compared to the 50% accuracy of InstructBLIP, 47% of mPLUG, and 13.5% of OFA-Large, we observe that existing models are insufficient for describing unsafe images.

Data  Model            Human Evaluation
SE    ConditionalBLIP  94.5
      InstructBLIP     80.0
      mPLUG            73.5
      OFA-Large        13.5
CB    ConditionalBLIP  99.5
      InstructBLIP     23.0
      mPLUG            16.0
      OFA-Large        10.0
SH    ConditionalBLIP  99.0
      InstructBLIP     47.0
      mPLUG            51.5
      OFA-Large        17.0

Table 1: Quantitative results of VLM on coarse-grained image descriptions.

We present the questions and quantitative results of the fine-grained evaluation of ConditionalBLIP in Table 2. We compare ConditionalBLIP against InstructBLIP, which showed the best coarse-grained results among the other methods evaluated in Table 1. Furthermore, the InstructBLIP model is the most similar in implementation to the ConditionalBLIP model, the primary difference being the use of the CIIT in ConditionalBLIP. In Table 2, we present the questions posed to the VLM alongside the detection accuracy of InstructBLIP and ConditionalBLIP on these questions.

Data  Question                                                                    InstructBLIP  ConditionalBLIP
SE    Does the image contain any sexual content or nudity?                        61.1          99.2
      Does the image contain explicit sexual content?                             37.9          99.9
      Does the image contain potentially offensive gestures or symbols?           59.1          80.0
CB    Does the image contain any weapons?                                         99.2          94.0
      Is there any aggressive behavior shown in the image?                        56.4          98.3
      Is there any text in the image that might be considered hate speech?        57.2          95.2
SH    Are there any depictions of self-harm in the image?                         27.4          98.9
      Are there visible signs of self-harm through physical trauma?               74.2          99.2
      Does the image contain symbols associated with self-destructive behavior?   30.2          81.4

Table 2: Quantitative results of VLM on fine-grained moderator questions.

The fine-grained evaluation shows that image conditioning significantly enhances the VLM's ability to understand unsafe images, with an average improvement in accuracy of 38.2% across the questions. The comparison between the performances of InstructBLIP and ConditionalBLIP reveals significant differences in their respective abilities to identify and describe unsafe content in visual data. By employing contrastive language-image pre-training and conditioning the language prompt using pre-trained unsafe image classifiers, ConditionalBLIP is able to parse the unsafe visual embedding effectively.

Counterfactual Subobject Explanations for Obfuscation. For the counterfactual image obfuscation experiments, we test on 585 sexually explicit, cyberbullying, and self-harm images. We compare our method against the CSRA method, setting numROI = 10 to match time complexity. Previous work showed that gradient-based attribution maps are unsuitable for obfuscating unsafe images (Bethany et al. 2023). Our trained models show improvements of 13.9% on sexually explicit, 22.0% on cyberbullying, and 39.5% on self-harm images when comparing CSRA against CSE.

We tested various attribution map methods with BASS (Uziel, Ronen, and Freifeld 2019) as the constant segmentation method on unsafe image samples, with results in Table 3. The average search space required to find a counterfactual example is also reported, showing that the choice of attribution map method does not significantly impact CSE, with most methods generating similar highest average attribution scores in similar areas. The exception was the FullGrad method, which provided slightly more successful counterfactual examples, a better average search space, and fewer obfuscated regions. This can be attributed to FullGrad's more dispersed attributions across the image, which does not restrict the search space as much, and to its distinctive way of satisfying local and global importance by aggregating information from both input gradients and intermediate bias gradients, thus aiding CSE in finding suitable counterfactual explanations more readily.

Data  Attr Map      CF    Avg Depth  Avg Obf
SE    FullGrad      90.6  5.8        35.0
      Ablation-CAM  90.6  5.8        35.2
      Grad-CAM      90.6  5.8        35.2
      Grad-CAM++    90.6  5.8        35.2
      XGrad-CAM     90.6  5.8        35.2
CB    FullGrad      82.0  5.2        35.2
      Ablation-CAM  79.5  5.1        34.2
      Grad-CAM      79.5  5.1        34.2
      Grad-CAM++    79.5  5.1        34.2
      XGrad-CAM     79.5  5.1        34.2
SH    FullGrad      72.8  5.6        50.1
      Ablation-CAM  72.8  5.6        50.1
      Grad-CAM      72.8  5.6        50.1
      Grad-CAM++    72.8  5.6        50.1
      XGrad-CAM     72.8  5.6        50.1

Table 3: Quantitative results of CSE using different attribution map methods (CF: successful counterfactuals in %; Avg Depth: average search depth; Avg Obf: average % of the image obfuscated).

We tested different segmentation methods with FullGrad as the constant attribution map method on unsafe image samples; the results are in Table 4. The choice of segmentation method significantly impacted the number of successful counterfactual explanations, the average search space, and the average number of regions obfuscated.

Data  Segmentation  CF    Avg Depth  Avg Obf
SE    BASS          90.6  5.8        35.0
      SLIC          76.6  7.6        33.0
      Felzenszwalb  19.9  7.5        12.2
      Watershed     51.2  7.9        31.9
      SAM           29.5  7.4        33.2
CB    BASS          82.0  5.2        35.2
      SLIC          60.0  6.3        25.9
      Felzenszwalb  20.5  6.3        17.6
      Watershed     50.0  6.6        23.9
      SAM           50.0  6.6        40.2
SH    BASS          72.8  5.6        50.1
      SLIC          33.4  6.6        26.3
      Felzenszwalb  38.4  6.5        47.5
      Watershed     33.1  6.8        24.6
      SAM           39.5  6.2        70.6

Table 4: Quantitative results of CSE on different segmentation methods (columns as in Table 3).

BASS was the most effective, with the combination of BASS and FullGrad yielding 81.8% successful counterfactual examples, a search
depth of 5.5, and an average of 40.1% of the image obfuscated. The segmentation's effect on counterfactual examples can be seen in Figure 2, and as Table 4 showed, methods like BASS are key for successful counterfactual explanations, as they break the image into non-intersecting, color- and spatially-coherent subobjects.

Ablation Study. To evaluate our vision-language model's conditioning, we conducted an ablation study by switching the unsafe-classifier guidance on the Image Instruction-guided Transformer (CIIT) model's instruct prompt embedding from 1 to 0. Conditioning on the instruct embeddings yielded acceptable zero-shot results for unsafe images by allowing the CIIT to match and parse the unsafe visual embedding effectively, while also providing more explicit control over the correlation of unsafe visual features with the conditioned instruct prompt. For instance, the LLM decoder's output for an unsafe image changed to "women are performing a potentially erotic dance in a bar" vs. "women dancing in a bar". These results suggest that conditioning is a promising approach for vision-language models.

Conclusion

In this work, we have presented ConditionalVLM, a visual reasoning framework that generates accurate rationales for unsafe image descriptions by leveraging state-of-the-art VLMs conditioned on pre-trained unsafe image classifiers, and CSE, a counterfactual visual explanation technique that obfuscates the unsafe regions of unsafe images for safer sharing. We evaluated these two methods on three categories of unsafe images. An implementation of ConditionalVLM, which we call ConditionalBLIP, showed superior performance compared to other state-of-the-art image-to-text models in describing unsafe images. We also compared CSE against another recent unsafe image obfuscation method and showed that our approach is effective in generating causal explanations for obfuscating unsafe images.

Acknowledgments

This research project and the preparation of this publication were funded in part by the Department of Homeland Security (DHS), United States Secret Service, National Computer Forensics Institute (NCFI) via contract number 70US0920D70090004 and by NSF Grant No. 2245983.

References

2021. Facebook moderator: 'Every day was a nightmare'. https://www.bbc.com/news/technology-57088382. Accessed: July 14, 2023.
2021. Judge OKs $85 mln settlement of Facebook moderators' PTSD claims. https://www.reuters.com/legal/transactional/judge-oks-85-mln-settlement-facebook-moderators-ptsd-claims-2021-07-23/. Accessed: July 20, 2023.
2023. Vicuna. https://github.com/lm-sys/FastChat. Accessed: July 17, 2023.
Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; and Süsstrunk, S. 2010. SLIC superpixels. Technical report.
Adler, R. A.; and Chenoa Cooper, S. 2022. "When a Tornado Hits Your Life:" Exploring Cyber Sexual Abuse Survivors' Perspectives on Recovery. Journal of Counseling Sexology & Sexual Wellness: Research, Practice, and Education, 4(1): 1-8.
Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35: 23716-23736.
Are, C. 2020. How Instagram's algorithm is censoring women and vulnerable users but helping online abusers. Feminist Media Studies, 20(5): 741-744.
Ashurst, L.; and McAlinden, A.-M. 2015.
Young people, peer-to-peer grooming and sexual offending: Understanding and responding to harmful sexual behaviour within a social media society. Probation Journal, 62(4): 374–388. Berrios, W.; Mittal, G.; Thrush, T.; Kiela, D.; and Singh, A. 2023. Towards Language Models That Can See: Computer Vision Through the LENS of Natural Language. arXiv preprint arXiv:2306.16410. Bethany, M.; Seong, A.; Silva, S. H.; Beebe, N.; Vishwamitra, N.; and Najafirad, P. 2023. Towards targeted obfuscation of adversarial unsafe images using reconstruction and counterfactual super region attribution explainability. In 32nd USENIX Security Symposium (USENIX Security 23), 643– 660. Billy Perrigo. 2019. Facebook Says It’s Removing More Hate Speech Than Ever Before. But There’s a Catch. Binder, M. 2019. Facebook claims its new AI technology can automatically detect revenge porn. https://mashable. com/article/facebook-ai-tool-revenge-porn. Accessed: July 17, 2023. Bronstein, C. 2021. Deplatforming sexual speech in the age of FOSTA/SESTA. Porn Studies, 8(4): 367–380. Cabral, L.; Haucap, J.; Parker, G.; Petropoulos, G.; Valletti, T. M.; and Van Alstyne, M. W. 2021. The EU digital markets act: a report from a panel of economic experts. Cabral, L., Haucap, J., Parker, G., Petropoulos, G., Valletti, T., and Van Alstyne, M., The EU Digital Markets Act, Publications Office of the European Union, Luxembourg. Chandrasekaran, J.; Lei, Y.; Kacker, R.; and Kuhn, D. R. 2021. A combinatorial approach to explaining image classifiers. In 2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 35–43. IEEE. Chattopadhay, A.; Sarkar, A.; Howlader, P.; and Balasubramanian, V. N. 2018. Grad-cam++: Generalized gradientbased visual explanations for deep convolutional networks. In 2018 IEEE winter conference on applications of computer vision (WACV), 839–847. IEEE. Chelmis, C.; and Yao, M. 2019. Minority report: Cyberbullying prediction on Instagram. In Proceedings of the 10th ACM conference on web science, 37–45. Dai, W.; Li, J.; Li, D.; Tiong, A. M. H.; Zhao, J.; Wang, W.; Li, B.; Fung, P.; and Hoi, S. 2023. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. arXiv:2305.06500. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Exon, J. 1996. The Communications Decency Act. Federal Communications Law Journal, 49(1): 4. Fang, Y.; Wang, W.; Xie, B.; Sun, Q.; Wu, L.; Wang, X.; Huang, T.; Wang, X.; and Cao, Y. 2023. Eva: Exploring the limits of masked visual representation learning at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19358–19369. Felzenszwalb, P. F.; and Huttenlocher, D. P. 2004. Efficient graph-based image segmentation. International journal of computer vision, 59(2): 167–181. Fu, R.; Hu, Q.; Dong, X.; Guo, Y.; Gao, Y.; and Li, B. 2020. Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs. In BMVC. Gildenblat, J.; and contributors. 2021. PyTorch library for CAM methods. https://github.com/jacobgil/pytorch-gradcam. Accessed: July 17, 2023. Hargrave, A. M.; and Livingstone, S. M. 2009. Harm and offence in media content: A review of the evidence. Hendricks, T. 2021. Cyberbullying increased 70% during the pandemic; Arizona schools are taking action. 
https: //www.12news.com/article/news/crime/cyberbullyingincreased-70-during-the-pandemic-arizona-schools-aretaking-action/75-fadf8d2c-cf11-43f0-b074-5de485a3247d. Accessed: July 17, 2023. John, A.; Glendenning, A. C.; Marchant, A.; Montgomery, P.; Stewart, A.; Wood, S.; Lloyd, K.; Hawton, K.; et al. 2018. Self-harm, suicidal behaviours, and cyberbullying in children and young people: Systematic review. Journal of Medical Internet Research, 20(4). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 781 Kim, A. 2021. NSFW Data Scraper. https://github.com/ alex000kim/nsfw data scraper. Accessed: August 2, 2022. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643. Krause, M. 2009. Identifying and managing stress in child pornography and child exploitation investigators. Journal of Police and Criminal Psychology, 24(1): 22–29. Lenhart, A.; Ybarra, M.; and Price-Feeney, M. 2016. Nonconsensual image sharing: one in 25 Americans has been a victim of” revenge porn”. Li, C.; Xu, H.; Tian, J.; Wang, W.; Yan, M.; Bi, B.; Ye, J.; Chen, H.; Xu, G.; Cao, Z.; et al. 2022. mPLUG: Effective and Efficient Vision-Language Learning by Crossmodal Skip-connections. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 7241–7259. Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Li, Y.; Vishwamitra, N.; Knijnenburg, B. P.; Hu, H.; and Caine, K. 2017. Effectiveness and users’ experience of obfuscation as a privacy-enhancing technology for sharing photos. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW): 1–24. Meta. 2022. Appealed Content. https://transparency.fb.com/policies/improving/appealedcontent-metric/. Accessed: July 14, 2023. Neubert, P.; and Protzel, P. 2014. Compact watershed and preemptive slic: On improving trade-offs of superpixel segmentation algorithms. In 2014 22nd international conference on pattern recognition, 996–1001. IEEE. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; Kopf, A.; Yang, E.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; and Chintala, S. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alch´e-Buc, F.; Fox, E.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32, 8024–8035. Curran Associates, Inc. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Ramaswamy, H. G.; et al. 2020. Ablation-cam: Visual explanations for deep convolutional network via gradient-free localization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 983–991. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Sanchez, L.; Grajeda, C.; Baggili, I.; and Hall, C. 2019. 
A practitioner survey exploring the value of forensic tools, AI, filtering, & safer presentation for investigating child sexual abuse material (CSAM). Digital Investigation, 29. Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, 618–626. Srinivas, S.; and Fleuret, F. 2019. Full-gradient representation for neural network visualization. Advances in neural information processing systems, 32. Steiger, M.; Bharucha, T. J.; Venkatagiri, S.; Riedl, M. J.; and Lease, M. 2021. The psychological well-being of content moderators: the emotional labor of commercial moderation and avenues for improving support. In Proceedings of the 2021 CHI conference on human factors in computing systems, 1–14. Tenbarge, K. 2023. Instagram’s sex censorship sweeps up educators, adult stars and sex workers. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozi`ere, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Uziel, R.; Ronen, M.; and Freifeld, O. 2019. Bayesian adaptive superpixel segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8470–8479. van der Walt, S.; Sch¨onberger, J. L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J. D.; Yager, N.; Gouillart, E.; Yu, T.; and the scikit-image contributors. 2014. scikit-image: image processing in Python. PeerJ, 2: e453. Vermeire, T.; Brughmans, D.; Goethals, S.; de Oliveira, R. M. B.; and Martens, D. 2022. Explainable image classification with evidence counterfactual. Pattern Analysis and Applications, 25(2): 315–335. Vishwamitra, N.; Hu, H.; Luo, F.; and Cheng, L. 2021. Towards Understanding and Detecting Cyberbullying in Realworld Images. In 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA). Wachter, S.; Mittelstadt, B.; and Russell, C. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech., 31: 841. Wang, P.; Yang, A.; Men, R.; Lin, J.; Bai, S.; Li, Z.; Ma, J.; Zhou, C.; Zhou, J.; and Yang, H. 2022. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-tosequence learning framework. In International Conference on Machine Learning, 23318–23340. PMLR. Wang, W.; Bao, H.; Dong, L.; Bjorck, J.; Peng, Z.; Liu, Q.; Aggarwal, K.; Mohammed, O. K.; Singhal, S.; Som, S.; et al. 2023. Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19175–19186. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 782
A Pre-convolved Representation for Plug-and-Play Neural Illumination Fields

Yiyu Zhuang1*, Qi Zhang2*, Xuan Wang3, Hao Zhu1, Ying Feng2, Xiaoyu Li2, Ying Shan2, Xun Cao1
1Nanjing University, Nanjing, China 2Tencent AI Lab, Shenzhen, China 3Ant Group, Hangzhou, China
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
*Both authors contributed equally to this work.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Recent advances in implicit neural representation have demonstrated the ability to recover detailed geometry and material from multi-view images. However, using simplified lighting models such as environment maps to represent non-distant illumination, or using a network to fit indirect light without a solid physical basis, can lead to an undesirable decomposition between lighting and material. To address this, we propose a fully differentiable framework named Neural Illumination Fields (NeIF) that uses radiance fields as a lighting model to handle complex lighting in a physically based way. Together with an integrated lobe encoding for a roughness-adaptive specular lobe, and by leveraging the pre-convolved background for accurate decomposition, the proposed method represents a significant step towards integrating physically based rendering into the NeRF representation. The experiments demonstrate superior novel-view rendering performance compared to previous works, and the capability to re-render objects under arbitrary NeRF-style environments opens up exciting possibilities for bridging the gap between virtual and real-world scenes.

Introduction

Modeling and representing the environment illumination from multi-view images is a fundamental issue that has been studied extensively throughout the development of rendering algorithms (Park, Holynski, and Seitz 2020; Yao et al. 2022; Zhang et al. 2022). This task is inherently related to material decomposition, since the observed scene appearance is affected by the interactions between environment illumination and scene materials. It has drawn significant attention in this era of rapidly growing VR and AR applications, where there is a high demand for photo-realistic rendering of scenes with visually natural illumination in a realistic environment. However, this problem is hard to solve because the environment illumination is high-dimensional and strongly coupled with the materials. Recent methods employ approximate illumination representations (e.g., environment maps (Zhang et al. 2021b) and spherical Gaussian (SG) models (Zhang et al. 2021a; Boss et al. 2021a,b; Zhang et al. 2022)) to simplify the interaction between environment illumination and the object and to reduce computational expense. Unfortunately, the underlying assumption is that the environment illumination of the scene is infinitely far away. Neither environment maps nor SG models take the 3D position of the illumination into account, so they are unable to handle environment occlusion and directional lighting practically. To address this problem, NeILF (Yao et al. 2022) models an incident illumination map for each surface point to handle environment occlusion, indirect light, and directional light. However, the spatial structure of the lighting is ignored, which leads to worse material-lighting ambiguity under complex 3D environments.
A natural follow-up question is: can we find an elegant representation to express complex environment illumination? Recent advances in Neural Radiance Fields (NeRF) and its variants (Mildenhall et al. 2020; Barron et al. 2021; Verbin et al. 2022; Zhang et al. 2021b) have shown great potential to recover underlying scene properties (e.g., geometry, materials, and lighting) from a set of images. NeRF uses a continuous volumetric function to represent the outgoing ray observed by a viewer, and its recent development provides new possibilities for modeling complex environment illumination. To the best of our knowledge, little in the literature has tapped into using volumetric radiance fields to express lighting. By doing so, we could achieve plug-and-play relighting: re-rendering an object with the natural illumination of a NeRF-style real-world environment. However, directly gathering thousands of incoming rays through volume rendering to compute the color of each surface point is computationally expensive and may seem impossible. This paper presents Neural Illumination Fields (NeIF) to express the incoming rays of each surface point as volumetric radiance fields (density and color), which naturally handle environment occlusions and directional lighting, as shown in Fig. 1. We first acquire the object's geometry from the input images using an existing method (Yariv et al. 2020), and then focus solely on the decomposition of environment illumination and object material. Specifically, a pixel's specular color is equivalent to the interaction between the object materials and the integral of incoming rays within the specular lobe, whose size is related to the material roughness. Inspired by the environment convolution maps used in traditional image-based lighting, we consider that rough reflective surfaces are inherently related to the convolved background, although the background is ignored in most methods.

Figure 1: We introduce a 'plug-and-play' Neural Illumination Fields (NeIF) that uses volumetric radiance fields to portray 3D environment samples as light emitters, naturally re-rendering new objects under any NeRF-style environment. (a) Using a set of images and masks, our method first optimizes the geometry and then jointly optimizes the NeIF and the object's materials in two stages. (b) Environment maps at arbitrary samples in the 3D scene, derived through volume rendering, which handles environment occlusions and directional lighting naturally. The proposed method provides photo-realistic novel views (c) and visually reasonable material editing (d). Furthermore, (e) applies our model to a pre-trained NeRF scene and produces realistic specular reflections with convincing directional illumination. Notably, the Fresnel effect on the table (red circle) is exactly reproduced, which is almost impossible for previous illumination representations.
Our contributions are summarized as: • Neural Illumination Fields (NeIF) is proposed to express radiance fields of incoming rays such that treats each sample in the 3D scene as a light emitter. • Fully differentiable rendering pipeline is presented to seamlessly illuminate meshes using a NeRF-style environment. • Integrated Lobe Encoding (ILE) is proposed to featurize incoming rays within roughness-adaptive specular lobe to reduce computational cost. • Multiscale pre-convolved representation for the background is proposed to assist in the decomposition of object materials and environmental illumination. Related Work Illumination Representation. Illumination representation is essential for photo-realistic rendering in various view synthesis and relighting applications (Haber et al. 2009; Xu et al. 2018; Bi et al. 2020a; Li et al. 2020; Boss et al. 2021a; Srinivasan et al. 2021; Zhang et al. 2022; Yao et al. 2022; Boss et al. 2021b; Zhang et al. 2021b,a). Considering that the observed surface appearance is the result of the interactions between ambient illumination and object materials, the ambient illumination is often jointly inferred with object materials from images, also known as inverse rendering (Sato, Wheeler, and Ikeuchi 1997; Marschner 1998; Yu et al. 1999; Ramamoorthi and Hanrahan 2001). Since it is an ill-posed problem, previous methods mitigate this issue by using simplified material models (Zhang et al. 2021a) and varying lighting conditions (Nam et al. 2018; Bi et al. 2020b,a; Yang et al. 2022). This related work specifically focuses on ambient illumination representation techniques. The seminal work (Debevec 1998) proposes an omnidirectional radiance map, also known as environment map, to represent ambient illumination, which can be applied to render novel objects into the scene realistically. Followup methods (Wen, Liu, and Huang 2003; Haber et al. 2009; Barron and Malik 2014; Valgaerts et al. 2012; Song and Funkhouser 2019) use the environment map to handle inverse rendering problems naturally. Furthermore, given high-quality geometry, prior works (Lombardi and Nishino 2016; Park, Holynski, and Seitz 2020) factorize scene appearance into the diffuse image and the environment map from multi-view images. To be extended beyond a constant term, the environment map is expressed as spherical Gaussians (SGs) formulation and integrated its product with surface material BRDF in the same representation to perform illumination calculations (Green, Kautz, and Durand 2007; Wang et al. 2009; Zhang et al. 2021a). However, it has a significant approximation that light captured by the environment map is emitted infinitely far away. Considering the illumination from nearly all real-world light sources varies by direction as well as distance, global illumination representations (e.g., ray-tracing (Miyazaki and Ikeuchi 2007; Srinivasan et al. 2021) and path-tracing (Azinovic et al. 2019; Zhang et al. 2020)) are proposed to use ray casting to express the interactions between ambient illumination and surface materials (Akenine-Moller, Haines, and Hoffman 2019). It is intrinsically described as a ray from a position to determine what objects are in a particular direction. However, global illumination representation is computationally expensive with the pre-computed process, and difficult to reconstruct to 3D real-world illumination for relighting without hand manner. Our work takes inspiration from this line of work in graphics and presents a new illumination representation. 
We represent each 3D sample of the surrounding environment as a light emitter, such that both the position and the direction of illumination are taken into account.

Figure 2: The overview of the proposed method, which decomposes the materials (C_d, α, ρ) of an object and the NeIF radiance fields from a set of images and a geometry reconstructed by (Yariv et al. 2020). For a specific surface point x_s, NeIF traces its incoming rays within a 3D specular lobe, whose center axis and width are defined by the reflection direction l_r and the roughness ρ. We then featurize samples along that lobe with our integrated lobe encoding (ILE) and feed them into the ambient MLP to predict the color and volume density of each sample. Using volume rendering techniques, we integrate these values, weighted direction-wise by the material's function, into the specular color C_s. We combine this with the diffuse color C_d provided by the material MLP to render photo-realistic novel views. This rendering procedure is end-to-end differentiable, so we can jointly optimize our NeIF representation and the object's materials.

Neural Radiance Fields. Neural rendering (Mildenhall et al. 2020; Yariv et al. 2020; Liu et al. 2020; Yariv et al. 2021; Wang et al. 2021a; Zhuang et al. 2023), the task of learning to recover the properties of 3D scenes from observed images, has seen significant success. In particular, Neural Radiance Fields (NeRF) (Mildenhall et al. 2020) recover the radiance field (volume density and view-dependent color) along a ray using a continuous volumetric function. Numerous works extend NeRF's continuous neural volumetric representation, e.g., for generalizable models (Wang et al. 2021b; Chen et al. 2021; Johari, Lepoittevin, and Fleuret 2022; Huang et al. 2023), non-rigidly deformable objects (Tretschk et al. 2021; Park et al. 2021; Zhuang et al. 2022; Wu et al. 2023), and image processing (Huang et al. 2022; Ma et al. 2022; Chen et al. 2022). Recently, Mip-NeRF (Barron et al. 2021, 2022) uses the integral along a cone instead of a ray to recover an anti-aliased radiance field from a set of multi-scale downsampled images. Besides, Ref-NeRF (Verbin et al. 2022) is proposed for better reflected-radiance interpolation. NeRF and its variants have demonstrated remarkable performance in rendering photo-realistic views, but they only model the outgoing radiance of the surface without considering the underlying interaction between ambient illumination and material. Recent advances in differentiable rendering make it possible to reconstruct environment illumination under casual lighting conditions. Specifically, PhySG (Zhang et al. 2021a) and NeRD (Boss et al. 2021a) use the SG representation to decompose the scene under complex and unknown illumination. Zhang et al. (2022) model the indirect illumination via SGs without considering environment occlusion. Neither environment maps nor SG models take the position of the 3D environment into account, so they are unable to handle environment occlusion and indirect lighting realistically. NeILF (Yao et al.
2022) proposes a local environment map for each surface point to handle environment occlusion, but it ignores the distance of the lighting. Overall, recent methods cannot construct a detailed illumination that accounts for both near-field lighting and environment occlusion. We propose NeIF, which uses volumetric radiance fields to express arbitrary illumination in the environment, such that environment occlusion and directional lighting can be handled naturally.

Method

Given a set of posed images of an object captured under static illumination, our goal is to decompose the shape, material, and lighting, with a primary focus on representing the environment lighting and its interaction with the object surface. Initially, we represent the shape as a zero-level set (Yariv et al. 2020) by learning a Signed Distance Function (SDF) to reconstruct the geometry. Our main contribution, as shown in Fig. 2, comes after the object geometry reconstruction in stage one.

Preliminaries

Prior knowledge of NeRF and physically based rendering (e.g., BRDFs) is recommended for readers to fully comprehend the modeling presented in this paper.

NeRF. NeRF replaces traditional discretely sampled geometry with a continuous volumetric radiance field (i.e., density σ and color c). Given a sampled point x along a single ray r originating at o with direction d, a positional MLP predicts the corresponding density σ(x), and a directional MLP outputs the color c(x, d) of that point along the ray direction. To render a pixel's color, NeRF casts a single ray r(t) = o + td through that pixel and out into its volumetric representation, and accumulates (σ_i, c_i) into a single pixel color C(r) via numerical quadrature (Max 1995),

C(r) = \sum_i \exp\Big(-\sum_{k=0}^{i-1} \sigma_k\Big) \big(1 - \exp(-\sigma_i)\big) c_i.  (1)

The rendering equation. In contrast to NeRF, we express the pixel color of the outgoing radiance from a surface point x_s along a view direction d as the diffuse color C_d of that point plus an interaction term (the specular color C_s) between the incoming radiance L_in of the environment illumination and the scene material, based on the rendering equation (Kajiya 1986),

C(x_s, d) = C_d(x_s) + \int_{\Omega} f(l, -d, x_s)\, L_{in}(x_s, l)\, (l \cdot n)\, dl,  (2)

where n and l denote the normal vector at x_s and the direction of L_in, respectively. Here f is the bidirectional reflectance distribution function (BRDF), which describes the local reflectance behavior. Eq. (2) integrates over all incoming directions l on the hemisphere Ω where l · n > 0.

Neural Illumination Fields

Although previous methods have attempted to model diverse types of lighting in various ways (Zhang et al. 2021a,b; Boss et al. 2021a; Yao et al. 2022), there still remains a discrepancy between virtual and real-world scenes. NeRF has the potential to bridge this gap by enabling accurate modeling of spatially and directionally varying illumination. By expressing the Neural Illumination Fields (NeIF) of an object directly as a continuous volumetric radiance field, which includes the volume density and directionally emitted radiance at any point in the 3D environment, we can model the light more precisely. Given a sample point x in the environment, we approximate this 5D volumetric radiance field with the ambient MLP network R : (x, l) → (c, σ), where l is the direction of the incoming ray passing through x.
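Numerically, both the pixel quadrature of Eq. (1) and the incoming-radiance accumulation formalized in Eq. (3) below reduce to the same weighted sum over samples. The following is a minimal sketch of that accumulation; it follows the simplified form printed in Eq. (1) (sample spacing folded into σ), and all names are illustrative.

```python
import numpy as np

def composite_radiance(sigmas, colors):
    """Accumulate per-sample (density, color) pairs along a ray into one
    radiance value via the quadrature of Eq. (1). A sketch, not the
    authors' implementation. sigmas: (S,), colors: (S, 3)."""
    sigmas = np.asarray(sigmas)
    colors = np.asarray(colors)
    # Transmittance up to each sample: T_i = exp(-sum_{k<i} sigma_k).
    trans = np.exp(-np.concatenate([[0.0], np.cumsum(sigmas[:-1])]))
    alpha = 1.0 - np.exp(-sigmas)      # per-sample opacity
    weights = trans * alpha            # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)
```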
To further integrate the incoming radiance L_in of an incoming ray r(t) = x_s + tl arriving at a surface point x_s from direction l, we accumulate the corresponding densities and directionally emitted colors of (r(t), l) according to volume rendering,

L_{in}(x_s, l) = \int_{t_n}^{t_f} T(t)\, \sigma(r(t))\, c(r(t), l)\, dt, \quad \text{where} \quad T(t) = \exp\Big(-\int_{t_n}^{t} \sigma(r(s))\, ds\Big),  (3)

where T indicates the accumulated transmittance. In general, the near and far bounds t_n and t_f are ideally set to be infinitely close to zero and infinitely distant, respectively. Eq. (3) indicates that we treat each sample in the 3D environment as a light emitter. This allows the proposed NeIF to model the directionally emitted rays and environment occlusions of any static 3D environment. To define how light derived from a NeRF-style environment is reflected at an opaque surface, we parameterize the spatially-varying material of a surface point x_s by the roughness ρ ∈ [0, 1], the diffuse color C_d ∈ [0, 1]^3, and the specular tint α ∈ [0, 1], which are output by a material MLP network (using a softplus activation), i.e., M : x_s → (ρ, C_d, α). We assume that the BRDF f is rotationally symmetric with respect to the reflection direction l_r around the specular lobe, as in Phong (Akenine-Moller, Haines, and Hoffman 2019), where l_r is computed as 2(−d · n)n + d. Consequently, we approximate the BRDF with a von Mises-Fisher (vMF) distribution, which is defined on the unit lobe as a normalized spherical Gaussian (Akenine-Moller, Haines, and Hoffman 2019),

f(l, -d, x_s) \approx G(l, l_r, x_s) = \alpha \exp\big(\rho(x_s)(l \cdot l_r - 1)\big),  (4)

where l and l_r are unit vectors, and the value is positively correlated with l · l_r. Specifically, l_r is the center axis of the lobe, and the spatially-varying roughness ρ(x_s) controls its angular width (also called the concentration parameter or spread). Note that α can be considered the lobe amplitude, which is learned by the material MLP. By substituting Eqs. (4) and (3) into Eq. (2), we obtain the specular term of Eq. (2) as

C_s(x_s, d) = \iint \alpha\, e^{\rho(l \cdot l_r - 1)}\, T(x)\, \sigma(x)\, c(x, l)\, (l \cdot n)\, dx\, dl.  (5)

According to Eq. (4), a larger ρ value corresponds to a rougher surface with a wider vMF distribution. Therefore, Eq. (5) is equivalent to integrating the radiance field of each sampled point in the specular lobe defined by the reflection direction. By doing so, we can represent the reflection more effectively and stereographically.

Integrated Lobe Encoding

To better learn the high-frequency variation of NeIF related to roughness, we introduce a featurized representation, which we call an Integrated Lobe Encoding (ILE), that efficiently constructs the positional encoding of all coordinates lying within the specular lobe around l_r. Our ILE is inspired by the IPE used in Mip-NeRF (Barron et al. 2021), which enables the spatial MLP to represent the volume density inside a cone along the view direction. We featurize all coordinates inside a roughness-adaptive lobe, considering both the lobe width decided by the roughness and the vMF distribution correlated with the incoming direction. We divide the specular lobe of Eq. (5) into a series of conical frustums and approximate this featurization with a set of sinusoids via a multivariate Gaussian. Specifically, we compute the mean and covariance (µ, Σ) of each conical frustum, which are obtained from x_s and l_r. Note that the radius-variance part of Σ(ρ) is determined by the material roughness of the surface point, which differs from IPE (Barron et al. 2021). Given that the integral value of Eq.
(5) follows a vMF distribution, its contribution attenuates with the weight l · l_r. Our ILE then formulates the encoding of the coordinates (µ, Σ) within each conical frustum as

ILE(\mu, \Sigma(\rho)) = \Big\{ \sin(2^\ell \mu)\, \exp\big(-2^{2\ell-1}\, \mathrm{diag}(\Sigma(\rho))\big),\ \cos(2^\ell \mu)\, \exp\big(-2^{2\ell-1}\, \mathrm{diag}(\Sigma(\rho))\big) \Big\}_{\ell=0}^{L-1}.  (6)

These features encoded by the ILE are used as input to the MLP network R to output the density and color of our NeIF. This encoding allows the MLP to parameterize the incoming illumination inside the roughness-varying specular lobe, whose strength varies with the incoming direction under the vMF distribution, and to behave as an interpolation function. As a result, our ILE efficiently maps continuous input coordinates into a high-frequency space. Please refer to our supplement for the detailed derivation. According to Eq. (2), the pixel color C captured by a camera is the sum of the diffuse color C_d of the surface point and the volumetric integration C_s of the incoming rays within the specular lobe,

C(x_s, d) = \gamma\big(C_d(x_s) + C_s(x_s, d)\big),  (7)

where γ is a learned HDR-to-LDR mapping. We approximate it as gamma correction with a learned parameter, with the other transformations (e.g., exposure and white balance) absorbed into the radiance of the incoming rays.

Figure 3: The pipeline demonstrates how pre-convolved images (Gaussian-blurred backgrounds with σ = 4, 16, 64, 128 and a radius of 3σ) can be used to construct a pre-convolved illumination representation.

Multiscale Pre-convolved Representation

Due to the complexity of the high-dimensional NeIF representation, the diffuse color is easily absorbed into the environment illumination, causing an ambiguous decomposition. Consequently, we propose a multiscale pre-convolved technique that introduces the background of the object to stabilize convergence. As shown in Eq. (5), the integral of the incoming rays within a specular lobe can be regarded as a convolution, and the width of the lobe is related to the roughness. As the roughness increases, the environment illumination is convolved with more scattered samples within a wider lobe, creating blurrier reflections. By applying a set of discrete Gaussian blur kernels to the background of the object, we obtain pre-convolved results of the incoming rays that correspond to different levels of roughness. In the training phase, as shown in Fig. 3, the pixels of the background are evaluated through Eq. (5) and used to supervise the decomposition. We manually set the radius r_p = 3σ r_0, α = 1, and C_d = [0]^3, where σ is the width of the Gaussian kernel and r_0 is the radius of a raw pixel. This strategy resembles the pre-filtered environment maps in CG rendering; however, we apply it to the neural rendering framework and take the view direction into account, which allows for more accurate and realistic specular reflections consistent with the object's roughness level.

Loss

To alleviate the ambiguity between material and illumination, we constrain the roughness ρ(x_s) and specular tint α(x_s) of the surface point x_s to be relatively smooth. Guided by the image gradient of pixel p, we define the bilateral smoothness regularization (Yao et al. 2022) as

l_s = \frac{1}{|S_I|} \sum_{p \in S_I} \big( \|\nabla_{x_s} \alpha(x_s)\| + \|\nabla_{x_s} \rho(x_s)\| \big)\, e^{-\|\nabla_p I(p)\|},  (8)

which forces the material gradient of the surface point x_s to follow the gradient of its projected image pixel p. The image gradient ‖∇_p I(p)‖ is pre-computed, and S_I is the set of pixels on the object.
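A minimal sketch of this regularizer follows, assuming the per-point material gradients and the per-pixel image gradients are already available as tensors; all names are illustrative, not the authors' implementation.

```python
import torch

def bilateral_smoothness(grad_alpha, grad_rho, grad_image):
    """Eq. (8): penalize material gradients, down-weighted where the image
    itself has strong edges. A sketch under stated assumptions.
    grad_alpha, grad_rho: (N, 3) spatial gradients of tint/roughness at x_s.
    grad_image: (N, 2) image gradients at the projected pixels p."""
    material = grad_alpha.norm(dim=-1) + grad_rho.norm(dim=-1)   # (N,)
    edge_weight = torch.exp(-grad_image.norm(dim=-1))            # (N,)
    return (material * edge_weight).mean()

# The total objective combines this with the reconstruction and
# pre-convolved terms, l = l_rec + 1e-4 * l_s + 1e-1 * l_pre,
# using the weights stated below.
```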
Method     Glossy Blender Dataset         Real-world
           PSNR↑   SSIM↑   LPIPS↓        PSNR↑   SSIM↑   LPIPS↓
NeILF      25.851  0.930   0.088         21.664  0.797   0.203
NeILF*     27.279  0.941   0.084         21.226  0.760   0.242
Ne-Env     25.971  0.935   0.086         20.786  0.761   0.251
w/o ILE    37.362  0.979   0.036         24.472  0.801   0.191
w/o BG     36.404  0.979   0.033         24.248  0.801   0.193
w/o Conv.  37.748  0.981   0.032         24.854  0.808   0.185
w/o Reg.   38.442  0.983   0.029         24.973  0.816   0.176
Ours       37.985  0.981   0.031         25.022  0.814   0.179

Table 1: Our method and its ablations outperform the baseline NeILF and its variants on the Glossy Blender dataset, showing superior rendering quality in averaged metrics.

We compute the L2-norm reconstruction loss between the predicted color \hat{C} and the ground-truth color C to jointly optimize the environment illumination and the object's materials:

l_{rec} = \sum_{p \in S_I} \big\| \hat{C}(x_s, d) - C(p) \big\|_2^2.  (9)

The regularization l_pre of the pre-convolved representation is computed in the same manner as Eq. (9), where the pixel p is taken from the pre-convolved background S_B rather than from the object S_I. Similar to the hierarchical sampling procedure in NeRF, the proposed method also uses "coarse" and "fine" networks for better results and sampling efficiency. Overall, the entire loss of our method is l = l_rec + λ_s l_s + λ_p l_pre, where the weights λ_s and λ_p are empirically set to 10^{-4} and 10^{-1} in all our experiments.

Experiment

Implementation Details. Our method is implemented on top of Mip-NeRF (Barron et al. 2021) with PyTorch, and we discretize Eq. (5) as in Mip-NeRF. The number of samples for both the "coarse" and "fine" phases is 128. We use the same architecture as Mip-NeRF (8 layers, 256 hidden units, ReLU activations) to train our ambient MLP network, but we apply the ILE module to featurize the input coordinates of the incoming rays. We also use an 8-layer MLP with a feature size of 512 and a skip connection in the middle to represent the material MLP network. Please refer to our supplement for more details about the network settings and training schemes. The performance of the decomposition and the illumination quality is measured by evaluating the results of novel view synthesis. We report image quality with three metrics: PSNR, SSIM, and LPIPS (Zhang et al. 2018), on both synthetic and real-world datasets.

Baselines. We compare our method with the following methods: 1) NeILF (Yao et al. 2022), modeled with the Disney BRDF and an incident light field at each surface point; 2) NeILF*, extended from NeILF by replacing the Disney BRDF with ours; and 3) Ne-Env, extended from NeILF* by using a neural environment map instead of the incident light field. These methods, with their different illumination models, serve to demonstrate the performance of our NeIF. Their implementation details can be found in our supplement.
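Read literally, the stated settings correspond to MLPs like the following. The input dimensions are placeholders, and the whole block is our illustrative reconstruction, not the released code.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim, hidden, depth, out_dim):
    """Plain ReLU MLP; a sketch of the stated architectures."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

# Ambient MLP: 8 layers, 256 hidden units; the input is the ILE feature of
# a lobe sample (96 is an illustrative dimension), the output is (RGB, sigma).
ambient_mlp = make_mlp(in_dim=96, hidden=256, depth=7, out_dim=4)

class MaterialMLP(nn.Module):
    """8-layer MLP, feature size 512, skip connection in the middle,
    predicting (roughness, diffuse RGB, specular tint) with softplus,
    as stated above (clamping to [0, 1] omitted for brevity)."""
    def __init__(self, in_dim=63):  # encoded surface point; 63 is illustrative
        super().__init__()
        self.first = make_mlp(in_dim, 512, 3, 512)
        self.second = make_mlp(512 + in_dim, 512, 3, 5)  # skip: re-concat input
        self.act = nn.Softplus()

    def forward(self, x):
        h = self.first(x)
        out = self.act(self.second(torch.cat([h, x], dim=-1)))
        return out[..., 0], out[..., 1:4], out[..., 4]  # rho, C_d, alpha
```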
Figure 4: Qualitative comparison of our method against the baseline NeILF (Yao et al. 2022) and its variants on the self-rendered Glossy Blender dataset (per-method PSNR↑: 22.57dB, 23.32dB, 22.66dB, 32.05dB, and 33.27dB). Zoomed-in details (yellow and green boxes) and the reconstructed environment maps around the surface points at the box centers are shown. Our method significantly outperforms the other methods, both in the rendering quality of novel views and in the reconstruction of the environment illumination. Note that, although ours without background (w/o BG) reconstructs geometrically correct results, it fails on the decomposition of diffuse color and environment illumination due to an incorrect white balance and the lack of background supervision.

Glossy Blender Dataset. Although previous NeRF variants have proposed various datasets containing diverse materials, the objects are all synthesized under an environment map, which ideally ignores the ambient distance. It is unrealistic and unnatural for an image to be rendered without a background, neglecting the 3D environment. Therefore, we propose a new dataset closer to natural conditions. It consists of 7 synthetic scenes rendered in Blender (Foundation 1994), each containing one glossy object placed in a 3D simulated environment with natural illumination. In our setting, 390 views are rendered around the upper hemisphere of the object, with 180 for training, 10 for validation, and 200 for testing.

Figure 5: Visualizations of decomposed materials (diffuse color, roughness, and novel view synthesis) for the ablation study. "w/o Reg." generates noisy materials. "w/o ILE" cannot handle variations in material roughness, resulting in artifacts on diffuse items. "w/o BG" yields incorrect roughness due to the lighting ambiguity.

Tab. 1 and Fig. 4 show quantitative and qualitative comparisons of our method against the baseline methods, respectively. The metrics in Tab. 1 are averaged over all scenes, while the full experiments are available in the supplement. Considering that the inputs of NeILF (Yao et al. 2022) are masked images of the objects without background, we test our method under the same setting for fairness, referred to as "w/o BG" in the experiments. Tab. 1 and Fig. 4 show the significantly superior performance of our method compared to the baselines in terms of rendering quality on novel views, even without the guidance of the background ("w/o BG"). Specifically, NeILF and NeILF* struggle to handle nearby illumination, which varies dramatically with position, and their inability to gather information across different views exacerbates the ambiguity. Although NeILF* simplifies the BRDF function as ours does, its performance improves only slightly over NeILF, as shown in Tab. 1. For Ne-Env, sharing the same environment map across different positions reduces uncertainty and constructs a more meaningful map. However, the decomposition of materials and illumination is poor due to the simplification of the illumination model, which assumes all radiance comes from an infinite distance. Compared to NeILF and its variants, our method renders photo-realistic views and recovers precise ambient illumination. Even without background guidance, our method reconstructs geometrically correct results; however, the white balance is incorrect, as the majority color of the object is red, causing ambiguity between a red object and red incident light. This issue highlights the importance of using background guidance to disentangle material and light.

Figure 6: Qualitative comparison with zoomed-in details of our method against the baselines on the real-world dataset. Our method produces the most visually pleasing novel views, especially in reconstructing highlight regions and detailed textures.
Ablation Studies. Except for "w/o BG", three additional ablations are considered: "w/o ILE" ignores the ILE, "w/o Conv." omits the multiscale pre-convolved representation, and "w/o Reg." excludes the bilateral smoothness regularization (Eq. (8)). The results are reported in Tab. 1 and Fig. 5. Specifically, while "w/o Reg." achieves the best metric performance, it generates noise and becomes inconsistent across the same material, as shown in Fig. 5, which makes it unsuitable for material editing applications. Besides, in "w/o ILE", the model is trained without the roughness-related encoding, leading to difficulties in handling spatially-varying roughness and resulting in artifacts on diffuse items. The "w/o BG" results show that without background assistance, it becomes challenging to distinguish between light and material, causing ambiguity.

Real-world. We then test our method on 8 real-world scenes from BlendedMVS (Yao et al. 2020) and Bag of Chips (Park, Holynski, and Seitz 2020), which provide images and depth maps reconstructed by MVS methods or an RGBD camera. We selected the last ten images as the test set for BlendedMVS, while for Bag of Chips we left the last 1/5 as the test set. The qualitative and quantitative comparisons are shown in Tab. 1 and Fig. 6, respectively. All the results validate the performance of our method on illumination reconstruction and material decomposition, demonstrating its robustness to geometric perturbation. Compared to the synthetic setting, the performance gap over the baselines narrows for two reasons: 1) images with varying exposures and rough geometry in real-world datasets make the decomposition difficult; 2) the original images do not contain enough background to help the ambient illumination reconstruction.

Applications

Illumination manipulation. Fig. 7 illustrates several types of manipulation (rotation, translation, exchange) of the ambient illumination around the object. Our method produces natural photo-realistic views, especially for environment occlusions (e.g., the reflected chair) and highlights (e.g., the chips), as shown in the translation and rotation cases, respectively. More importantly, with a slight scaling of the decomposed materials, we place our plug-and-play NeIF model in a pre-trained Mip-NeRF environment (Barron et al. 2022), as shown in the last two columns of Fig. 7. Despite the complex and detailed illumination, visually harmonious novel views are re-rendered with realistic reflections in the "Garden" dataset. This verifies that our NeIF is an easy-to-use plugin for NeRF that gives objects a better sense of belonging in new 3D NeRF-style environments.

Figure 7: Illumination manipulation visualizations (unedited scene, translation/rotation, environment exchange, placement in "Garden") show our method recovering environment lighting for high-fidelity specularities and natural illumination under various manipulations. It handles occlusion and strong lighting effectively, and when placed in virtual or pre-trained Mip-NeRF environments, it produces novel views with realistic reflections.

Figure 8: Visualizations of roughness editing. The first row displays the environment maps for the first three balls, while the second row demonstrates increasing blurriness with higher roughness. Even when the roughness exceeds the largest Gaussian kernel used in the pre-convolved representation, our method still produces visually realistic outcomes.
Please refer to our supplement video for better visualization. Object’s material editing. As our model disentangles the object’s material well, our components behave intuitively and enable visually reasonable material editing results. Fig. 8 shows convincing results of roughness editing (second row) and their corresponding environment maps (first row) on the ’Metal Ball’ dataset. As the roughness increases, novel views gradually become blurred, even when the roughness becomes extremely large that exceeds the maximum scale of pre-convolved representation. Conclusion This paper presents a novel neural approach to efficiently and stereographically modeling 3D ambient illumination. Previous methods focus on simplified lighting models (e.g. environment map and spherical Gaussian) to represent nodistant illumination. Instead, we propose NeIF to model illumination as volumetric radiance fields such that each sample of the surrounding 3D environments is equivalent to a light emitter. We show that, together with our integral lobe encoding and pre-convolved representation, our method can accurately recover ambient illumination and naturally rerender high-quality views for a decomposed object under new NeRF-style environments. We believe that with this high-fidelity and fully differentiable lighting representation, it can be easily extended to downstream tasks and bring us closer to bridging the gap between virtual and real scenes. Limitations. It is difficult to model ambient illumination and decompose the object’s material, our pipeline relies on the geometry reconstructed through stage one. Acknowledgments Supported by China’s National Key Research and Development Program (Grant 2022YFF0902201), National Natural Science Foundation (Grants 62001213, 62025108), and Tencent Rhino-Bird Research Program. Thanks to anonymous reviewers for valuable feedback. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7834 References Akenine-Moller, T.; Haines, E.; and Hoffman, N. 2019. Real-time rendering. AK Peters/crc Press. Azinovic, D.; Li, T.-M.; Kaplanyan, A.; and Nießner, M. 2019. Inverse path tracing for joint material and lighting estimation. In CVPR, 2447–2456. Barron, J. T.; and Malik, J. 2014. Shape, illumination, and reflectance from shading. IEEE transactions on pattern analysis and machine intelligence, 37(8): 1670–1687. Barron, J. T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; and Srinivasan, P. P. 2021. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In ICCV, 5855–5864. Barron, J. T.; Mildenhall, B.; Verbin, D.; Srinivasan, P. P.; and Hedman, P. 2022. Mip-nerf 360: Unbounded antialiased neural radiance fields. In CVPR, 5470–5479. Bi, S.; Xu, Z.; Sunkavalli, K.; Haˇsan, M.; Hold-Geoffroy, Y.; Kriegman, D.; and Ramamoorthi, R. 2020a. Deep reflectance volumes: Relightable reconstructions from multiview photometric images. In ECCV, 294–311. Springer. Bi, S.; Xu, Z.; Sunkavalli, K.; Kriegman, D.; and Ramamoorthi, R. 2020b. Deep 3d capture: Geometry and reflectance from sparse multi-view images. In CVPR, 5960– 5969. Boss, M.; Braun, R.; Jampani, V.; Barron, J. T.; Liu, C.; and Lensch, H. 2021a. NeRD: Neural reflectance decomposition from image collections. In ICCV, 12684–12694. Boss, M.; Jampani, V.; Braun, R.; Liu, C.; Barron, J.; and Lensch, H. 2021b. Neural-pil: Neural pre-integrated lighting for reflectance decomposition. NeuIPS, 34: 10691–10704. Chen, A.; Xu, Z.; Zhao, F.; Zhang, X.; Xiang, F.; Yu, J.; and Su, H. 2021. 
MVSNeRF: Fast generalizable radiance field reconstruction from multi-view stereo. In ICCV, 14124–14133.
Chen, X.; Zhang, Q.; Li, X.; Chen, Y.; Feng, Y.; Wang, X.; and Wang, J. 2022. Hallucinated neural radiance fields in the wild. In CVPR, 12943–12952.
Debevec, P. 1998. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In SIGGRAPH, 189–198.
Foundation, B. 1994. Blender. https://www.blender.org/. Accessed: 2022-07-15.
Green, P.; Kautz, J.; and Durand, F. 2007. Efficient reflectance and visibility approximations for environment map rendering. In Computer Graphics Forum, volume 26, 495–502. Wiley Online Library.
Haber, T.; Fuchs, C.; Bekaert, P.; Seidel, H.-P.; Goesele, M.; and Lensch, H. P. 2009. Relighting objects from image collections. In CVPR, 627–634. IEEE.
Huang, X.; Zhang, Q.; Feng, Y.; Li, H.; Wang, X.; and Wang, Q. 2022. HDR-NeRF: High Dynamic Range Neural Radiance Fields. In CVPR, 18398–18408.
Huang, X.; Zhang, Q.; Feng, Y.; Li, X.; Wang, X.; and Wang, Q. 2023. Local Implicit Ray Function for Generalizable Radiance Field Representation. In CVPR.
Johari, M. M.; Lepoittevin, Y.; and Fleuret, F. 2022. GeoNeRF: Generalizing NeRF with geometry priors. In CVPR, 18365–18375.
Kajiya, J. T. 1986. The rendering equation. In SIGGRAPH, 143–150.
Li, Z.; Shafiei, M.; Ramamoorthi, R.; Sunkavalli, K.; and Chandraker, M. 2020. Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and SVBRDF from a single image. In CVPR, 2475–2484.
Liu, L.; Gu, J.; Zaw Lin, K.; Chua, T.-S.; and Theobalt, C. 2020. Neural sparse voxel fields. NeurIPS, 33: 15651–15663.
Lombardi, S.; and Nishino, K. 2016. Radiometric scene decomposition: Scene reflectance, illumination, and geometry from RGB-D images. In 2016 Fourth International Conference on 3D Vision (3DV), 305–313. IEEE.
Ma, L.; Li, X.; Liao, J.; Zhang, Q.; Wang, X.; Wang, J.; and Sander, P. V. 2022. Deblur-NeRF: Neural Radiance Fields from Blurry Images. In CVPR, 12861–12870.
Marschner, S. R. 1998. Inverse rendering for computer graphics. Cornell University.
Max, N. 1995. Optical models for direct volume rendering. TVCG, 1(2): 99–108.
Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. NeRF: Representing scenes as neural radiance fields for view synthesis. In ECCV, 405–421. Springer.
Miyazaki, D.; and Ikeuchi, K. 2007. Shape estimation of transparent objects by using inverse polarization ray tracing. PAMI, 29(11): 2018–2030.
Nam, G.; Lee, J. H.; Gutierrez, D.; and Kim, M. H. 2018. Practical SVBRDF acquisition of 3D objects with unstructured flash photography. TOG, 37(6): 1–12.
Park, J. J.; Holynski, A.; and Seitz, S. M. 2020. Seeing the world in a bag of chips. In CVPR, 1417–1427.
Park, K.; Sinha, U.; Barron, J. T.; Bouaziz, S.; Goldman, D. B.; Seitz, S. M.; and Martin-Brualla, R. 2021. Nerfies: Deformable neural radiance fields. In ICCV, 5865–5874.
Ramamoorthi, R.; and Hanrahan, P. 2001. A signal-processing framework for inverse rendering. In SIGGRAPH, 117–128.
Sato, Y.; Wheeler, M. D.; and Ikeuchi, K. 1997. Object shape and reflectance modeling from observation. In SIGGRAPH, 379–387.
Song, S.; and Funkhouser, T. 2019. Neural illumination: Lighting prediction for indoor environments. In CVPR, 6918–6926.
Srinivasan, P. P.; Deng, B.; Zhang, X.; Tancik, M.; Mildenhall, B.; and Barron, J. T. 2021. NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In CVPR, 7495–7504.
Tretschk, E.; Tewari, A.; Golyanik, V.; Zollhöfer, M.; Lassner, C.; and Theobalt, C. 2021. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In ICCV, 12959–12970.
Valgaerts, L.; Wu, C.; Bruhn, A.; Seidel, H.-P.; and Theobalt, C. 2012. Lightweight binocular facial performance capture under uncontrolled lighting. TOG, 31(6): 187.
Verbin, D.; Hedman, P.; Mildenhall, B.; Zickler, T.; Barron, J. T.; and Srinivasan, P. P. 2022. Ref-NeRF: Structured view-dependent appearance for neural radiance fields. In CVPR, 5481–5490. IEEE.
Wang, J.; Ren, P.; Gong, M.; Snyder, J.; and Guo, B. 2009. All-frequency rendering of dynamic, spatially-varying reflectance. In SIGGRAPH Asia, 1–10.
Wang, P.; Liu, L.; Liu, Y.; Theobalt, C.; Komura, T.; and Wang, W. 2021a. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. NeurIPS, 34: 27171–27183.
Wang, Q.; Wang, Z.; Genova, K.; Srinivasan, P. P.; Zhou, H.; Barron, J. T.; Martin-Brualla, R.; Snavely, N.; and Funkhouser, T. 2021b. IBRNet: Learning multi-view image-based rendering. In CVPR, 4690–4699.
Wen, Z.; Liu, Z.; and Huang, T. S. 2003. Face relighting with radiance environment maps. In CVPR, volume 2, II–158. IEEE.
Wu, M.; Zhu, H.; Huang, L.; Zhuang, Y.; Lu, Y.; and Cao, X. 2023. High-Fidelity 3D Face Generation From Natural Language Descriptions. In CVPR, 4521–4530.
Xu, Z.; Sunkavalli, K.; Hadap, S.; and Ramamoorthi, R. 2018. Deep image-based relighting from optimal sparse samples. TOG, 37(4): 1–13.
Yang, W.; Chen, G.; Chen, C.; Chen, Z.; and Wong, K.-Y. K. 2022. S3-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint. In NeurIPS.
Yao, Y.; Luo, Z.; Li, S.; Zhang, J.; Ren, Y.; Zhou, L.; Fang, T.; and Quan, L. 2020. BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In CVPR, 1790–1799.
Yao, Y.; Zhang, J.; Liu, J.; Qu, Y.; Fang, T.; McKinnon, D.; Tsin, Y.; and Quan, L. 2022. NeILF: Neural Incident Light Field for Physically-based Material Estimation. In ECCV, 700–716. Springer.
Yariv, L.; Gu, J.; Kasten, Y.; and Lipman, Y. 2021. Volume rendering of neural implicit surfaces. NeurIPS, 34: 4805–4815.
Yariv, L.; Kasten, Y.; Moran, D.; Galun, M.; Atzmon, M.; Ronen, B.; and Lipman, Y. 2020. Multiview neural surface reconstruction by disentangling geometry and appearance. NeurIPS, 33: 2492–2502.
Yu, Y.; Debevec, P.; Malik, J.; and Hawkins, T. 1999. Inverse global illumination: Recovering reflectance models of real scenes from photographs. In SIGGRAPH, 215–224.
Zhang, C.; Miller, B.; Yan, K.; Gkioulekas, I.; and Zhao, S. 2020. Path-space differentiable rendering. TOG, 39(4).
Zhang, K.; Luan, F.; Wang, Q.; Bala, K.; and Snavely, N. 2021a. PhySG: Inverse rendering with spherical Gaussians for physics-based material editing and relighting. In CVPR, 5453–5462.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 586–595.
Zhang, X.; Srinivasan, P. P.; Deng, B.; Debevec, P.; Freeman, W. T.; and Barron, J. T. 2021b. NeRFactor: Neural factorization of shape and reflectance under an unknown illumination. TOG, 40(6): 1–18.
Zhang, Y.; Sun, J.; He, X.; Fu, H.; Jia, R.; and Zhou, X. 2022. Modeling Indirect Illumination for Inverse Rendering. In CVPR, 18643–18652.
Zhuang, Y.; Zhang, Q.; Feng, Y.; Zhu, H.; Yao, Y.; Li, X.; Cao, Y.-P.; Shan, Y.; and Cao, X. 2023.
Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail. In SIGGRAPH Asia, 1–10.
Zhuang, Y.; Zhu, H.; Sun, X.; and Cao, X. 2022. MoFaNeRF: Morphable facial neural radiance field. In ECCV.
IPRemover: A Generative Model Inversion Attack against Deep Neural Network Fingerprinting and Watermarking
Wei Zong1, Yang-Wai Chow1, Willy Susilo1, Joonsang Baek1, Jongkil Kim2, Seyit Camtepe3
1Institute of Cybersecurity and Cryptology (iC2), University of Wollongong, Australia
2Ewha Womans University, South Korea
3CSIRO Data61, Australia
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract
Training Deep Neural Networks (DNNs) can be expensive when data is difficult to obtain or labeling it requires significant domain expertise. Hence, it is crucial that the Intellectual Property (IP) of DNNs trained on valuable data be protected against IP infringement. DNN fingerprinting and watermarking are two lines of work in DNN IP protection. Recently proposed DNN fingerprinting techniques are able to detect IP infringement while preserving model performance by relying on the key assumption that the decision boundaries of independently trained models are intrinsically different from one another. In contrast, DNN watermarking embeds a watermark in a model and verifies IP infringement if an identical or similar watermark is extracted from a suspect model. The techniques deployed in fingerprinting and watermarking vary significantly because their underlying mechanisms are different. From an adversary's perspective, a successful IP removal attack should defeat both fingerprinting and watermarking. However, to the best of our knowledge, there is no work on such attacks in the literature yet. In this paper, we fill this gap by presenting an IP removal attack that can defeat both fingerprinting and watermarking. We consider the challenging data-free scenario whereby all data is inverted from the victim model. Under this setting, a stolen model depends only on the victim model. Experimental results demonstrate the success of our attack in defeating state-of-the-art DNN fingerprinting and watermarking techniques. This work reveals a novel attack surface that exploits generative model inversion attacks to bypass DNN IP defenses. This threat must be addressed by future defenses for reliable IP protection.

Introduction
Deep Neural Networks (DNNs) have achieved great success in many domains, such as computer vision (Carion et al. 2020; Krizhevsky, Sutskever, and Hinton 2017), Automatic Speech Recognition (ASR) (Hannun et al. 2014; Amodei et al. 2016), Natural Language Processing (NLP) (Chowdhary 2020), and so on. Successful training of DNNs requires large volumes of labeled data in the specific domain. Acquiring such data can be costly when it is difficult to collect or significant domain expertise is required for labeling. Therefore, it is crucial for model owners to be able to protect the Intellectual Property (IP) of DNN models trained using valuable data. Researchers have recently proposed DNN IP protection techniques that identify the unique characteristics, i.e., the fingerprints, of a model (Lukas, Zhang, and Kerschbaum 2021; Peng et al. 2022; Chen et al. 2022). This direction of work is attracting more and more attention in the research community because fingerprinting techniques do not alter a model's weights, so the performance of the model is preserved.
The key to fingerprinting is to find unique properties that only exist in a victim model: if these properties appear in another model, that model is determined to be a stolen model. As an example, Adversarial Examples (AEs) have been used for this purpose. AEs generated by a victim model can be used to uniquely represent the model's decision boundaries (Chen et al. 2022). A stolen model will also be fooled by the same AEs because its decision boundaries will resemble the decision boundaries of the victim model. However, the robustness of existing fingerprinting techniques has only been evaluated using general attacks, such as fine-tuning all layers of a DNN, adversarial training (Shafahi et al. 2019), and model extraction attacks (Orekondy, Schiele, and Fritz 2019). Although fingerprinting techniques were shown to be robust against these attacks, such attacks were not specifically designed to defeat fingerprint detection. In other words, the robustness of state-of-the-art fingerprinting techniques has not been thoroughly verified.

In addition to DNN fingerprinting, researchers have also explored the embedding of watermarks in DNNs for IP protection (Jia et al. 2021). Ownership can be claimed if a similar watermark can be extracted from a suspect model. For instance, model owners can use a backdoor technique to make their model output a predefined label whenever a special trigger is stamped on its input (Adi et al. 2018). Although researchers have systematically studied the vulnerabilities of existing watermarking schemes (Lukas et al. 2022), most watermark removal attacks, such as transfer learning and adversarial training, have been shown to be ineffective against fingerprinting (Chen et al. 2022). In practice, an effective attack must bypass both DNN fingerprinting and watermarking without prior knowledge of these defenses. This is challenging because DNN watermarking and fingerprinting techniques are based on different mechanisms and therefore vary significantly. Such an attack would pose a real-world threat to model owners because the IP of their models could not be protected reliably.

In this paper, we propose an IP removal attack, called IPRemover, that can evade detection by both state-of-the-art DNN fingerprinting and watermarking techniques. We focus on the challenging data-free scenario where an adversary has no access to any existing data. We assume that a victim model can be accessed in a white-box manner, which is straightforward when an adversary has a local copy of the victim model. A vital component of our method is a model inversion attack that inverts training data from a victim model. Nonetheless, our goal is to remove IP protection while preserving satisfactory model performance. This goal is different from typical model inversion attacks that aim to compromise privacy by reconstructing representative training data (Nguyen et al. 2023). Moreover, state-of-the-art model inversion attacks assume access to a large volume of data with structural similarity to the original training data (Zhang et al. 2020; Wang et al. 2021; Chen et al. 2021). In contrast, in the context of DNN IP protection removal, an adversary has limited access to useful data. Otherwise, there is no motivation for the adversary to steal the model, because the adversary could train a satisfactory model independently via supervised or semi-supervised learning (Zheng et al. 2022).
Our work is related to recent emerging research on data-free Knowledge Distillation (KD) (Yin et al. 2020; Fang et al. 2021; Yu et al. 2023). Data-free KD generates training data from a teacher model, then applies KD to train a student model. The generated data may differ significantly from the original training data because it only needs to be effective for KD. While DNN fingerprinting techniques can detect attacks that use KD, to date there is no method that can detect IP infringement from generated data. Hence, if a stolen model can be trained from scratch on generated data without KD, model owners cannot claim IP infringement on the stolen model because it was trained independently on "legal" data. In comparison with the state-of-the-art in data-free KD, Contrastive Model Inversion (CMI) (Fang et al. 2021), the data generated using our method results in over 10% higher accuracy when models are trained without KD.

Our contributions are summarized as follows:
• To the best of our knowledge, we are the first in the literature to propose a data-free attack, called IPRemover, that can evade detection by both DNN fingerprinting and watermarking techniques.
• We empirically demonstrate that IPRemover can universally defeat a diverse range of recent state-of-the-art DNN fingerprinting and watermarking techniques, namely, MetaFinger (Yang, Wang, and Wang 2022), IPGuard (Cao, Jia, and Gong 2021), DeepJudge (Chen et al. 2022), Jia (Jia et al. 2021), CosWM (Charette et al. 2022), and Adversarial Frontier Stitching (FS) (Le Merrer, Perez, and Trédan 2020).
• Our work reveals a novel attack surface based on generative model inversion attacks. Future defenses must address this line of attack for reliable DNN IP protection.

Related Work
DNN Fingerprinting
Most existing DNN fingerprinting techniques are based on AEs (Cao, Jia, and Gong 2021; Chen et al. 2022; Peng et al. 2022). The underlying assumption of AE-based fingerprinting is that the decision boundaries of independently trained models differ significantly from one another, such that AEs can be used to uniquely identify their decision boundaries. If the decision boundaries of a suspect model resemble the decision boundaries of a victim model, this indicates that the suspect model is a copy of the victim model. Other than AEs, researchers recently discovered that meta-learning can be exploited for DNN fingerprinting (Yang, Wang, and Wang 2022). Instead of generating adversarial perturbations to fool DNNs, meta-learning aims to produce noisy input that can only be correctly classified by the model in question. The robustness of state-of-the-art fingerprinting techniques has not been thoroughly verified because they were only evaluated against attacks that were not specifically designed to defeat fingerprint detection. A recent study by Wang et al. (2023) proposed applying preprocessing to mitigate DNN fingerprinting. However, their attack can be easily identified and made ineffective by removing the preprocessing module.

DNN Watermarking
A watermark is a unique signature that can be extracted from a suspect model for verification. A model owner can claim ownership if the watermark extracted from a suspect model resembles the watermark in the owner's model. Jia et al. (Jia et al. 2021) recently proposed entangling the representations of watermarks and benign input to defend against extraction attacks. The assumption is that representations in a stolen model will also contain watermark information, which can be used for IP infringement detection.
Charette et al. (Charette et al. 2022) proposed a watermarking technique, called CosWM, that is robust to model extraction attacks. Their defense was specifically designed to detect whether an adversary applied KD (Hinton, Vinyals, and Dean 2015) to steal knowledge from a victim model. CosWM empirically showed that an embedded cosine signal can be extracted from the output of a stolen model. Lukas et al. (Lukas et al. 2022) conducted a systematic study on the robustness of existing watermarking techniques using watermark removal attacks. However, most of the studied watermark removal attacks have been shown to be ineffective against fingerprinting (Chen et al. 2022). A practical attack must bypass both DNN fingerprinting and watermarking without prior knowledge of the defense.

The Proposed Method
Threat Model
In this study, we develop a data-free IP removal attack that is able to defeat state-of-the-art DNN IP protection mechanisms. We consider the following threat model:
• The adversary's goal is to obtain a stolen model, with decent performance, from a victim model. The adversary has white-box access to the victim model, which is straightforward if the adversary has a local copy of the model.
• The adversary does not require additional data because training data can be inverted from the victim model. This makes the stolen model depend solely on the victim model.
• The adversary has no knowledge of the DNN fingerprinting or watermarking techniques used to protect the victim model. As such, the stolen model needs to universally defeat DNN watermarking and fingerprinting techniques so that IP infringement cannot be proven.

A Unifying View of DNN Fingerprinting and Watermarking
Before detailing IPRemover, we present a unifying view of fingerprinting and watermarking techniques. This is important for proposing an attack that can defeat both watermarking and fingerprinting; otherwise, different attack strategies would likely be required to defeat different defenses. While the underlying techniques behind watermarking and fingerprinting vary significantly on the surface, they all exploit the unique characteristics of a model's decision space. Specifically, a model's decision space can be divided into space that is on the task manifold and space that is off the task manifold. As depicted in Fig. 1, existing fingerprinting and watermarking techniques are based on one of three directions.

Figure 1: The three directions of watermarking and fingerprinting techniques.

The first direction focuses on unique transitions from space on the task manifold to space off the task manifold. These unique transitions only exist in the model under protection and cannot be found in other independently trained models. Most existing DNN IP protection techniques are based on this. For instance, AEs can be seen as data points that are off the task manifold (Gilmer et al. 2018; Khoury and Hadfield-Menell 2018). This means that AE-based fingerprinting techniques use adversarial perturbations as unique transitions that move data points off the task manifold (Chen et al. 2022; Cao, Jia, and Gong 2021). The assumption is that a stolen model will inherit the unique transitions from the victim model. Hence, when these unique transitions are applied to specific data points, a stolen model will behave the same way as the victim model.

The second direction focuses on unique characteristics of space on the task manifold. As an example, Le Merrer et al.
(Le Merrer, Perez, and Trédan 2020) proposed adversarial frontier stitching, which essentially extends the task manifold by fine-tuning a model on AEs. In other work, DAWN (Szyller et al. 2021) deliberately introduces incorrect predictions for specific input so that, given the same input, a stolen model will make the same errors. Such incorrectly classified input resides in space on the task manifold, where it uniquely characterizes the behavior of the model under protection. The third direction focuses on the unique characteristics of space off the task manifold. A recently proposed method known as MetaFinger (Yang, Wang, and Wang 2022) utilizes meta-learning to generate fingerprints that are off the task manifold.

Fingerprinting and watermarking both exploit the unique characteristics of a model's decision space based on one of these three directions. Fingerprinting finds unique characteristics that already exist, whereas watermarking actively introduces unique characteristics by modifying the model. Hence, to universally defeat both watermarking and fingerprinting, a successful attack must significantly change the model's decision space while maintaining satisfactory model performance.

IPRemover Overview
To universally evade IP infringement detection, an adversary must significantly change the decision space. However, as fingerprints and watermarks are secret information, an adversary does not know which part of the decision space to modify. While a straightforward approach is to alter the entire decision space, doing so would significantly deteriorate the stolen model's performance. We therefore adopt an approach that looks at the problem from a different perspective. An overview of our method is depicted in Fig. 2. IPRemover consists of 3 stages. In the first, training data is inverted from a victim model. In the second, a stolen model is trained from scratch on the generated data. Finally, a specially designed variant of KD, called Virtual Ensemble Knowledge Distillation (VEKD), is applied to distill knowledge from the victim model while evading IP infringement detection. Algorithm 1 gives the workflow of our attack. Details of generating training data and VEKD are presented in the following subsections.

Figure 2: An overview of the IPRemover stages.

Algorithm 1: IPRemover
Input: a victim model v; the size of generated data C
Output: a stolen model s
  generated set: D ← ∅
  while |D| < C do
    z ← N(0, 1)
    initialize θ_g randomly
    sample labels y
    solve Equation 1
    D ← D ∪ generated data
  end while
  create and initialize s
  train s on D
  apply VEKD to transfer knowledge from v to s using D
  return s

Model Inversion
Our model inversion technique trains a generative model from scratch when recovering each batch of training data from the victim model. This strategy was inspired by the state-of-the-art data-free KD method, CMI, proposed by Fang et al. (Fang et al. 2021). The architecture of the generative model simply stacks convolutional layers and upsampling layers. The generative model is trained as follows:

$\min_{\theta_g}\; H(v \circ g(z; \theta_g), y) + \sum_{l=1}^{L_v} \alpha_l\, \ell_{bn}^{l}(v \circ g(z; \theta_g))$  (1)

where $\theta_g$ represents the parameters of the generative model to optimize, $v$ represents the victim model, $H$ denotes the cross-entropy loss, $z$ is sampled from the standard distribution $\mathcal{N}(0, 1)$, and $y$ are randomly selected labels. It should be noted that $z$ is not optimized in our method, while CMI optimizes $z$ and $\theta_g$ together. We observed that optimizing $z$ makes the generated data for the same label look almost the same, which destroys diversity in the generated data and is thus detrimental to our purpose. $L_v$ denotes the number of batch normalization layers in the victim model. $\ell_{bn}$ is the widely used batch normalization regularization originally proposed in (Yin et al. 2020); it constrains the features of each batch normalization layer to be consistent with the running-mean and running-variance values so that the generated data visually resembles the original training data. $\ell_{bn}^{l}$ denotes the loss calculated for the $l$-th layer, with $\alpha_l$ for weighting. Each $\alpha_l$ is independently initialized from a uniform distribution, which differs from the common practice of using one constant to weight all $\ell_{bn}^{l}$; the benefit of random sampling is that it improves diversity in the generated data because different batch normalization layers correspond to different weights. Random initialization of $\theta_g$ and $\alpha_l$ results in diversity in the generated data. A small batch size is used so that a large number of generative models are trained from scratch. After training completes, only correctly classified data is kept. In practice, to be efficient, multiple batches of data can be generated in parallel.
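To make Eq. 1 concrete, the following is a minimal PyTorch-style sketch of inverting one batch. The generator architecture, optimizer, learning rate, and step count are our assumptions (the text only states that the generator stacks convolutional and upsampling layers), so this illustrates the objective rather than the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def bn_stat_hooks(victim):
    """Attach hooks that measure how far each BatchNorm layer's input statistics
    deviate from its stored running mean/variance (the l_bn regularizer of
    Yin et al. 2020). Returns the shared loss list and the hook handles."""
    losses, handles = [], []
    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]                                  # (N, C, H, W)
            mu = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            losses.append(F.mse_loss(mu, module.running_mean)
                          + F.mse_loss(var, module.running_var))
        return hook
    for m in victim.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            handles.append(m.register_forward_hook(make_hook(m)))
    return losses, handles

def invert_batch(victim, generator, num_classes, steps=2000, batch=64, zdim=256):
    """Train one freshly initialized generator against Eq. 1 for a single batch."""
    victim.eval()
    z = torch.randn(batch, zdim)                 # fixed noise; z itself is never optimized
    y = torch.randint(num_classes, (batch,))     # randomly sampled target labels
    bn_losses, handles = bn_stat_hooks(victim)
    # One uniformly sampled weight alpha_l per batch normalization layer.
    alphas = [torch.rand(1).item() for _ in range(len(handles))]
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    for _ in range(steps):
        bn_losses.clear()
        logits = victim(generator(z))            # forward pass fills bn_losses via hooks
        loss = F.cross_entropy(logits, y) + sum(a * l for a, l in zip(alphas, bn_losses))
        opt.zero_grad(); loss.backward(); opt.step()
    for h in handles:
        h.remove()
    with torch.no_grad():                        # keep only correctly classified samples
        keep = victim(generator(z)).argmax(1) == y
        return generator(z)[keep], y[keep]
```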
Virtual Ensemble Knowledge Distillation (VEKD)
After a stolen model is trained on the generated data from scratch, VEKD is applied to transfer knowledge from the victim model to the stolen model while evading IP infringement detection. Unlike naive KD, which only uses the victim model as a single teacher, VEKD includes the stolen model itself as an additional teacher to create an ensemble. When the stolen model serves as a teacher, we change its output probabilities into a form where the predicted label corresponds to a high probability while the remaining probability mass is evenly distributed among the other labels. This approach is similar to the teacher-free KD proposed by Yuan et al. (Yuan et al. 2020). The purpose of including the stolen model itself as an additional "virtual" teacher is to reduce the resemblance between the stolen model and the victim model during knowledge transfer, which is beneficial for bypassing IP defenses. The loss function for VEKD is defined as follows:

$\ell = H(p, y) + \beta\, KD_{\tau}\big(p,\; \epsilon v + (1-\epsilon) q\big)$  (2)

where $p$ and $y$ are the probabilities output by the stolen model and the label of the generated data, respectively, and $v$ denotes the probabilities output by the victim model. $KD_{\tau}$ is the KD loss defined in (Hinton, Vinyals, and Dean 2015) with temperature $\tau$. $q$ is the output probability when the stolen model serves as an additional teacher:

$q_i = \begin{cases} Q, & \text{if } i = \arg\max(p) \\ (1-Q)/(K-1), & \text{otherwise} \end{cases}$  (3)

where $Q$ is a predefined value representing a high probability and $K$ is the number of classes. Finally, $\beta$ balances the different loss terms and $\epsilon$ balances the teachers' output probabilities. If $\epsilon = 1$, VEKD reduces to naive KD, whereas if $\epsilon = 0$, VEKD reduces to teacher-free KD.
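A minimal PyTorch-style sketch of the VEKD objective (Eqs. 2–3) might look as follows; exactly how the temperature is applied to the blended teacher, and the concrete values of β, ϵ, τ, and Q, are our assumptions rather than details given above:

```python
import torch
import torch.nn.functional as F

def vekd_loss(student_logits, victim_logits, y, beta=1.0, eps=0.5, tau=4.0, Q=0.9):
    """VEKD sketch: cross-entropy on the generated labels plus a KD term against
    a blend of the victim's soft labels and a smoothed 'virtual teacher' built
    from the student's own predictions (Eq. 3)."""
    K = student_logits.size(1)
    p = F.softmax(student_logits, dim=1)
    v = F.softmax(victim_logits / tau, dim=1)           # victim teacher (softened)
    # Virtual teacher q: probability Q on the student's predicted class,
    # the rest spread evenly over the other K-1 classes.
    q = torch.full_like(p, (1.0 - Q) / (K - 1))
    q.scatter_(1, p.argmax(dim=1, keepdim=True), Q)
    target = eps * v + (1.0 - eps) * q                  # ensemble of the two teachers
    log_p = F.log_softmax(student_logits / tau, dim=1)
    kd = F.kl_div(log_p, target, reduction="batchmean") * tau ** 2
    return F.cross_entropy(student_logits, y) + beta * kd
```

Setting eps=1.0 recovers plain KD against the victim only, and eps=0.0 recovers teacher-free KD, matching the two limiting cases described in the text.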
Experimental Results
Setup
All experiments were conducted on an Ubuntu 22.04 server with 64 GB of RAM and an NVIDIA A100 GPU. We report average values and the standard deviation of results in the form a ± b, where appropriate. Our code is available at https://github.com/WeiZong01/IPRemover.

We evaluated our method against the different state-of-the-art IP protection techniques shown in Table 1. These techniques cover all three directions previously discussed.

Table 1: The different DNN watermarking and fingerprinting techniques used to evaluate IPRemover.
Category      | Method
Transition    | Jia (Jia et al. 2021); DeepJudge (Chen et al. 2022); IPGuard (Cao, Jia, and Gong 2021)
On Manifold   | CosWM (Charette et al. 2022); FS (Le Merrer, Perez, and Trédan 2020)
Off Manifold  | MetaFinger (Yang, Wang, and Wang 2022)

Specifically, for watermarking, we considered Jia (Jia et al. 2021), FS (Le Merrer, Perez, and Trédan 2020), and CosWM (Charette et al. 2022). We focused on watermarking techniques that aim to be robust against model extraction attacks, because IPRemover is essentially a model extraction attack. Lukas et al. (Lukas et al. 2022) demonstrated that Jia (Jia et al. 2021) and FS (Le Merrer, Perez, and Trédan 2020) are robust against model extraction attacks. On the other hand, watermarking techniques that are not robust against extraction attacks, such as DeepMarks (Lukas et al. 2022), were not considered, as these techniques are already susceptible to such attacks. In addition, the recently proposed CosWM (Charette et al. 2022) was specifically designed to combat ensemble KD. It should be noted that we did not evaluate our method against DAWN (Szyller et al. 2021), even though it is supposed to be robust against model extraction. The reason is that DAWN is an additional component added to a victim model, so under a white-box assumption an adversary can easily remove such a component. For fingerprinting, we evaluated our method against IPGuard (Cao, Jia, and Gong 2021), DeepJudge (Chen et al. 2022), and MetaFinger (Yang, Wang, and Wang 2022). These are recently proposed techniques that present practical solutions and have demonstrated state-of-the-art results. The details of each defense method are discussed in the Technical Appendix.

The datasets used were those widely adopted in deep learning security research, i.e., CIFAR10 (Krizhevsky, Hinton et al. 2009) and the German Traffic Sign Recognition Benchmark (GTSRB) (Stallkamp et al. 2011). CIFAR10 consists of 32 × 32 color images in 10 classes, with 50,000 training images and 10,000 test images; we trained a wide ResNet (Zagoruyko and Komodakis 2016) on it. GTSRB consists of 26,640 images for training and 12,630 images for testing in 43 classes; we resize GTSRB images to 48 × 48 and trained a VGG11 (Sengupta et al. 2019) on it. More details of the datasets and trained models are provided in the Technical Appendix. For each dataset, we independently trained 3 models from scratch. When calculating detection thresholds for fingerprinting, we considered one model as the victim model while the other independently trained models were treated as benign models. For each victim model, we ran IPRemover 3 times. To ensure fairness in our evaluation, whenever the authors of a particular defense published their pre-trained models, we used the pre-trained models rather than training our own.
A Case Study on MetaFinger
We present a detailed case study to evaluate our method against a recently proposed DNN fingerprinting technique called MetaFinger (Yang, Wang, and Wang 2022). MetaFinger is effective in detecting IP infringement based on KD because its fingerprints are generated based on the Kullback–Leibler (KL) divergence, which is also the key component of KD. The purpose of this case study is to demonstrate the effectiveness of our method through comparative experiments. As the authors of MetaFinger published their trained models (https://github.com/kangyangWHU/MetaFinger/), our experiments used their wide ResNet pre-trained on CIFAR10. Using the open-source code, we generated a query set of 100 samples on which the victim model achieved 100% accuracy. We considered the following model stealing scenarios:
1. Scenario "GTSRB": KD is applied using GTSRB as out-of-distribution (OOD) data to transfer knowledge from the victim model to a stolen model.
2. Scenario "CIFAR100": KD is applied using CIFAR100 (Krizhevsky, Hinton et al. 2009) as OOD data to transfer knowledge from the victim model to a stolen model.
3. Scenario "Data-free": VEKD with only generated data.
4. Scenario "1% data": VEKD with a mixture of generated data and 1% labeled training data.
5. Scenario "5% data": VEKD with a mixture of generated data and 5% labeled training data.
It should be noted that, while we focus on the data-free attack, Scenarios 4 and 5, which use a small amount of training data, are included in the case study to demonstrate the potential benefit to an adversary of having access to extra data. For each class, we generated 5,000 images so that the size of our generated data matches the size of the original training data.
In Scenarios 4 and 5 ("1% data" and "5% data", respectively), the labeled training data was upsampled by 100 times and 20 times, respectively, so that its size equaled the size of the generated data. As MetaFinger does not define a method for calculating detection thresholds, we used the maximum accuracy on the query set achieved by fine-tuning a ResNet as the detection threshold. The ResNet was pre-trained on ImageNet, and fine-tuning it on CIFAR10 does not infringe on the IP of the victim model. We ran these experiments 3 times and obtained a threshold of 62%. The experimental results are shown in Table 2.

Table 2: Experimental results for the MetaFinger case study.
Scenario   | Accuracy (%)  | Query (%)     | Detected*
GTSRB      | 85.32 ± 0.11  | 85.33 ± 0.94  | 3/3
CIFAR100   | 89.37 ± 0.13  | 98.67 ± 1.25  | 3/3
Data-free  | 82.98 ± 0.18  | 44.33 ± 3.77  | 0/3
1% data    | 84.59 ± 0.29  | 39.67 ± 2.49  | 0/3
5% data    | 87.40 ± 0.19  | 33.00 ± 4.32  | 0/3
*: IP infringement is detected if the accuracy on the query set exceeds a threshold of 62%. The victim model achieved 90.92% accuracy on the test set and 100% accuracy on the query set.

As expected, IP infringement is easily detected when KD is used, even when data from a different task, i.e., GTSRB, is used. In contrast, our IPRemover managed to evade detection. For the data-free case, the stolen model achieved 82.98% accuracy, which is 7.94% lower than the 90.92% accuracy achieved by the victim model. If our generated data is mixed with 5% labeled training data, the accuracy of the stolen model increases to 87.40%, only 3.52% lower than the accuracy of the victim model. An interesting observation is that when more labeled training data is accessible, the accuracy on the query set decreases: it is 44.33% in the "Data-free" scenario and drops to 33.00% when 5% labeled training data is included. This implies that stolen models resemble the victim model less when more labeled training data is accessible.

Defeating Other Defenses
The data-free IPRemover was evaluated against the other defenses. For CIFAR10, we generated 5,000 images for each class. For GTSRB, we generated at most 600 images per class. We observed that the success rate of generation was low for some GTSRB classes, which may be due to the uneven distribution of the original training set; we stopped generating images for GTSRB when most classes contained 600 images and the success rate of generating the remaining images was low. Details can be found in the Technical Appendix. The experimental results on CIFAR10 and GTSRB are shown in Tables 3 and 4, respectively.

Table 3: Experimental results on CIFAR10.
Defense    | Victim Acc (%) | Stolen Acc (%) | Threshold†  | Detect+ | Stolen Metric
IPGuard    | 93.25          | 83.71 ± 0.10   | 0.0         | ↑       | 0.0 ± 0.0
DeepJudge* | 93.25          | 83.71 ± 0.10   | RobD: 0.326 | ↓       | 0.426 ± 0.006
           |                |                | JSD: 0.217  |         | 0.282 ± 0.005
Jia        | 92.16          | 83.65 ± 0.26   | 10% (99%)   | ↑       | 5.22 ± 1.10%
FS         | 93.30          | 85.24 ± 0.23   | 87% (100%)  | ↑       | 84.00 ± 0.82%
CosWM      | 82.40          | 75.23 ± 0.34   | 8.0 (39.83) | ↑       | 1.85 ± 1.36
*: DeepJudge proposed two metrics: "RobD" and "JSD".
+: ↑ (↓) means IP infringement is detected if the measured metric is higher (lower) than the metric of benign models.
†: for fingerprinting, the detection threshold was set to the worst value calculated from benign models; for watermarking, the watermark accuracy of the victim model or of a provided detectable stolen model is shown in parentheses.

Table 4: Experimental results on GTSRB.
Defense    | Victim Acc (%) | Stolen Acc (%) | Threshold†     | Detect+ | Stolen Metric
IPGuard    | 97.24          | 91.66 ± 0.29   | 0.03           | ↑       | 0.060 ± 0.008
DeepJudge* | 97.24          | 91.66 ± 0.29   | RobD: 0.133    | ↓       | 0.271 ± 0.077
           |                |                | JSD: 0.079     |         | 0.156 ± 0.048
Jia        | 96.67          | 90.77 ± 0.21   | 2.33% (81.33%) | ↑       | 0.89 ± 0.83%
FS         | 96.95          | 90.48 ± 0.44   | 77% (96%)      | ↑       | 68.33 ± 0.47%
*: DeepJudge proposed two metrics: "RobD" and "JSD".
+: ↑ (↓) means IP infringement is detected if the measured metric is higher (lower) than the metric of benign models.
†: for fingerprinting, the detection threshold was set to the worst value calculated from benign models; for watermarking, the watermark accuracy of the victim model or of a provided detectable stolen model is shown in parentheses.

We used the same set of hyperparameters and the same generator architecture for all defenses on both datasets. This means the accuracy of our stolen models in these experiments represents a lower bound: if an adversary has knowledge of the defense, the hyperparameters can be adaptively adjusted to improve the accuracy of the stolen models. In addition, if additional training data were used, the performance gap between stolen models and victim models would decrease further. There was only one exception, IPGuard on GTSRB, where the metrics of our stolen models slightly exceeded the worst metric of the benign models. Nonetheless, the metrics of our stolen models were still close to 0, which would force model owners to use a low threshold for IPGuard, e.g., 0.04; such a low threshold renders IPGuard unreliable in practice. Moreover, we only used 2 benign models in the experiments, and the threshold for IPGuard is expected to increase if more benign models are involved. Hence, we conclude that IPRemover bypassed all the defenses on CIFAR10 and GTSRB.

For DeepJudge, an interesting observation was that untargeted AEs were highly transferable to other independently trained models. For example, running two iterations of PGD with a perturbation bound of 0.1 and 0.001 made the RobD and JSD of independently trained models less than 0.1 on CIFAR10. These low values make DeepJudge unreliable, since the thresholds would be close to 0. Empirically, this high transferability of untargeted AEs is strongly related to the number of PGD iterations. As shown in the Technical Appendix, using different values for the perturbation bound only slightly affects the overall metrics, whereas running 3 or more iterations significantly lowers the metrics, making them close to 0. A potential reason is that we applied standard normalization to the input, which differs from the open-source implementation of DeepJudge. However, standard normalization is common practice in many machine learning tasks to put input features on a similar scale, as it stabilizes the training process and improves the performance of trained models. Our experimental results imply that standard normalization may also facilitate the transferability of untargeted AEs between independently trained models.
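For reference, a minimal untargeted L∞ PGD of the kind discussed above could be sketched as follows. Reading "0.1 and 0.001" as the perturbation bound and the step size is our interpretation, and DeepJudge's actual attack configuration may differ:

```python
import torch
import torch.nn.functional as F

def untargeted_pgd(model, x, y, eps=0.1, step=0.001, iters=2):
    """Minimal untargeted L-infinity PGD: take a few gradient-ascent steps on
    the cross-entropy loss and keep the perturbation within an eps-ball of x."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()        # ascend the loss
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
    return x_adv.detach()
```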
Comparison with CMI
To date, there is no method that can detect IP infringement in generated data. Hence, it is advantageous for a stolen model to be trained from scratch without KD: in this case, model owners cannot claim IP on the stolen model because it was trained independently on "legal" data. We compared our method with CMI by varying the dependence on KD. The authors of CMI published their inverted data for a "wrn-40-2" model trained on CIFAR10 (https://github.com/zju-vipa/CMI), so we also applied our method to this model to generate training data. We used the same set of hyperparameters and generator architecture for generating data as in the previous experiments on CIFAR10, although this victim model is much smaller.

Figure 3: Mean accuracy and standard deviation obtained by running VEKD with different ϵ.

Fig. 3 shows the results of running VEKD 3 times with different ϵ. Recall that a smaller ϵ corresponds to less dependence on KD. The results show that, for both CMI and our method, the accuracy of the models decreases with less dependence on KD. However, models trained on our generated data achieve higher accuracy when ϵ ≤ 0.1. When no KD is applied (ϵ = 0), models trained on our generated data achieve 10% higher accuracy.

Figure 4: Visual comparison of the generated images: (a) CMI, (b) our method.

Fig. 4 shows randomly selected images for a visual comparison. Compared to CMI, the colors of our generated data are visually clearer and more natural.

Conclusion and Future Work
In this work, we proposed a generative model inversion attack that can defeat both DNN fingerprinting and watermarking techniques. We considered the challenging data-free scenario where data is inverted from a victim model. After a stolen model is trained on the generated data, VEKD is applied to transfer knowledge from the victim model to the stolen model while evading IP infringement detection. Our work reveals a novel attack surface that exploits model inversion attacks to bypass DNN IP protection. In future work, we will explore methods for detecting IP infringement in generated data, which is an untouched research direction. Another interesting direction is to extend our work to defeat IP protection in areas other than image recognition.

Acknowledgments
This work is partially supported by the Data61 CRP project. W. Susilo is supported by the Australian Research Council Australian Laureate Fellowship FL230100033. J.
Kim is partially supported by an IITP grant funded by the Korea government (MSIT) (No. RS-2022-00155966, Artificial Intelligence Convergence Innovation Human Resources Development (Ewha Womans University)).

References
Adi, Y.; Baum, C.; Cisse, M.; Pinkas, B.; and Keshet, J. 2018. Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In 27th USENIX Security Symposium (USENIX Security 18), 1615–1631.
Amodei, D.; Ananthanarayanan, S.; Anubhai, R.; Bai, J.; Battenberg, E.; Case, C.; Casper, J.; Catanzaro, B.; Cheng, Q.; Chen, G.; et al. 2016. Deep Speech 2: End-to-end speech recognition in English and Mandarin. In International Conference on Machine Learning, 173–182. PMLR.
Cao, X.; Jia, J.; and Gong, N. Z. 2021. IPGuard: Protecting intellectual property of deep neural networks via fingerprinting the classification boundary. In Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security, 14–25.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-End Object Detection with Transformers. In Vedaldi, A.; Bischof, H.; Brox, T.; and Frahm, J., eds., Computer Vision – ECCV 2020 – 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I, volume 12346 of Lecture Notes in Computer Science, 213–229. Springer.
Charette, L.; Chu, L.; Chen, Y.; Pei, J.; Wang, L.; and Zhang, Y. 2022. Cosine Model Watermarking against Ensemble Distillation. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 – March 1, 2022, 9512–9520. AAAI Press.
Chen, J.; Wang, J.; Peng, T.; Sun, Y.; Cheng, P.; Ji, S.; Ma, X.; Li, B.; and Song, D. 2022. Copy, right? A testing framework for copyright protection of deep learning models. In 2022 IEEE Symposium on Security and Privacy (SP), 824–841. IEEE.
Chen, S.; Kahla, M.; Jia, R.; and Qi, G.-J. 2021. Knowledge-enriched distributional model inversion attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16178–16187.
Chowdhary, K. 2020. Natural language processing. Fundamentals of Artificial Intelligence, 603–649.
Fang, G.; Song, J.; Wang, X.; Shen, C.; Wang, X.; and Song, M. 2021. Contrastive Model Inversion for Data-Free Knowledge Distillation. CoRR, abs/2105.08584.
Gilmer, J.; Metz, L.; Faghri, F.; Schoenholz, S. S.; Raghu, M.; Wattenberg, M.; and Goodfellow, I. 2018. Adversarial spheres. arXiv preprint arXiv:1801.02774.
Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G.; Elsen, E.; Prenger, R.; Satheesh, S.; Sengupta, S.; Coates, A.; et al. 2014. Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567.
Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Jia, H.; Choquette-Choo, C. A.; Chandrasekaran, V.; and Papernot, N. 2021. Entangled watermarks as a defense against model extraction. In 30th USENIX Security Symposium (USENIX Security 21), 1937–1954.
Khoury, M.; and Hadfield-Menell, D. 2018. On the geometry of adversarial examples. arXiv preprint arXiv:1811.00525.
Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2017. ImageNet classification with deep convolutional neural networks.
Communications of the ACM, 60(6): 84–90.
Le Merrer, E.; Perez, P.; and Trédan, G. 2020. Adversarial frontier stitching for remote neural network watermarking. Neural Computing and Applications, 32(13): 9233–9244.
Lukas, N.; Jiang, E.; Li, X.; and Kerschbaum, F. 2022. SoK: How robust is image classification deep neural network watermarking? In 2022 IEEE Symposium on Security and Privacy (SP), 787–804. IEEE.
Lukas, N.; Zhang, Y.; and Kerschbaum, F. 2021. Deep Neural Network Fingerprinting by Conferrable Adversarial Examples. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3–7, 2021. OpenReview.net.
Nguyen, N.-B.; Chandrasegaran, K.; Abdollahzadeh, M.; and Cheung, N.-M. 2023. Re-thinking Model Inversion Attacks Against Deep Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16384–16393.
Orekondy, T.; Schiele, B.; and Fritz, M. 2019. Knockoff nets: Stealing functionality of black-box models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4954–4963.
Peng, Z.; Li, S.; Chen, G.; Zhang, C.; Zhu, H.; and Xue, M. 2022. Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13430–13439.
Sengupta, A.; Ye, Y.; Wang, R.; Liu, C.; and Roy, K. 2019. Going deeper in spiking neural networks: VGG and residual architectures. Frontiers in Neuroscience, 13: 95.
Shafahi, A.; Najibi, M.; Ghiasi, M. A.; Xu, Z.; Dickerson, J.; Studer, C.; Davis, L. S.; Taylor, G.; and Goldstein, T. 2019. Adversarial training for free! Advances in Neural Information Processing Systems, 32.
Stallkamp, J.; Schlipsing, M.; Salmen, J.; and Igel, C. 2011. The German traffic sign recognition benchmark: A multi-class classification competition. In The 2011 International Joint Conference on Neural Networks, 1453–1460. IEEE.
Szyller, S.; Atli, B. G.; Marchal, S.; and Asokan, N. 2021. DAWN: Dynamic Adversarial Watermarking of Neural Networks. In Shen, H. T.; Zhuang, Y.; Smith, J. R.; Yang, Y.; César, P.; Metze, F.; and Prabhakaran, B., eds., MM '21: ACM Multimedia Conference, Virtual Event, China, October 20–24, 2021, 4417–4425. ACM.
Wang, K.-C.; Fu, Y.; Li, K.; Khisti, A.; Zemel, R.; and Makhzani, A. 2021. Variational model inversion attacks. Advances in Neural Information Processing Systems, 34: 9706–9719.
Wang, M.; Qiu, H.; Zhang, T.; Qiu, M.; and Thuraisingham, B. 2023. Mitigating Query-based Neural Network Fingerprinting via Data Augmentation. ACM Transactions on Sensor Networks.
Yang, K.; Wang, R.; and Wang, L. 2022. MetaFinger: Fingerprinting the Deep Neural Networks with Meta-training. In Raedt, L. D., ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23–29 July 2022, 776–782. ijcai.org.
Yin, H.; Molchanov, P.; Alvarez, J. M.; Li, Z.; Mallya, A.; Hoiem, D.; Jha, N. K.; and Kautz, J. 2020. Dreaming to distill: Data-free knowledge transfer via DeepInversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8715–8724.
Yu, S.; Chen, J.; Han, H.; and Jiang, S. 2023. Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 24266–24275.
Yuan, L.; Tay, F.
E.; Li, G.; Wang, T.; and Feng, J. 2020. Revisiting knowledge distillation via label smoothing regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3903–3911.
Zagoruyko, S.; and Komodakis, N. 2016. Wide Residual Networks. In Wilson, R. C.; Hancock, E. R.; and Smith, W. A. P., eds., Proceedings of the British Machine Vision Conference 2016, BMVC 2016, York, UK, September 19–22, 2016. BMVA Press.
Zhang, Y.; Jia, R.; Pei, H.; Wang, W.; Li, B.; and Song, D. 2020. The secret revealer: Generative model-inversion attacks against deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 253–261.
Zheng, M.; You, S.; Huang, L.; Wang, F.; Qian, C.; and Xu, C. 2022. SimMatch: Semi-supervised learning with similarity matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14471–14481.
DiffBEV: Conditional Diffusion Model for Bird's Eye View Perception
Jiayu Zou1,3, Kun Tian1,3, Zheng Zhu2, Yun Ye2, Xingang Wang1*
1Institute of Automation, Chinese Academy of Sciences
2PhiGent Robotics
3University of Chinese Academy of Sciences
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract
BEV perception is of great importance in the field of autonomous driving, serving as the cornerstone of planning, controlling, and motion prediction. The quality of the BEV feature highly affects the performance of BEV perception. However, taking the noise in camera parameters and LiDAR scans into consideration, we usually obtain a BEV representation with harmful noise. Diffusion models naturally have the ability to denoise noisy samples toward the ideal data, which motivates us to utilize a diffusion model to obtain a better BEV representation. In this work, we propose an end-to-end framework, named DiffBEV, that exploits the potential of diffusion models to generate a more comprehensive BEV representation. To the best of our knowledge, we are the first to apply a diffusion model to BEV perception. In practice, we design three types of conditions to guide the training of the diffusion model, which denoises the coarse samples and refines the semantic feature in a progressive way. What's more, a cross-attention module is leveraged to fuse the context of the BEV feature and the semantic content of the conditional diffusion model. DiffBEV achieves a 25.9% mIoU on the nuScenes dataset, which is 6.2% higher than the best-performing existing approach. Quantitative and qualitative results on multiple benchmarks demonstrate the effectiveness of DiffBEV on BEV semantic segmentation and 3D object detection tasks.

Introduction
Bird's Eye View (BEV) perception plays a crucial role in autonomous driving tasks, which need a compact and accurate representation of the real world. One of the most important components of BEV perception is the quality of the BEV feature. Taking the classical LSS (Philion and Fidler 2020) as an illustration, it first extracts image features from the backbone encoder and then transforms them into BEV space along with depth estimation. However, the downstream perception results are often distorted, since the flat-world assumption is not always valid and the feature distribution in BEV is usually sparse. As shown in Fig. 1, when LSS (Philion and Fidler 2020) is utilized as the view transformer, the final segmentation results have three deficiencies:
(1) The prediction of dynamic object boundaries is ambiguous, where pixels of different vehicles are connected;
(2) The perception of static areas such as pedestrian crossings and walkways is too rough; in particular, there are many redundant predictions on the nuScenes benchmark;
(3) LSS (Philion and Fidler 2020) has poor discriminative ability between background and foreground pixels. In the last two rows of Fig. 1, the drivable area and vehicle objects of interest are misclassified as background.
The above observations intuitively motivate us to explore more fine-grained and highly detailed BEV features for downstream perception tasks. Taking the noise in camera parameters and LiDAR scans into consideration, we usually obtain a BEV representation with harmful noise. Diffusion models naturally have the ability to denoise noisy samples toward the ideal data.
Recently, diffusion probability models (DPM) have illustrated their great power in generative tasks (Meng et al. 2021; Kim, Kwon, and Ye 2022; Bond-Taylor et al. 2022; Janner et al. 2022), but their potential in BEV perception tasks has not been fully explored. In this work, we propose DiffBEV, a novel framework that utilizes a conditional DPM to improve the quality of the BEV feature and push the boundary of BEV perception. In DiffBEV, the depth distribution or the BEV feature obtained from the view transformer is the input of the conditional DPM. DiffBEV explores the potential of the conditional diffusion model and progressively refines the noisy BEV feature. Then, a cross-attention module is proposed to fuse the fine-grained output of the conditional diffusion model and the original BEV feature. This module adaptively builds the content relationship between the generated feature and the source BEV content, which helps to obtain a more precise and compact perception result. DiffBEV is an end-to-end framework and can be easily extended by altering the task-specific decoders. In this paper, we evaluate the performance of BEV semantic segmentation on standard benchmarks, i.e., nuScenes (Caesar et al. 2020), KITTI Raw (Geiger, Lenz, and Urtasun 2012), KITTI Odometry (Behley et al. 2019), and KITTI 3D Object (Geiger, Lenz, and Urtasun 2012). DiffBEV achieves a 25.9% mIoU on the nuScenes benchmark, which is 6.2% higher than previous best-performing approaches. DiffBEV outperforms other methods in the segmentation of drivable area, pedestrian crossing, walkway, and car by a substantial margin (+5.0%, +10%, +6.7%, and +11.6% IoU, respectively).

Figure 1 (columns: Original Image, LSS, Ground Truth; rows: nuScenes, KITTI Odometry, KITTI 3D Object; legend: drivable area, car, walkway, pedestrian crossing, carpark, bicycle, bus, trailer, construction vehicle, pedestrian, motorcycle, cone, truck, barrier): The poor segmentation results of the LSS (Philion and Fidler 2020) model on the nuScenes and KITTI datasets. Qualitative visualization results show that DiffBEV presents clearer edges than existing approaches.

Furthermore, we compare the performance of 3D object detection on the popular nuScenes benchmark with other modern 3D detectors. Without bells and whistles, DiffBEV benefits 3D object detection and provides approximately 1% NDS improvement on nuScenes. DiffBEV achieves leading performance in both BEV semantic segmentation and 3D object detection. Our contributions can be summarized as follows.
(1) To the best of our knowledge, DiffBEV is the first work that utilizes a conditional DPM to assist multiple autonomous driving perception tasks in BEV. Furthermore, DiffBEV needs no extra pre-training stage and is optimized in an end-to-end manner along with the downstream tasks.
(2) The conditional DPM and the attentive fusion module are proposed to refine the original BEV feature in a progressive way, and they can be seamlessly extended to different perspective view transformers, e.g., VPN (Pan et al. 2020), LSS (Philion and Fidler 2020), PON (Roddick and Cipolla 2020), and PYVA (Yang et al. 2021).
(3) Extensive experiments on multiple benchmarks demonstrate that DiffBEV achieves state-of-the-art performance and is effective in semantic segmentation and 3D object detection. DiffBEV achieves a 25.9% mIoU on the nuScenes dataset, which outperforms the previous best-performing approach (Philion and Fidler 2020) by a substantial margin, i.e., 6.2% mIoU.
Related Works

Diffusion Model

Diffusion models are widely used in Artificial Intelligence Generated Content (AIGC) and are of great importance among generative models. Diffusion models have illustrated their power in image generation (Rombach et al. 2022; Xiao, Kreis, and Vahdat 2021; Graikos et al. 2022; Huang, Lim, and Courville 2021), detection (Chen et al. 2022a), segmentation (Chen et al. 2022b; Amit et al. 2021; Baranchuk et al. 2021), image-to-image translation (Kawar et al. 2022; Choi et al. 2021), super resolution (Saharia et al. 2022), image inpainting (Bond-Taylor et al. 2022), image editing (Meng et al. 2021), text-to-image (Kim, Kwon, and Ye 2022; Avrahami, Fried, and Lischinski 2022; Gu et al. 2022a), video generation (Singer et al. 2022; Ho et al. 2022), point clouds (Zeng et al. 2022; Zhou, Du, and Wu 2021; Luo and Hu 2021), and human motion synthesis (Janner et al. 2022; Shao et al. 2022). DDPM-Segmentation (Baranchuk et al. 2021) is the first work to apply the diffusion model to semantic segmentation: it pre-trains a diffusion model and then trains classifiers for each pixel. However, this two-stage paradigm of pre-training and fine-tuning costs considerable training time, which is harmful to model efficiency. DiffusionInst (Gu et al. 2022b) applies the diffusion model to instance segmentation. A generalist framework (Chen et al. 2022b) leverages the diffusion model to generate results of panoptic segmentation. To this end, we are motivated to further explore the potential of employing the diffusion model to generate a high-quality representation for BEV perception tasks. Compared with DDPM-Segmentation (Baranchuk et al. 2021), DiffBEV is a generalist end-to-end framework, which can be optimized along with downstream tasks.

BEV Semantic Segmentation

BEV semantic segmentation is a fundamental and crucial vision task in BEV scene understanding and serves as the cornerstone of path planning and controlling. VPN (Pan et al. 2020) and PYVA (Yang et al. 2021) present the layout of static or dynamic objects through learnable fully connected layers and attention mechanisms, respectively. LSS (Philion and Fidler 2020) takes advantage of camera parameters to lift image-view features to BEV and is widely applied in modern 3D detectors. HFT (Zou et al. 2022) presents an approach to leverage the strengths of both camera-parameter-free methods and camera-parameter-based methods. CVT (Zhou and Krähenbühl 2022) extracts the content from surrounding-view images and achieves a simple yet effective design. GitNet (Gong et al. 2022) follows a two-stage paradigm, improving the segmentation performance with a geometry-guided pre-alignment module and a ray-based transformer. However, these works suffer from defective factors, such as distortion caused by inaccurate camera parameters. In DiffBEV, we propose a conditional diffusion model to refine the distorted features and improve the performance of previous methods for BEV semantic segmentation.

3D Object Detection

3D object detection (Duan et al. 2019; Wang et al. 2022b) is a prevailing research topic in autonomous driving. FCOS3D (Wang et al. 2021) proposes 3D centerness and learns the 3D attributes. PGD (Wang et al. 2022a) explores the geometric relationship of different objects and improves depth estimation. PETR (Liu et al. 2022) projects the camera parameters of multi-view images into 3D positional embeddings. BEVDet (Huang et al. 2021) shows the positive effects of data augmentation in image view and BEV.
BEVDet4D (Huang and Huang 2022) explores both the spatial and temporal content to improve the performance. BEVDepth (Li et al. 2022) exploits the explicit depth supervision of multi-view images and further pushes the boundary of 3D object detection. BEVerse (Zhang et al. 2022) proposes a unified framework that jointly handles the tasks of 3D object detection, map construction, and motion prediction. In our work, we further exploit the ability of the conditional diffusion model to handle the task of 3D object detection.

Approach

Framework Overview

Fig. 2 shows the overall architecture of DiffBEV, which comprises an image-view backbone, a view transformer, a conditional diffusion model, a cross-attention module, and a task-specific decoder. DiffBEV does not require an independent pre-training stage and is trained in an end-to-end manner. The image-view backbone extracts the image features, and the view transformer lifts the image-view features to BEV. The conditional diffusion model refines noisy samples and generates a high-quality semantic feature. The cross-attention module is in charge of merging the BEV feature and the output of the conditional diffusion model. Finally, a task-specific decoder is applied for downstream BEV perception tasks, such as segmentation and 3D object detection. In practice, LSS (Philion and Fidler 2020) is adopted as the default view transformer in our implementation.

Conditional Diffusion Probability Model

Diffusion Probability Model. We formulate the conditional diffusion probability model in this section. The feature generated by the view transformer is treated as the condition of the diffusion model. The noise $x_T$ obeys the standard normal distribution $\mathcal{N}(0, I)$. The diffusion model transforms the noise $x_T$ to the original sample $x_0$ in a progressive way. We denote the variance at step $t$ ($0 \leq t \leq T$) as $\beta_t$. The forward process of the conditional diffusion model is presented as follows:

$q(x_t \mid x_{t-1}) \sim \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big)$ (1)

For convenience, we denote a series of constants:

$\alpha_t = 1-\beta_t, \qquad \bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s$ (2)

The noisy sample at step $t$ is transformed from the input data $x_0$ by Eq. 3:

$q(x_t \mid x_0) \sim \mathcal{N}\big(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\,I\big)$ (3)

$x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon, \quad \text{where } \epsilon \sim \mathcal{N}(0, I)$ (4)

$\Sigma_\theta(x_t, t)$ is the covariance predictor and $\epsilon_\theta(x_t, t)$ is the denoising model. In our experiments, a typical variant of UNet (Wu et al. 2022) is used as the denoising network. In the denoising process, the diffusion model progressively refines the noisy sample $x_t$. The reverse diffusion process is written as Eq. 5:

$p_\theta(x_{t-1} \mid x_t) \sim \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big)$ (5)

The Design of Condition. In practice, there are three types of conditions $x_{cond}$ to choose from: (1) the original BEV feature from the view transformer ($F^{O\text{-}BEV} \in \mathbb{R}^{C \times H \times W}$); (2) the semantic feature learned from the depth distribution ($F^{S\text{-}BEV} \in \mathbb{R}^{C \times H \times W}$); (3) the element-wise sum of $F^{O\text{-}BEV}$ and $F^{S\text{-}BEV}$. The view transformer lifts the image-view feature to BEV space, obtaining the original BEV feature $F^{O\text{-}BEV}$. For each point, the view transformer estimates the distribution over different predefined depth ranges and generates the corresponding depth distribution $F^{d} \in \mathbb{R}^{c \times h \times w}$. We employ a $1 \times 1$ convolutional layer to convert the channels and interpolate $F^{d}$ into $F^{S\text{-}BEV}$, which has the same size as $F^{O\text{-}BEV}$. The above three conditions are features in BEV space, where we add Gaussian noise.
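To make the forward process above concrete, the following is a minimal PyTorch sketch of Eqs. 2–4 (noising a BEV feature). The linear β schedule, the number of steps, and the tensor shapes are illustrative assumptions on our part, not DiffBEV's released settings.

```python
import torch

# Illustrative linear beta schedule (Eq. 2); the exact schedule used by
# DiffBEV is an assumption here.
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)          # beta_t
alphas = 1.0 - betas                            # alpha_t = 1 - beta_t
alpha_bars = torch.cumprod(alphas, dim=0)       # bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0: torch.Tensor, t: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) via Eq. 4:
    x_t = sqrt(bar_alpha_t) * x_0 + sqrt(1 - bar_alpha_t) * eps."""
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)     # broadcast over (B, C, H, W)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps

# Example: noise a batch of (hypothetical) BEV features at random timesteps.
bev = torch.randn(4, 64, 200, 200)              # feature shape is assumed
t = torch.randint(0, T, (4,))
eps = torch.randn_like(bev)
x_t = q_sample(bev, t, eps)                     # fed to the conditional denoiser
```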
By denoising samples progressively, we hope the conditional diffusion model helps to learn the fine-granularity content of objects, such as precise boundaries and highly detailed shapes. We strictly follow the standard DPM to add BEV noise, while the difference is that we employ condition-modulated denoising, which is shown in Fig. 2. Given the noisy BEV feature $x_t$ and the condition $x_{cond}$ at time step $t$, $x_t$ is further encoded and interacts with $x_{cond}$ through element-wise multiplication. To alleviate the computational burden, we set a flexible choice for the encoding mechanism of the noisy BEV feature $x_t$, i.e. the self-attention mechanism or a simple convolutional layer, which will be discussed in the ablation study. A UNet-style structure, whose components include an encoder and a decoder, serves as the denoising network $\epsilon_\theta(x_t, t)$.

Figure 2: Overall architecture of DiffBEV. DiffBEV is comprised of the image backbone, view transformer, conditional diffusion model, cross-attention module, and task-specific decoder. By flexibly changing the task-specific decoder, DiffBEV can be easily extended to different downstream tasks, such as segmentation and 3D object detection.

Cross-Attention Module

After obtaining the output of the conditional diffusion model, we design a cross-attention module (CA) to refine the original BEV feature, which is shown in Fig. 3. Specifically, the output of the conditional diffusion model is treated as the source of $K$ and $V$, while the original BEV feature from the perspective view transformer is projected into $Q$. The cross-attention process of the two-stream features is formulated as:

$\mathrm{CA}(Q, K, V) = \mathrm{Attn}\big(QW_i^Q,\ KW_i^K,\ VW_i^V\big)\,W^{Out}, \qquad \mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\Big(\frac{QK^T}{\sqrt{d_k}}\Big)V$ (6)

$Q$, $K$, and $V$ are linearly mapped to calculate the attention matrix $\mathrm{Attn}$, where $W_i^Q$, $W_i^K$, and $W_i^V$ are projection layers with shapes $\mathbb{R}^{d_{model} \times d_q}$, $\mathbb{R}^{d_{model} \times d_k}$, and $\mathbb{R}^{d_{model} \times d_v}$. Then, the refined BEV feature is obtained from the output layer $W^{Out} \in \mathbb{R}^{d_v \times d_{model}}$, which aims to facilitate the downstream tasks to learn better.

Figure 3: Overall structure of the cross-attention module.

Training Loss

Depth Loss. Given the intrinsic parameter matrix $K_i \in \mathbb{R}^{3 \times 3}$, the rotation matrix $R_i \in \mathbb{R}^{3 \times 3}$, and the translation matrix $t_i \in \mathbb{R}^{3}$, we introduce a depth loss $L_{depth}$ to assist model training. The depth loss is defined as the binary cross entropy (BCE) between the predicted depth map $D_i$ and $D_i^*$. The specific process is expressed as:

$P_i = K_i(R_i P + t_i), \qquad D_i^{*} = \mathrm{one\_hot}(P_i), \qquad L_{depth} = \mathrm{BCE}(D_i^{*}, D_i)$ (7)

Diffusion Loss. We denote the Gaussian noise at time step $t$ as $\bar{z}_t$; the remaining symbols are defined in the conditional DPM formulation above. The diffusion loss $L_{diff}$ is defined as:

$L_{diff} = \mathbb{E}\big[\,\|\bar{z}_t - \epsilon_\theta(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\bar{z}_t,\ t)\|^2\,\big]$ (8)

Task-specific Training Loss. The training losses for segmentation and detection can be written as Eq. 9:

$L_{seg} = L_{wce} + \lambda_1 L_{depth} + \lambda_2 L_{diff}, \qquad L_{det} = L_{detect} + \lambda_1 L_{depth} + \lambda_2 L_{diff}$ (9)

In practice, we empirically set the loss weights $\lambda_1 = 10$ and $\lambda_2 = 1$. We introduce the details of the segmentation loss $L_{wce}$ and the detection loss $L_{detect}$ in the supplementary material.
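As a minimal sketch of how the overall objective in Eq. 9 might be assembled, consider the snippet below. The task losses ($L_{wce}$, $L_{detect}$) are left as opaque inputs since their details live in the supplementary material, and all function names are ours.

```python
import torch.nn.functional as F

LAMBDA_1, LAMBDA_2 = 10.0, 1.0  # empirical loss weights from Eq. 9

def diffusion_loss(noise, noise_pred):
    # Eq. 8: regress the injected Gaussian noise with an L2 objective.
    return F.mse_loss(noise_pred, noise)

def depth_loss(depth_logits, depth_one_hot):
    # Eq. 7: BCE between the predicted depth map and the one-hot projection.
    return F.binary_cross_entropy_with_logits(depth_logits, depth_one_hot)

def total_loss(task_loss, depth_logits, depth_one_hot, noise, noise_pred):
    # Eq. 9: L = L_task + lambda_1 * L_depth + lambda_2 * L_diff, where
    # L_task is L_wce for segmentation or L_detect for detection.
    return (task_loss
            + LAMBDA_1 * depth_loss(depth_logits, depth_one_hot)
            + LAMBDA_2 * diffusion_loss(noise, noise_pred))
```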
Task-specific Decoder

As a general framework for BEV perception, DiffBEV can reason about different downstream tasks by altering the task-specific decoder. We adopt a residual-style decoding head for the semantic segmentation task, which consists of 8 convolutional blocks and a fully connected (FC) layer. Each convolutional block has a convolution layer, followed by batch normalization (BN) and a rectified linear unit (ReLU) layer. As for the 3D object detection task, the classification and regression heads are composed of several convolution layers, respectively. Please refer to CenterPoint (Yin, Zhou, and Krahenbuhl 2021) for more structural details.
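The residual-style segmentation head described above can be sketched as follows; the channel width, class count, and the exact placement of the residual connections are our assumptions, since the text only specifies 8 conv–BN–ReLU blocks plus a final FC layer.

```python
import torch
import torch.nn as nn

class ResidualSegHead(nn.Module):
    """Sketch: 8 convolutional blocks (Conv-BN-ReLU) with residual connections,
    followed by a 1x1 convolution acting as the per-location FC classifier."""
    def __init__(self, channels: int = 256, num_classes: int = 14):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(8)
        ])
        self.classifier = nn.Conv2d(channels, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = x + block(x)          # residual-style connection (our reading)
        return self.classifier(x)

# Usage on a refined BEV feature of assumed shape (B, 256, 200, 200):
head = ResidualSegHead()
logits = head(torch.randn(2, 256, 200, 200))   # (B, num_classes, H, W)
```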
Experiment

Datasets

We compare the performance of DiffBEV with existing methods on four different benchmarks, i.e. nuScenes (Caesar et al. 2020), KITTI Raw (Geiger, Lenz, and Urtasun 2012), KITTI Odometry (Behley et al. 2019), and KITTI 3D Object (Geiger, Lenz, and Urtasun 2012). As a popular benchmark in autonomous driving, the nuScenes (Caesar et al. 2020) dataset is collected by six surrounding cameras and one LiDAR, and includes multi-view images and point clouds of 1,000 scenes. KITTI Raw (Geiger, Lenz, and Urtasun 2012) and KITTI Odometry (Behley et al. 2019) provide the images and BEV ground truth of the static road layout, while KITTI 3D Object (Geiger, Lenz, and Urtasun 2012) provides the images and labels for dynamic vehicles. By flexibly leveraging different task-specific decoders, DiffBEV can be extended to various downstream tasks. In this work, extensive experiments are conducted on the BEV semantic segmentation and 3D object detection tasks.

Implementation Details

We train all semantic segmentation models using the AdamW optimizer (Loshchilov and Hutter 2017) with the learning rate and weight decay set to 2e-4 and 0.01. Two NVIDIA GeForce RTX 3090 GPUs are utilized, and the mini-batch per GPU is set to 4 images. The input resolution is 800 × 600 for nuScenes and 1024 × 1024 for the KITTI datasets. The total training schedule includes 20,000 iterations (200,000 iterations for nuScenes), and the warm-up strategy (Goyal et al. 2017) gradually increases the learning rate for the first 1,500 iterations. Then, a cyclic policy (Yan, Mao, and Li 2018) linearly decreases the learning rate from 2e-4 to 0 during the remainder of the training process. For 3D object detection, we follow the implementation details of BEVDet (Huang et al. 2021). For the image backbone, the Swin Transformer (Liu et al. 2021) is initialized with weights pre-trained on the ImageNet (Russakovsky et al. 2015) dataset. The model structures of VPN (Pan et al. 2020), PON (Roddick and Cipolla 2020), LSS (Philion and Fidler 2020), and PYVA (Yang et al. 2021) are the same as in the original papers. In addition, we mainly follow the methods of the BEVDet (Huang et al. 2021) family to achieve 3D object detection; the training and testing details are consistent with (Huang et al. 2021) and (Huang and Huang 2022). Last but not least, there is no extra pre-training stage for the conditional diffusion probability model, which is optimized in an end-to-end manner along with the downstream tasks.

BEV Semantic Segmentation

Evaluation on the nuScenes benchmark. In this part, we compare the effectiveness of DiffBEV with other approaches on the pixel-wise segmentation task. Both the layout of static objects and that of dynamic objects are estimated on the nuScenes benchmark. As illustrated in Tab. 1, we report the segmentation performance of DiffBEV and the advanced methods described in the related works. It can be seen that the previous state-of-the-art method LSS (Philion and Fidler 2020) is better at predicting static objects with wide coverage, such as the drivable area, walkway, and pedestrian crossing, than at the car, pedestrian, bicycle, etc. This is because dynamic objects usually occupy fewer pixels and appear less frequently in BEV. A similar pattern can also be observed for PYVA (Yang et al. 2021) and PON (Roddick and Cipolla 2020), which achieve comparable accuracy in the drivable area class but perform worse in rare classes, such as truck, bus, and trailer. In contrast, DiffBEV brings a remarkable improvement in the Intersection over Union (IoU) score of both static and dynamic objects. As listed in Tab. 1, we design three varieties according to the condition: the conditions of DiffBEV-B, DiffBEV-D, and DiffBEV-DB come from the original BEV feature ($F^{O\text{-}BEV}$), the semantic feature learned from the depth distribution ($F^{S\text{-}BEV}$), and the element-wise sum of $F^{O\text{-}BEV}$ and $F^{S\text{-}BEV}$, respectively. DiffBEV-D leads the performance in most classes and achieves a 25.9% mIoU score, which is 6.2% higher than the previous best-performing approach (Philion and Fidler 2020). In particular, DiffBEV improves the segmentation accuracy of the drivable area, pedestrian crossing, walkway, and car by a substantial margin (+5.0%, +10.0%, +6.7%, and +11.6% IoU scores), which are crucial classes for the safety of autonomous driving systems. We attribute this improvement to the conditional DPM reducing noises and complementing more spatial information about objects of interest. DiffBEV significantly improves the pixel-wise perception accuracy of the model in both high-frequency classes and sparsely distributed classes. Please refer to the visualization results for a more intuitive analysis and explanation.

| Method | Drivable | Ped. crossing | Walkway | Carpark | Car | Truck | Bus | Trailer | Constr. veh. | Pedestrian | Motorcycle | Bicycle | Traf. cone | Barrier | Mean |
| VED | 54.7 | 12.0 | 20.7 | 13.5 | 8.8 | 0.2 | 0.0 | 7.4 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 4.0 | 8.7 |
| PYVA | 56.2 | 26.4 | 32.2 | 21.3 | 19.3 | 13.2 | 21.4 | 12.5 | 7.4 | 4.2 | 3.5 | 4.3 | 2.0 | 6.3 | 16.4 |
| VPN | 58.0 | 27.3 | 29.4 | 12.9 | 25.5 | 17.3 | 20.0 | 16.6 | 4.9 | 7.1 | 5.6 | 4.4 | 4.6 | 10.8 | 17.5 |
| PON | 60.4 | 28.0 | 31.0 | 18.4 | 24.7 | 16.8 | 20.8 | 16.6 | 12.3 | 8.2 | 7.0 | 9.4 | 5.7 | 8.1 | 19.1 |
| LSS | 55.9 | 31.3 | 34.4 | 23.7 | 27.3 | 16.8 | 27.3 | 17.0 | 9.2 | 6.8 | 6.6 | 6.3 | 4.2 | 9.6 | 19.7 |
| DiffBEV-B | 65.3 | 40.2 | 41.0 | 27.2 | 37.9 | 21.3 | 32.9 | 20.5 | 7.6 | 9.2 | 13.7 | 13.1 | 7.2 | 16.0 | 25.2 |
| DiffBEV-DB | 64.9 | 39.7 | 40.7 | 27.7 | 37.7 | 22.3 | 32.5 | 21.4 | 12.7 | 9.2 | 13.3 | 12.8 | 6.6 | 15.9 | 25.5 |
| DiffBEV-D | 65.4 | 41.3 | 41.1 | 28.4 | 38.9 | 23.1 | 33.7 | 21.1 | 8.4 | 9.6 | 14.4 | 13.2 | 7.5 | 16.7 | 25.9 |

Table 1: Intersection over Union scores (%) of hybrid scene layout estimation on the nuScenes val dataset. IPM and Unproj. are reported only on a subset of classes; their scores in the source are 40.1, 14.0, 4.9, 3.0, 0.6, 0.8, 0.2 and 27.1, 14.1, 11.3, 6.7, 2.2, 2.8, 1.3, respectively.

Evaluation on the KITTI Raw, KITTI Odometry, and KITTI 3D Object benchmarks. Tab. 2 reports the quantitative results of static scene layout estimation on the KITTI Raw and KITTI Odometry datasets, while the comparison on the KITTI 3D Object dataset shows the segmentation results for dynamic vehicles. All three varieties of DiffBEV obtain higher mIoU and mAP scores than existing methods. For example, DiffBEV-D surpasses the second-best model PYVA (Yang et al. 2021) by 0.71%, 1.51%, and 7.97% mIoU on the KITTI Raw, KITTI Odometry, and KITTI 3D Object datasets, achieving state-of-the-art perception accuracy consistently on all evaluation benchmarks.

| Method | Raw mIoU | Raw mAP | Odometry mIoU | Odometry mAP | 3D Object mIoU | 3D Object mAP |
| OFT | – | – | – | – | 25.34 | 34.69 |
| MonoOcc | 58.41 | 66.01 | 65.74 | 67.84 | 20.45 | 22.29 |
| Mono3D | 59.58 | 79.07 | 66.81 | 81.79 | 17.11 | 26.62 |
| VPN | 64.65 | 78.20 | 78.16 | 84.73 | 26.52 | 35.54 |
| PYVA | 65.70 | 81.62 | 78.19 | 85.55 | 29.11 | 36.86 |
| PON | 60.47 | 77.45 | 70.92 | 76.27 | 26.78 | 44.50 |
| DiffBEV-B | 66.19 | 81.08 | 79.48 | 88.30 | 36.76 | 52.81 |
| DiffBEV-DB | 66.40 | 81.89 | 79.58 | 88.44 | 37.08 | 53.96 |
| DiffBEV-D | 66.41 | 81.91 | 79.70 | 89.68 | 36.99 | 53.61 |

Table 2: Segmentation performance of static scene layout estimation on KITTI Raw and KITTI Odometry, and dynamic scene layout estimation on KITTI 3D Object.

3D Object Detection

We conduct 3D object detection experiments on the nuScenes benchmark, and Tab. 3 reports the official evaluation metrics: mean Average Precision (mAP), Average Translation Error (ATE), Average Scale Error (ASE), Average Orientation Error (AOE), Average Velocity Error (AVE), Average Attribute Error (AAE), and nuScenes Detection Score (NDS). Note that we select LSS (Philion and Fidler 2020) as the default view transformer and use the semantic feature learned from the depth distribution ($F^{S\text{-}BEV}$) as the condition of DiffBEV. The data augmentations in image view and BEV are strictly consistent with those of BEVDet (Huang et al. 2021) and BEVDet4D (Huang and Huang 2022). After applying the conditional diffusion model, it can be observed that all evaluation metrics for 3D object detection are improved.
This is because DiffBEV progressively refines the original BEV feature and interactively exchanges the semantic context through the cross-attention mechanism. Without bells and whistles, BEVDet (Huang et al. 2021) with DiffBEV raises the NDS score from 38.7% to 39.8%, while BEVDet4D (Huang and Huang 2022) with DiffBEV raises the NDS score from 47.6% to 48.6%.

| Methods | Image Size | mAP↑ | mATE↓ | mASE↓ | mAOE↓ | mAVE↓ | mAAE↓ | NDS↑ |
| CenterNet | – | 0.306 | 0.716 | 0.264 | 0.609 | 1.426 | 0.658 | 0.328 |
| FCOS3D | 1600×900 | 0.295 | 0.806 | 0.268 | 0.511 | 1.315 | 0.170 | 0.372 |
| DETR3D | 1600×900 | 0.303 | 0.860 | 0.278 | 0.437 | 0.967 | 0.235 | 0.374 |
| PGD | 1600×900 | 0.335 | 0.732 | 0.263 | 0.423 | 1.285 | 0.172 | 0.409 |
| PETR-R50 | 1056×384 | 0.313 | 0.768 | 0.278 | 0.564 | 0.923 | 0.225 | 0.381 |
| PETR-R101 | 1408×512 | 0.357 | 0.710 | 0.270 | 0.490 | 0.885 | 0.224 | 0.421 |
| PETR-Tiny | 1408×512 | 0.361 | 0.732 | 0.273 | 0.497 | 0.808 | 0.185 | 0.431 |
| BEVDet-Tiny | 704×256 | 0.310 | 0.681 | 0.273 | 0.570 | 0.933 | 0.223 | 0.387 |
| BEVDet-Tiny+DiffBEV | 704×256 | 0.315 | 0.660 | 0.265 | 0.567 | 0.878 | 0.219 | 0.398 |
| BEVDet4D-Tiny | 704×256 | 0.338 | 0.672 | 0.274 | 0.460 | 0.337 | 0.185 | 0.476 |
| BEVDet4D-Tiny+DiffBEV | 704×256 | 0.344 | 0.652 | 0.262 | 0.453 | 0.312 | 0.176 | 0.486 |

Table 3: 3D object detection performance of different paradigms on the nuScenes val set. Tiny means the tiny Swin Transformer.

Ablation Study

Condition Design. In order to exploit the advantages of the conditional diffusion model, we conduct ablation experiments for different DPM conditions on the KITTI Raw dataset to estimate the layout of static roads. Specifically, there are three DPM conditions to choose from, i.e. the original BEV feature ($F^{O\text{-}BEV}$), the semantic feature learned from the depth distribution ($F^{S\text{-}BEV}$), and the element-wise sum of $F^{O\text{-}BEV}$ and $F^{S\text{-}BEV}$ (both). As shown in Tab. 4, all three conditions can guide the DPM to learn a discriminative BEV feature. $F^{S\text{-}BEV}$ and the sum of $F^{S\text{-}BEV}$ and $F^{O\text{-}BEV}$ achieve better modulation effects than $F^{O\text{-}BEV}$ alone, while the best segmentation result comes from $F^{S\text{-}BEV}$. This observation demonstrates the effectiveness of the semantic feature learned from the depth distribution.

Feature Interaction Mechanism. Another ablation study explores the most effective way to perform feature interaction. As shown in each row of Tab. 4, regardless of which feature interaction mechanism is employed, DiffBEV achieves better segmentation results than the baseline model with 63.38% mIoU. It can be seen that cross-attention learns a better BEV feature than the other two simple feature interactions, which is beneficial for the downstream perception tasks. In summary, the combination of $F^{S\text{-}BEV}$ and the cross-attention feature interaction mechanism achieves the best segmentation result, improving mIoU by 2.48% over the LSS (Philion and Fidler 2020) baseline.

| Interaction Mechanism | F^S-BEV | F^O-BEV | both |
| Concat | 65.03 | 64.81 | 64.95 |
| Add | 64.85 | 64.11 | 64.50 |
| Cross-Attention | 65.86 | 64.33 | 65.16 |

Table 4: Ablation study on condition design and feature fusion mechanism.
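To make the fusion step of Eq. 6 concrete, below is a minimal single-head PyTorch sketch of the cross-attention interaction. Multi-head projections and the exact dimensions of the paper's module are simplified away, and the class name is ours.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Single-head sketch of Eq. 6: Q comes from the original BEV feature,
    while K and V come from the conditional DPM output."""
    def __init__(self, d_model: int, d_k: int = 64):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_k)
        self.w_k = nn.Linear(d_model, d_k)
        self.w_v = nn.Linear(d_model, d_k)
        self.w_out = nn.Linear(d_k, d_model)    # W^Out in Eq. 6

    def forward(self, bev_feat: torch.Tensor, dpm_out: torch.Tensor) -> torch.Tensor:
        b, c, h, w = bev_feat.shape
        # Flatten (B, C, H, W) feature maps into token sequences (B, H*W, C).
        q = self.w_q(bev_feat.flatten(2).transpose(1, 2))
        k = self.w_k(dpm_out.flatten(2).transpose(1, 2))
        v = self.w_v(dpm_out.flatten(2).transpose(1, 2))
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        refined = self.w_out(attn @ v)          # refined BEV tokens
        return refined.transpose(1, 2).reshape(b, c, h, w)
```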
If not specified, the DiffBEV model corresponds to the setting of $F^{S\text{-}BEV}$ with the cross-attention mechanism.

Encoding Mechanism for Noisy BEV Samples. For the noisy BEV sample $x_t$, we either calculate the self-attention semantic map or obtain the refined affinity map through a simple convolutional layer. Tab. 5 compares the computational burden and segmentation performance. The DiffBEV model using the self-attention mechanism achieves a higher 65.86% mIoU and an 80.62% mAP. Simplifying self-attention to a single convolutional layer yields a 64.23% mIoU and a 78.34% mAP while decreasing the GFLOPs from 446.81 to 433.72.

| Encoding | #param. | GFLOPs | mIoU | mAP |
| Conv | 78.16M | 433.72 | 64.23 | 78.34 |
| Self-Attention | 78.80M | 446.81 | 65.86 | 80.62 |

Table 5: Ablation study on the encoding mechanism in the conditional diffusion model. The mIoU and mAP (%) of the basic LSS (Philion and Fidler 2020) on the KITTI Raw dataset are 63.38% and 77.52%, respectively.

More View Transformers with DiffBEV

In the main experiments, we adopt LSS (Philion and Fidler 2020) as the view transformer. To investigate the generality of DiffBEV, we conduct experiments on more view transformers. As shown in Tab. 6, each model equipped with DiffBEV outperforms its version without the DPM on both the mIoU and mAP metrics by a significant margin. Benefiting from DiffBEV, the models of VPN (Pan et al. 2020), PYVA (Yang et al. 2021), and PON (Roddick and Cipolla 2020) raise their mIoU scores (+1.19%, +1.61%, and +0.59%, respectively) and mAP scores (+10.14%, +7.01%, and +10.11%, respectively). This observation illustrates that DiffBEV is not tied to a specific view transformer.

| Model | mIoU w/o DiffBEV | mIoU w/ DiffBEV | mAP w/o DiffBEV | mAP w/ DiffBEV |
| VPN | 27.02 | 28.21 (+1.19) | 35.63 | 45.77 (+10.14) |
| PYVA | 29.22 | 30.83 (+1.61) | 36.97 | 43.98 (+7.01) |
| PON | 36.49 | 37.08 (+0.59) | 45.51 | 55.62 (+10.11) |

Table 6: Extension experiments of more view transformers with DiffBEV on the KITTI 3D Object dataset. The metrics (%) represent the performance without and with DiffBEV, respectively.

Visualization Analysis

As indicated in Fig. 4, previous state-of-the-art methods tend to output relatively rough predictions. For instance, cars that should be independent individuals are connected into a strip region, and the drivable area is misclassified as background. Despite the complex and challenging street layouts in the nuScenes dataset, DiffBEV produces more accurate semantic maps and is able to resolve fine-grained details such as the spatial separation between neighboring vehicles, especially in crowded autonomous driving scenarios.

Figure 4: Qualitative segmentation results on the nuScenes benchmark (columns: image, VPN, PYVA, PON, LSS, DiffBEV, ground truth). We visualize the class with the largest index c which has occupancy probability pi > 0.5. Black regions (outside the field of view or with no LiDAR returns) are ignored during evaluation.
Conclusion

In this work, we propose a novel framework, namely DiffBEV, which is the first to apply the conditional diffusion model to BEV perception tasks. DiffBEV utilizes the BEV feature and the semantic feature learned from the depth distribution as the condition of the diffusion model, which progressively refines the noisy samples to generate highly detailed information. Then, a cross-attention module is proposed to attentively learn the interactive relationship between the output of the conditional DPM and the BEV feature. Extensive experiments on multiple benchmarks illustrate that DiffBEV achieves favorable performance in both semantic segmentation and 3D object detection. DiffBEV obtains a 25.9% mIoU on nuScenes, outperforming the previous state-of-the-art method by a substantial margin. The extension studies on different view transformers confirm the generality of DiffBEV. We hope to further explore the potential of DiffBEV and broaden its application range to more perception tasks.

References

Amit, T.; Nachmani, E.; Shaharbany, T.; and Wolf, L. 2021. SegDiff: Image segmentation with diffusion probabilistic models. arXiv preprint arXiv:2112.00390.
Avrahami, O.; Fried, O.; and Lischinski, D. 2022. Blended latent diffusion. arXiv preprint arXiv:2206.02779.
Baranchuk, D.; Rubachev, I.; Voynov, A.; Khrulkov, V.; and Babenko, A. 2021. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126.
Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Stachniss, C.; and Gall, J. 2019. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Bond-Taylor, S.; Hessey, P.; Sasaki, H.; Breckon, T. P.; and Willcocks, C. G. 2022. Unleashing transformers: Parallel token prediction with discrete absorbing diffusion for fast high-resolution image generation from vector-quantized codes. In Proceedings of the European Conference on Computer Vision.
Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Chen, S.; Sun, P.; Song, Y.; and Luo, P. 2022a. DiffusionDet: Diffusion model for object detection. arXiv preprint arXiv:2211.09788.
Chen, T.; Li, L.; Saxena, S.; Hinton, G.; and Fleet, D. J. 2022b. A generalist framework for panoptic segmentation of images and videos. arXiv preprint arXiv:2210.06366.
Choi, J.; Kim, S.; Jeong, Y.; Gwon, Y.; and Yoon, S. 2021. ILVR: Conditioning method for denoising diffusion probabilistic models. arXiv preprint arXiv:2108.02938.
Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; and Tian, Q. 2019. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Gong, S.; Ye, X.; Tan, X.; Wang, J.; Ding, E.; Zhou, Y.; and Bai, X. 2022. GitNet: Geometric prior-based transformation for birds-eye-view segmentation. arXiv preprint arXiv:2204.07733.
Goyal, P.; Dollár, P.; Girshick, R.; Noordhuis, P.; Wesolowski, L.; Kyrola, A.; Tulloch, A.; Jia, Y.; and He, K. 2017. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677.
Graikos, A.; Malkin, N.; Jojic, N.; and Samaras, D. 2022. Diffusion models as plug-and-play priors. arXiv preprint arXiv:2206.09012.
Gu, S.; Chen, D.; Bao, J.; Wen, F.; Zhang, B.; Chen, D.; Yuan, L.; and Guo, B. 2022a. Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Gu, Z.; Chen, H.; Xu, Z.; Lan, J.; Meng, C.; and Wang, W. 2022b. DiffusionInst: Diffusion model for instance segmentation. arXiv preprint arXiv:2212.02773.
Ho, J.; Salimans, T.; Gritsenko, A.; Chan, W.; Norouzi, M.; and Fleet, D. J. 2022. Video diffusion models. arXiv preprint arXiv:2204.03458.
Huang, C.-W.; Lim, J. H.; and Courville, A. C. 2021. A variational perspective on diffusion-based generative models and score matching. In Advances in Neural Information Processing Systems.
Huang, J.; and Huang, G. 2022. BEVDet4D: Exploit temporal cues in multi-camera 3D object detection. arXiv preprint arXiv:2203.17054.
Huang, J.; Huang, G.; Zhu, Z.; and Du, D. 2021. BEVDet: High-performance multi-camera 3D object detection in bird-eye-view. arXiv preprint arXiv:2112.11790.
Janner, M.; Du, Y.; Tenenbaum, J. B.; and Levine, S. 2022. Planning with diffusion for flexible behavior synthesis. arXiv preprint arXiv:2205.09991.
Kawar, B.; Elad, M.; Ermon, S.; and Song, J. 2022. Denoising diffusion restoration models. arXiv preprint arXiv:2201.11793.
Kim, G.; Kwon, T.; and Ye, J. C. 2022. DiffusionCLIP: Text-guided diffusion models for robust image manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Li, Y.; Ge, Z.; Yu, G.; Yang, J.; Wang, Z.; Shi, Y.; Sun, J.; and Li, Z. 2022. BEVDepth: Acquisition of reliable depth for multi-view 3D object detection. arXiv preprint arXiv:2206.10092.
Liu, Y.; Wang, T.; Zhang, X.; and Sun, J. 2022. PETR: Position embedding transformation for multi-view 3D object detection. arXiv preprint arXiv:2203.05625.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin Transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030.
Loshchilov, I.; and Hutter, F. 2017. Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101.
Luo, S.; and Hu, W. 2021. Diffusion probabilistic models for 3D point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Meng, C.; Song, Y.; Song, J.; Wu, J.; Zhu, J.-Y.; and Ermon, S. 2021. SDEdit: Image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073.
Pan, B.; Sun, J.; Leung, H. Y. T.; Andonian, A.; and Zhou, B. 2020. Cross-view semantic segmentation for sensing surroundings. IEEE Robotics and Automation Letters.
Philion, J.; and Fidler, S. 2020. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3D. In Proceedings of the European Conference on Computer Vision.
Roddick, T.; and Cipolla, R. 2020. Predicting semantic map representations from images using pyramid occupancy networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. S.; Berg, A. C.; and Fei-Fei, L. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision.
Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2022. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Shao, R.; Zheng, Z.; Zhang, H.; Sun, J.; and Liu, Y. 2022. DiffuStereo: High quality human reconstruction via diffusion-based stereo using sparse cameras. In Proceedings of the European Conference on Computer Vision.
Singer, U.; Polyak, A.; Hayes, T.; Yin, X.; An, J.; Zhang, S.; Hu, Q.; Yang, H.; Ashual, O.; Gafni, O.; et al. 2022. Make-A-Video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792.
Wang, T.; Xinge, Z.; Pang, J.; and Lin, D. 2022a. Probabilistic and geometric depth: Detecting objects in perspective. In Conference on Robot Learning.
Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021. FCOS3D: Fully convolutional one-stage monocular 3D object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Wang, Y.; Guizilini, V. C.; Zhang, T.; Wang, Y.; Zhao, H.; and Solomon, J. 2022b. DETR3D: 3D object detection from multi-view images via 3D-to-2D queries. In Conference on Robot Learning.
Wu, J.; Fang, H.; Zhang, Y.; Yang, Y.; and Xu, Y. 2022. MedSegDiff: Medical image segmentation with diffusion probabilistic model. arXiv preprint arXiv:2211.00611.
Xiao, Z.; Kreis, K.; and Vahdat, A. 2021. Tackling the generative learning trilemma with denoising diffusion GANs. arXiv preprint arXiv:2112.07804.
Yan, Y.; Mao, Y.; and Li, B. 2018. SECOND: Sparsely embedded convolutional detection. Sensors.
Yang, W.; Li, Q.; Liu, W.; Yu, Y.; Ma, Y.; He, S.; and Pan, J. 2021. Projecting your view attentively: Monocular road scene layout estimation via cross-view transformation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Yin, T.; Zhou, X.; and Krahenbuhl, P. 2021. Center-based 3D object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Zeng, X.; Vahdat, A.; Williams, F.; Gojcic, Z.; Litany, O.; Fidler, S.; and Kreis, K. 2022. LION: Latent point diffusion models for 3D shape generation. arXiv preprint arXiv:2210.06978.
Zhang, Y.; Zhu, Z.; Zheng, W.; Huang, J.; Huang, G.; Zhou, J.; and Lu, J. 2022. BEVerse: Unified perception and prediction in bird's-eye-view for vision-centric autonomous driving. arXiv preprint arXiv:2205.09743.
Zhou, B.; and Krähenbühl, P. 2022. Cross-view transformers for real-time map-view semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Zhou, L.; Du, Y.; and Wu, J. 2021. 3D shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Zou, J.; Xiao, J.; Zhu, Z.; Huang, J.; Huang, G.; Du, D.; and Wang, X. 2022. HFT: Lifting perspective representations via hybrid feature transformation. arXiv preprint arXiv:2204.05068.
Cross-Covariate Gait Recognition: A Benchmark

Shinan Zou1, Chao Fan2,3, Jianbo Xiong1, Chuanfu Shen2,4, Shiqi Yu2,3, Jin Tang1*
1School of Automation, Central South University
2Department of Computer Science and Engineering, Southern University of Science and Technology
3Research Institute of Trustworthy Autonomous System, Southern University of Science and Technology
4The University of Hong Kong
{zoushinan, jianbo x, tjin}@csu.edu.cn, {12131100, 11950016}@mail.sustech.edu.cn, [email protected]

*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Gait datasets are essential for gait research. However, this paper observes that present benchmarks, whether conventional constrained or emerging real-world datasets, fall short regarding covariate diversity. To bridge this gap, we undertake an arduous 20-month effort to collect a cross-covariate gait recognition (CCGR) dataset. The CCGR dataset has 970 subjects and about 1.6 million sequences; almost every subject has 33 views and 53 different covariates. Compared to existing datasets, CCGR has both population-level and individual-level diversity. In addition, the views and covariates are well labeled, enabling the analysis of the effects of different factors. CCGR provides multiple types of gait data, including RGB, parsing, silhouette, and pose, offering researchers a comprehensive resource for exploration. In order to delve deeper into addressing cross-covariate gait recognition, we propose parsing-based gait recognition (ParsingGait) by utilizing the newly proposed parsing data. We have conducted extensive experiments. Our main results show: 1) cross-covariate recognition emerges as a pivotal challenge for practical applications of gait recognition; 2) ParsingGait demonstrates remarkable potential for further advancement; 3) alarmingly, existing SOTA methods achieve less than 43% accuracy on CCGR, highlighting the urgency of exploring cross-covariate gait recognition. Link: https://github.com/ShinanZou/CCGR.

Introduction

Gait recognition aims to use physiological and behavioral characteristics extracted from walking videos to verify individuals' identities. Compared to other biometric modalities, such as the face, fingerprints, and iris, gait patterns have the distinct advantage of being extractable from a distance in uncontrolled environments. These strengths make gait recognition an effective solution for security applications.

In the latest literature, research on gait recognition is developing rapidly, with evaluation benchmarks evolving from early indoor to outdoor environments. During this remarkable journey, the most representative gait models (Chao et al. 2019; Lin, Zhang, and Yu 2021) boasting historical progress have unexpectedly produced unsatisfactory results when faced with the emerging challenges posed by real-world gait datasets such as GREW (Zhu et al. 2021) and Gait3D (Zheng et al. 2022). Surprisingly, successive works (Fan et al. 2023b,a) quickly address this performance gap to a large extent, rekindling the promise of gait recognition for practical applications, as illustrated in Figure 1(a).

Figure 1: Differences between CCGR and other datasets. Population-level diversity is roughly quantified by the count of covariate categories present within the whole dataset. Correspondingly, individual-level diversity is measured by the count of covariate categories for each subject. Here, the population-level diversity of Gait3D and GREW is rich, but the exact amount is unknown due to the wild scenarios.
However, this paper argues that the gait recognition task is much more challenging than these datasets have defined. In general, previous indoor gait datasets often require subjects to repeatedly walk along fixed paths while introducing variations in clothing and carrying. This approach yields controllable and well-annotated data, facilitating the early exploration of key covariates influencing recognition accuracy. However, as shown in Fig. 1(b), these datasets fall short regarding population-level diversity, as their subjects all share the same limited group of covariates. Conversely, the emergence of outdoor datasets effectively addresses this limitation due to their real-world collection scenarios.

| Dataset | #Id | #Seq | #Cam | Data types | Covariates except view | Environment | Diversity |
| CMU MoBo | 25 | 600 | 6 | RGB, Sil. | TR, Speed, BA, IN | Controlled | Not Rich |
| SOTON | 115 | 2,128 | 2 | RGB, Sil. | TR | Controlled | Not Rich |
| USF | 122 | 1,870 | 2 | RGB | CO, GR, SH, BR, DU | Controlled | Not Rich |
| CASIA-B | 124 | 13,640 | 11 | RGB, Sil. | Coat, Bag | Controlled | Not Rich |
| CASIA-C | 153 | 1,530 | 1 | Inf., Sil. | SP, Bag | Controlled | Not Rich |
| OU-ISIR Speed | 34 | 612 | 1 | Sil. | TR, Speed | Controlled | Not Rich |
| OU-ISIR Cloth | 68 | 2,764 | 1 | Sil. | TR, CL | Controlled | Not Rich |
| OU-ISIR MV | 168 | 4,200 | 25 | Sil. | TR | Controlled | Not Rich |
| OU-LP | 4,007 | 7,842 | 2 | Sil. | None | Controlled | Not Rich |
| TUM GAID | 305 | 3,370 | 1 | RGB, Depth, A. | DU, BAC, SH | Controlled | Not Rich |
| OU-LP Age | 63,846 | 63,846 | 1 | Sil. | Age | Controlled | Not Rich |
| OU-MVLP | 10,307 | 288,596 | 14 | Sil., Pose, 3DM. | None | Controlled | Not Rich |
| OU-LP Bag | 62,528 | 187,584 | 1 | Sil. | Carrying | Controlled | Not Rich |
| GREW | 26,345 | 128,671 | 882 | Sil., Flow, Pose | Free walking | Wild | Population-Level |
| ReSGait | 172 | 870 | 1 | Sil., Pose | Free walking | Wild | Population-Level |
| UAV-Gait | 202 | 9,895 | 6 | Sil., Pose | None | Controlled | Not Rich |
| Gait3D | 4,000 | 25,309 | 39 | Sil., Pose, 3DM. | Free walking | Wild | Population-Level |
| CASIA-E | 1,014 | 778,752 | 26 | RGB, Sil. | Bag, CL, WS | Controlled | Not Rich |
| CCPG | 200 | 16,566 | 10 | RGB, Sil. | CL | Controlled | Not Rich |
| SUSTech1K | 1,050 | 25,279 | 12 | RGB, Sil., 3DP | Bag, CL, UB, OC, NI | Controlled | Not Rich |
| CCGR (ours) | 970 | 1,580,617 | 33 | RGB, Parsing, Sil., Pose | 53 types per subject, as detailed in Figure 2 | Controlled | Population- and Individual-Level |

Table 1: Comparison of CCGR with existing datasets. Sil., Inf., A., and 3DM. mean silhouette, infrared, audio, and 3D Mesh&SMPL. #Id, #Seq, and #Cam refer to the number of identities, sequences, and cameras. BAC, CO, GR, BR, DU, IN, BA, TR, SH, CL, UB, UF, OC, NI, and WS are abbreviations of backpack, concrete, grass, briefcase, duration, incline, ball, treadmill, shoes, clothing, umbrella, uniform, occlusion, night, and walking style. CMU MoBo (Gross and Shi 2001); SOTON (Shutler et al. 2004); USF (Sarkar et al. 2005); CASIA-B (Yu, Tan, and Tan 2006); CASIA-C (Tan et al. 2006); OU-ISIR Speed (Mansur et al. 2014); OU-ISIR Cloth (Altab Hossain et al. 2010); OU-ISIR MV (Makihara, Mannami, and Yagi 2011); OU-LP (Iwama et al. 2012); TUM GAID (Hofmann et al. 2014); OU-LP Age (Xu et al. 2017); OU-MVLP (Takemura et al. 2018; An et al. 2020; Li et al. 2022); OU-LP Bag (Uddin et al. 2018); GREW (Zhu et al. 2021); ReSGait (Mu et al. 2021); UAV-Gait (Ding et al. 2022); Gait3D (Zheng et al. 2022); CASIA-E (Song et al. 2022); CCPG (Li et al. 2023); SUSTech1K (Shen et al. 2023).
Although their data distribution closely mirrors practical applications, we contend that current outdoor gait datasets lack individual-level diversity, as each subject typically contributes no more than seven variants (sequences) on average. This situation gives rise to two potential drawbacks for research: a) a majority of data pairs may qualify as "easy cases" owing to limited collection areas and short-term data gathering; b) the lack of fine annotations hinders the exploration of critical challenges relevant to real-world applications. More details of the existing datasets are given in Table 1.

To overcome these limitations, we propose a novel gait recognition benchmark that introduces both population-level and individual-level diversity, named Cross-Covariate Gait Recognition, or CCGR. Statistically, the CCGR dataset covers 970 subjects and approximately 1.6 million walking sequences. These sequences span 53 distinct walking conditions and 33 different filming views. Thus, each subject within CCGR ideally contains a comprehensive collection of 53 × 33 = 1,749 sequences. Notably, the walking conditions are widely distributed and well annotated, encompassing diverse factors such as carried items (book, bag, box, umbrella, trolley case, heavy bag, and heavy box), road types (up the stair, down the stair, up the ramp, down the ramp, bumpy road, soft road, and curved road), styles of walking (fast, stationary, normal, hands in pockets, free, and crowd), and more. The all-side camera array consisting of 33 cameras is installed at five different heights, effectively simulating the pitch angles of typical CCTV cameras. Every subject is recruited through a transparent process and accompanied by written consent. The age range of subjects spans from 6 to 70 years. The dataset encompasses raw RGB sequences; releasing RGB images can facilitate the exploration of camera-based gait representations, and this paper officially provides common gait data like silhouette, parsing, and pose. CCGR will be made publicly available for research purposes.

Equipped with the proposed CCGR, we re-implement several representative state-of-the-art methods and find that: 1) cross-covariate gait recognition is more challenging than that simulated by previous gait datasets, as the best achieved rank-1 accuracy is only 42.5%; 2) certain less-researched covariates, such as the crowd, umbrella, overhead view, walking speed, road, mixed covariates, and more, significantly degrade the recognition accuracy; 3) the more covariates involved, from both the population-level and individual-level diversity perspectives, the more challenging gait recognition becomes.

To solve complex covariate problems, this paper further introduces human parsing, which contains many semantic characteristics that describe body parts, to form a parsing-based baseline framework termed ParsingGait. In practice, we instantiate the backbone of ParsingGait using various silhouette-based gait models, consistently achieving significant enhancements.

Figure 2: Examples of 53 covariates in CCGR. For a single covariate (the 1st row and the left of the 2nd row), the red numbers at the top of the pictures are indices of the covariates. For mixed covariates, numbers separated by "/" at the top of the picture indicate the co-occurrence of the multiple single covariates corresponding to these numbers.
By this means, this paper highlights the value of informative gait representations like human parsing images for gait pattern description. In summary, our main contributions are as follows:
• We present the first well-annotated, million-sequence-level gait recognition benchmark, called CCGR, designed to deeply research cross-covariate gait recognition.
• We propose an efficient, compatible, and feasible parsing-based baseline framework named ParsingGait.
• We begin by evaluating existing algorithms to establish a baseline and then validate the effectiveness of ParsingGait. Next, we demonstrate the necessity of incorporating both population- and individual-level diversity. Finally, we thoroughly explore the impact of covariates and views.

The CCGR Benchmark

Covariates of CCGR

The dataset has 53 covariates; 21 are single covariates, while the remaining 32 are mixed covariates. Examples of the 53 covariates are shown in Figure 2.

Carrying: We have defined seven carrying covariates: book, bag, heavy bag, box, heavy box, trolley case, and umbrella. We have prepared 12 different types for the bag category, including single-shoulder bags, double-shoulder bags, satchels, backpacks, and handbags. Similarly, we have prepared eight boxes with varying shapes and volumes for the box category. As for the trolley case, we have prepared options in both 20-inch and 28-inch sizes. When subjects are asked to carry a bag, box, or trolley case, they can choose from the props we have provided. In the case of the heavy bag and heavy box, we have placed counterweights inside them, ranging from 8 kg to 15 kg, to simulate the desired weight.

Clothing: Regarding the thick coat covariate, we have prepared a selection of 20 clothing items, which include down coats, overcoats, windbreakers, jackets, and cotton coats. When subjects are instructed to wear a thick coat, they can choose from our clothing collection.

Road: In addition to the normal road, we have prepared seven road covariates: up/down the stair, up/down the ramp, bumpy road, soft (muddy) road, and curved road. Ramps have a slope of 15°. Curved road means subjects are asked to walk along a curved track instead of a straight path.

Speed: In addition to the normal walking speed, we discuss two additional walking speeds: fast and stationary. Fast entails the subject walking at a speed close to a trot, while stationary refers to the subject remaining in place.

Walking Style: The remaining four single covariates are normal walking, confident, multi-person walking, and freedom walking. Normal walking indicates walking on a horizontal path at a normal speed without wearing a thick coat or carrying any items. Confident means that subjects place their hands inside their pants or clothing pockets. Multi-person walking means multiple subjects walking together. Freedom walking means subjects are free to choose their carrying, clothing, road, and speed.

Mixed covariates: In the real world, multiple covariates often co-occur. For instance, a man may wear a thick coat, carry a bag, and walk up a ramp. To simplify matters, we utilize mixed covariates to represent the co-occurrence of multiple covariates. In CCGR, we have designed 32 mixed covariates that are frequently encountered in daily life. Refer to Figure 2 for further details about these mixed covariates.

Views of CCGR

We rent a 500-square-meter warehouse and set up 33 cameras to collect data.
Camera settings are shown in Figure 4. The cameras are divided into five layers, from bottom to top. Layer 5 is the overhead camera with a pitch angle of 90°. For the other four layers, the pitch angles from bottom to top are 5°, 30°, 55°, and 75°, and the horizontal angles of each layer increase from 0° to 180° counterclockwise. The frame size of the video files is 1280×720, and the frame rate is 25 fps. Figure 3 shows examples with various views.

Figure 3: Examples of 33 views in CCGR. The red numbers at the top of the pictures represent the horizontal angle.

Figure 4: Camera setup in CCGR.

Extraction of Multiple Gait Data

We offer various types of gait data, including RGB, parsing, silhouette, and pose; examples can be seen in Figure 5.

Figure 5: Examples of different gait data in CCGR.

Parsing: Predicting the semantic category of each pixel on the human body is a fundamental task in computer vision, often referred to as human parsing (Liang et al. 2018; Zhao et al. 2018; Gong et al. 2018; Xia et al. 2017). We use QANet (Yang et al. 2021) for parsing extraction. QANet takes an RGB image as its input and produces the semantic category of each pixel on the human body, including hair, face, and left leg. QANet employs integers ranging from 0 to 19 to represent these different categories. To facilitate visualization and image pruning, we multiply these integers by 13 to generate a grayscale image.

Silhouette: We generate the silhouettes by directly binarizing the previously acquired parsing images. We have also tried instance and semantic segmentation algorithms but attained relatively inferior gait recognition accuracy.

Pose: We use HRNet (Sun et al. 2019) to extract 2D poses. We also tried AlphaPose (Fang et al. 2017) and OpenPose (Cao et al. 2017), which resulted in inferior accuracy.
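As a concrete illustration of the conversions described above (label map × 13 for the grayscale parsing image, and binarization for the silhouette), consider the following sketch; treating label 0 as background is our assumption about the QANet label set.

```python
import numpy as np

def parsing_to_gait_data(label_map: np.ndarray):
    """Convert a per-pixel parsing label map (integers 0-19) into the two
    released gait representations: a grayscale parsing image (labels * 13,
    so 19 -> 247 stays within 8 bits) and a binary silhouette."""
    parsing_gray = label_map.astype(np.uint8) * 13          # grayscale parsing
    silhouette = (label_map > 0).astype(np.uint8) * 255     # assumes 0 = background
    return parsing_gray, silhouette

# Example on a dummy label map at the 64 x 44 aligned resolution used later.
labels = np.random.randint(0, 20, size=(64, 44))
gray, sil = parsing_to_gait_data(labels)
```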
Collection, Statistics and Evaluation

Collection Process: To simplify the description, we refer to the covariates mentioned in the previous subsection as "walking conditions". In the normal walking condition, each subject walks twice. In the remaining 52 walking conditions, each subject walks only once per condition. Therefore, a total of 54 walks per subject are required. Since each subject has to walk 54 times, and the walking conditions have to be changed each time, it takes 2 hours to collect one subject.

Dataset statistics: Figure 6 presents the distribution of age and gender in CCGR. The proportions of the various covariates align with the number of walks for each covariate. Furthermore, CCGR exhibits an average of 110 frames per sequence, with more than 94% of sequences containing over 60 frames.

Figure 6: Age and gender attributes. Ages are categorized into five groups (<19, 19–30, 31–45, 46–60, and >60).

Evaluation Protocol: Subjects are labeled from 1 to 1000; subjects 134 to 164 are missing. Subjects 1 to 600 are used for training, and the rest are used for testing. The evaluation metrics are illustrated in Figure 7.

Figure 7: Evaluation metrics. C and V denote covariates and views, where subscripts indicate the order. NM is normal walking. "Easy" is employed by CASIA-B and OU-MVLP (the gallery is normal walking). "Hard" is similar to GREW and Gait3D, closer to real life (the gallery is uncertain).

Parsing-based Gait Recognition

Although silhouette and pose are commonly employed as gait modalities, they possess significant limitations. The silhouette provides only contour information, while the pose offers solely structural details, resulting in sparse and simplistic representations. Consequently, these modalities prove less effective when confronted with complex covariate environments. We are fortunate to discover that parsing can simultaneously provide contour, structural, and semantic information. Notably, parsing eliminates texture and color, providing a basis for treating it as a gait pattern. Parsing and silhouettes have similar data structures, enabling parsing to inherit all silhouette-based algorithms without modification. This convenient compatibility allows us to explore parsing-based gait recognition efficiently. This paper explores the effectiveness of "parsing + silhouette-based algorithms" and calls this combination ParsingGait.

Baseline on CCGR

Appearance-based Approaches: We evaluate several SOTA algorithms: GEINet (Shiraga et al. 2016), GaitSet (Chao et al. 2019), GaitPart (Fan et al. 2020), CSTL (Huang et al. 2021), GaitGL (Lin, Zhang, and Yu 2021), GaitBase (Fan et al. 2023b), and DeepGaitV2 (Fan et al. 2023a).

Implementation details: All silhouettes are aligned by the approach mentioned in (Takemura et al. 2018) and resized to 64 × 44. The batch size is 8 × 16 × 30, where 8 denotes the number of subjects, 16 denotes the number of training samples per subject, and 30 is the number of frames. The optimizer is Adam. The number of iterations is 320K. The learning rate starts at 1e-4 and drops to 1e-5 after 200K iterations. For GaitBase and DeepGaitV2, the optimizer is SGD, the number of iterations is 240K, and the learning rate starts at 1e-1 and drops by 1/10 at 100K, 140K, and 170K iterations. All models are trained on the entire training set.

Model-based Approaches: We evaluate two SOTA algorithms: GaitGraph (Teepe et al. 2021) and GaitGraph2 (Teepe et al. 2022). We train GaitGraph for 1200 epochs with a batch size of 128, and GaitGraph2 for 500 epochs with a batch size of 768.
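The 8 × 16 × 30 batch described in the implementation details can be sketched as a simple P × K × T sampler, as below; sampling sequences with replacement and the uniform temporal crop are our assumptions.

```python
import random

def sample_batch(seqs_by_subject, p: int = 8, k: int = 16, frames: int = 30):
    """Sketch of a P x K x T batch: p subjects, k sequences per subject, and
    a fixed-length clip of `frames` frames per sequence. `seqs_by_subject`
    maps subject id -> list of (sequence id, sequence length)."""
    batch = []
    for sid in random.sample(list(seqs_by_subject), p):
        # With replacement, so subjects with fewer than k sequences still work.
        for seq_id, length in random.choices(seqs_by_subject[sid], k=k):
            start = random.randint(0, max(0, length - frames))  # temporal crop
            batch.append((sid, seq_id, start, start + frames))
    return batch  # p * k clips of `frames` frames each

# Example with a toy index of two subjects (real CCGR has 600 training subjects).
toy = {1: [("s1", 110), ("s2", 95)], 2: [("s3", 120)]}
clips = sample_batch(toy, p=2, k=4, frames=30)
```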
Experiment

Analysis of Representative Methods

The results are shown in Table 2. The R-1hard of GEINet, GaitSet, GaitPart, GaitGL, and CSTL falls below 26%. While these methods demonstrate near 90% accuracy on previous indoor datasets, their validity under complex covariates had not been tested. On the other hand, GaitGraph and GaitGraph2 exhibit poorer performance compared to silhouette-based methods, potentially because pose information can be sparser than the silhouette, resulting in less available information. GaitBase and DeepGaitV2 were proposed to address the challenge of outdoor datasets and are more robust against complex covariates. Notably, DeepGaitV2 achieves an impressive 82% rank-1 accuracy on the outdoor dataset GREW; in contrast, its performance on CCGR falls considerably below that, reaching a mere 43%. This disparity may be due to the lack of individual-level diversity in the existing outdoor datasets.

| Methods | R-1hard | R-1easy | R-5hard | R-5easy |
| GEINet | 3.10 | 4.62 | 9.20 | 12.7 |
| GaitSet | 25.3 | 35.3 | 46.7 | 58.9 |
| GaitPart | 22.6 | 32.7 | 42.9 | 55.5 |
| GaitGL | 23.1 | 35.2 | 39.9 | 54.1 |
| CSTL | 7.25 | 11.8 | 13.79 | 20.1 |
| GaitBase | 31.3 | 43.8 | 51.3 | 64.4 |
| DeepGaitV2 | 42.5 | 55.2 | 63.2 | 75.2 |
| GaitGraph | 15.2 | 25.2 | 37.2 | 51.6 |
| GaitGraph2 | 0.26 | 0.27 | 1.4 | 1.41 |

Table 2: The accuracy of representative methods on CCGR.

Analysis of Parsing-based Gait Recognition

As shown in Table 3, the accuracy of ParsingGait is substantially improved. These findings effectively illustrate the three main advantages of parsing: feasibility, validity, and compatibility. By distinguishing between different body parts, parsing makes recognition more robust in the face of complex covariates. ParsingGait is as computationally efficient as its silhouette-based counterpart because our parsing data are consistent with the silhouette data structure.

| Backbone | R-1hard | R-1easy | R-5hard | R-5easy |
| GaitSet | 31.6 | 42.8 | 54.8 | 67 |
| GaitPart | 29.0 | 40.9 | 51.5 | 64.5 |
| GaitGL | 28.4 | 42.1 | 46.6 | 61.4 |
| CSTL | 27.9 | 40.7 | 47.1 | 61.5 |
| GaitBase | 43.2 | 56.9 | 63.7 | 76.0 |
| DeepGaitV2 | 52.7 | 67.2 | 74.7 | 87.7 |

Table 3: The accuracy of ParsingGait (ours) on CCGR.

Population and Individual-Level Diversity

We research the impact of covariate diversity by sampling and isolating various covariates. The specific sampling setup is provided in Table 4. The experiments are categorized into five groups: group A represents the absence of covariate diversity; B1, B2, and B3 demonstrate population-level diversity without individual-level diversity; and group C exhibits both population-level and individual-level diversity.

| Group | Similar | Sample Setup | NoC per Sbj | NoC in Sub-dataset |
| A | CASIA-B | NM, BG, CL, L1 | 3 | 3 |
| B1 | Gait3D/GREW | Random 8 Seqs per Sbj | Max 8 | 53 |
| B2 | Gait3D/GREW | Random 8 Seqs per Sbj | Max 8 | 53 |
| B3 | Gait3D/GREW | Random 8 Seqs per Sbj | Max 8 | 53 |
| C | Ours | All Seqs | 53 | 53 |

Table 4: Covariate sampling setup. L1, Seq, Sbj, and NoC refer to layer 1, sequence, subject, and number of covariates.

Based on the experimental data in Figure 8, from A to B1/2/3 the accuracy decreases by 18.6% on average, whereas from B1/2/3 to C it decreases by a further 25.1% on average. These findings indicate that relying solely on population-level diversity is insufficient to accurately represent the underlying challenge, and that individual-level diversity is also a significant challenge. In addition, the trend in Figure 8 is generally consistent with Figure 1 at the beginning of the paper, further strengthening the credibility of the experimental results.

Figure 8: Increasing population and individual diversity.

Impact of the Number of Covariates

We examine how the number of covariates impacts accuracy; the experimental outcomes are illustrated in Figure 9. The accuracy decreases substantially as we progressively increase the covariate number from 1 to 53. Furthermore, a troubling trend emerges: even when the number reaches 53, the decline in accuracy does not significantly decelerate. This observation may indicate that gait recognition faces even greater challenges in real-world scenarios.

Figure 9: Impact of the number of covariates.
Gallery: Normal 1
Type        Covariate          GaitBase  DeepGaitV2  ParsingGait
Carry       Book (BK)          65.7      75.3        85.5
            Bag (BG)           64.9      75.4        86.1
            Heavy Bag (HVBG)   60.0↓     72.3        84.2
            Box (BX)           61.5      71.6        83.0
            Heavy Box (HVBX)   58.7↓     69.7↓       81.9↓
            Trolley Case (TC)  64.1      73.0        83.4
            Umbrella (UB)      47.2↓     60.5↓       71.3↓
            Average            60.3      71.1        82.2
Cloth       Thick Coat (CL)    40.4      53.5        66.8
Road        Up Ramp (UTR)      60.3↓     69.5↓       80.9
            Down Ramp (DTR)    60.5↓     70.1↓       80.2
            Up Stair (UTS)     54.9↓     66.7↓       78.0↓
            Down Stair (DTS)   54.0↓     65.4↓       76.7↓
            Bumpy Road (BM)    63.3      71.4        82.0
            Curved Road (CV)   70.0      77.3        86.1
            Soft Road (SF)     66.0      73.2        83.7
            Average            61.3      70.5        79.3
Speed       Normal 1 (NM1)     76.6      83.5        91.3
            Fast (FA)          47.2↓     60.7↓       74.1↓
            Stationary (ST)    32.0↓     45.0↓       60.9↓
            Average            51.9      63.1        75.4
Walk Style  Normal 2 (NM2)     75.3      82.3        90.7
            Confident (CF)     64.9      74.8        83.9
            Freedom (FD)       57.1      68.1        79.2
            Multi-person (MP)  24.0↓     32.6↓       39.4↓
            Average            55.3      64.4        73.3

Table 5: Single-Covariate Evaluation: R-1easy accuracy (%) excluding identical-view cases. ↓ marks results below the sub-Average; bold marks the SOTA performance.

Clothing is still a big challenge. In addition, carrying and road conditions also have a notable negative impact on accuracy.

Mixed-Covariate Evaluation: As shown in Table 6, mixed covariates impact accuracy even more severely, with a marked decrease as the number of mixed covariates increases. For example, along "Bag → BG-TC → BG-TC-CL → BG-TC-CL-ST", accuracy declines step by step. However, mixed covariates are a challenge that must be addressed, because ideal conditions with only a single covariate tend to be rare in real life.

Gallery: Normal 1
Type         Covariate         GaitBase  DeepGaitV2  ParsingGait
Two Mixed    CL-UB             25.2↓     37.8↓       46.9↓
             HVBX-BG           52.1      64.7        78.3
             BG-TC             58.1      69.3        81.3
             SF-CL             36.1↓     48.0↓       62.8↓
             UTR-BX            51.0      62.0        75.4
             DTR-BK            55.1      66.0        77.4
             DTS-HVBX          42.6↓     56.1↓       69.8↓
             UTS-BG            46.8      60.9        74.5
             BM-CL             35.2↓     46.3        61.8
             CV-HVBX           61.0      70.8        82.0
             CL-CF             39.2↓     52.7↓       65.6↓
             Average           45.7      57.7        70.5
Three Mixed  CL-UB-BG          23.4↓     36.1↓       44.9↓
             BX-BG-CL          35.1↓     48.8        60.7
             BG-TC-CL          34.3↓     48.5        63.0
             SF-UB-BG          36.4↓     49.4        62.5
             UTR-HVBX-CL       31.8↓     43.1↓       55.3↓
             DTR-BK-BG         49.2      61.7        74.9
             DTS-HVBX-CL       26.4↓     38.0↓       49.1↓
             UTS-BG-CL         25.1↓     37.7↓       52.5↓
             BM-CL-BG          33.0↓     44.8↓       59.6↓
             CV-BX-BG          58.8      69.6        80.8
             UB-BG-FA          28.0↓     41.0↓       52.8↓
             Average           34.7      47.1        59.7
Four Mixed   CL-UB-BG-FA       16.2↓     27.6↓       35.7↓
             BM-CL-BG-BX       32.2      43.5        56.1
             BG-TC-CL-CV       38.0      51.2        66.9
             DTR-BK-BG-CL      32.2      44.9        56.9
             DTS-BX-CL-BG      25.6      37.3        48.9
             SF-UB-BG-CL       20.6↓     31.8↓       41.9↓
             BG-TC-CL-ST       11.7↓     18.4↓       29.4↓
             UTS-UB-BG-CL      15.8↓     26.1↓       36.4↓
             Average           24.0      35.1        46.5
Five Mixed   BG-TC-CL-CV-UB    34.1      35.9        47.4
             UTR-BG-CL-BX-CV   31.3      45.2        58.3

Table 6: Mixed-Covariate Evaluation: R-1easy accuracy (%) excluding identical-view cases. We use "-" to connect the mixed covariates; Tab. 5 presents the dictionary of abbreviations and their corresponding full spellings. ↓ marks results below the sub-Average; bold marks the SOTA performance.

Cross-View Evaluation: As shown in Table 7, the existing algorithms perform well when only the view varies. The current challenge with views is how to address the high-pitch-angle case. Encouragingly, ParsingGait demonstrates a distinct improvement in recognizing overhead views.
Cross-view Evaluation
Pitch Angle  Probe View  GaitBase  DeepGaitV2  ParsingGait
5°           0.0°        80.1      85.7        90.6
             22.5°       84.7      89.5        93.1
             45.0°       83.7      89.1        93.9
             67.5°       79.3      85.7        93.6
             90.0°       75.7      83.7        93.2
             112.5°      76.9      84.6        93.2
             135.0°      81.6      87.1        93.7
             157.5°      83.8      88.6        92.7
             180.0°      77.4      83.3        89.9
             Average     80.4      86.4        92.6
30°          0.0°        79.6      85.2        92.0
             22.5°       85.0      89.8        93.6
             45.0°       86.0      90.9        94.9
             67.5°       82.7      88.8        95.0
             90.0°       78.9      86.4        94.6
             112.5°      79.1      86.3        94.5
             135.0°      82.8      88.5        94.5
             157.5°      84.1      89.9        93.7
             180.0°      79.5      85.3        91.8
             Average     82.0      87.9        93.9
55°          0.0°        74.8      81.8        90.6
             22.5°       81.5      86.7        93.3
             45.0°       83.9      88.9        95.0
             67.5°       82.2      88.4        95.1
             90.0°       63.6      76.3        92.0
             112.5°      77.3      84.5        93.6
             135.0°      81.2      87.4        93.9
             157.5°      80.8      86.3        93.2
             180.0°      75.9      83.2        91.3
             Average     77.9      84.8        93.1
75°          0.0°        64.4      74.8        86.0
             45.0°       78.7      85.2        92.7
             90.0°       40.8      60.9        87.5
             135.0°      73.2      80.5        90.6
             180.0°      62.5      74.0        86.2
             Average     63.9      75.1        88.6
Overhead     -           2.0       8.4         32.0

Table 7: Cross-View Evaluation: Rank-1 accuracy (%) excluding identical-view cases.

Conclusion
This paper introduces CCGR, a well-labeled dataset that provides diversity at both the population and individual levels. As gait recognition on many public gait datasets is close to saturation, future works can explore how gait is affected by covariates and how to design robust gait recognition.

Acknowledgements
This work was supported by the Natural Science Foundation of Hunan Province (No.2023JJ30697), the Changsha Natural Science Foundation (No.kq2208286) and the National Natural Science Foundation of China (No.61502537). This work was also supported in part by the National Key Research and Development Program of China under Grant (No.61976144) and in part by the Shenzhen International Research Cooperation Project under Grant (No.GJHZ20220913142611021).

References
Altab Hossain, M.; Makihara, Y.; Wang, J.; and Yagi, Y. 2010. Clothing-invariant gait identification using part-based clothing categorization and adaptive weight control. PR, 43(6): 2281–2291.
An, W.; Yu, S.; Makihara, Y.; Wu, X.; Xu, C.; Yu, Y.; Liao, R.; and Yagi, Y. 2020. Performance Evaluation of Model-based Gait on Multi-view Very Large Population Database with Pose Sequences. IEEE Trans. on Biometrics, Behavior, and Identity Science.
Cao, Z.; Simon, T.; Wei, S.-E.; and Sheikh, Y. 2017. Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. In CVPR.
Chao, H.; He, Y.; Zhang, J.; and Feng, J. 2019. GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition. In AAAI.
Ding, T.; Zhao, Q.; Liu, F.; Zhang, H.; and Peng, P. 2022. A Dataset and Method for Gait Recognition with Unmanned Aerial Vehicles. In ICME.
Fan, C.; Hou, S.; Huang, Y.; and Yu, S. 2023a. Exploring Deep Models for Practical Gait Recognition. ArXiv, abs/2303.03301.
Fan, C.; Liang, J.; Shen, C.; Hou, S.; Huang, Y.; and Yu, S. 2023b. OpenGait: Revisiting Gait Recognition Towards Better Practicality. In CVPR, 9707–9716.
Fan, C.; Peng, Y.; Cao, C.; Liu, X.; Hou, S.; Chi, J.; Huang, Y.; Li, Q.; and He, Z. 2020. GaitPart: Temporal Part-Based Model for Gait Recognition. In CVPR.
Fang, H.-S.; Xie, S.; Tai, Y.-W.; and Lu, C. 2017. RMPE: Regional Multi-Person Pose Estimation. In ICCV.
Gong, K.; Liang, X.; Li, Y.; Chen, Y.; Yang, M.; and Lin, L. 2018. Instance-Level Human Parsing via Part Grouping Network. In ECCV, 805–822. ISBN 978-3-030-01225-0.
Gross, R.; and Shi, J. 2001. The CMU Motion of Body (MoBo) Database. Monumenta Nipponica.
Hofmann, M.; Geiger, J.; Bachmann, S.; Schuller, B.; and Rigoll, G. 2014. The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits. JVCIR, 25(1): 195–206.
Huang, X.; Zhu, D.; Wang, H.; Wang, X.; Yang, B.; He, B.; Liu, W.; and Feng, B. 2021. Context-Sensitive Temporal Feature Learning for Gait Recognition. In ICCV, 12909–12918.
Iwama, H.; Okumura, M.; Makihara, Y.; and Yagi, Y. 2012. The OU-ISIR Gait Database Comprising the Large Population Dataset and Performance Evaluation of Gait Recognition. IEEE Trans. on Information Forensics and Security, 7(5): 1511–1521.
Li, W.; Hou, S.; Zhang, C.; Cao, C.; Liu, X.; Huang, Y.; and Zhao, Y. 2023. An In-Depth Exploration of Person Re-Identification and Gait Recognition in Cloth-Changing Conditions. In CVPR, 13824–13833.
Li, X.; Makihara, Y.; Xu, C.; and Yagi, Y. 2022. Multi-View Large Population Gait Database With Human Meshes and Its Performance Evaluation. IEEE Transactions on Biometrics, Behavior, and Identity Science, 4(2): 234–248.
Liang, X.; Gong, K.; Shen, X.; and Lin, L. 2018. Look into Person: Joint Body Parsing & Pose Estimation Network and a New Benchmark. IEEE TPAMI.
Lin, B.; Zhang, S.; and Yu, X. 2021. Gait Recognition via Effective Global-Local Feature Representation and Local Temporal Aggregation. In ICCV, 14648–14656.
Makihara, Y.; Mannami, H.; and Yagi, Y. 2011. Gait Analysis of Gender and Age Using a Large-Scale Multi-view Gait Database. In Kimmel, R.; Klette, R.; and Sugimoto, A., eds., ACCV, 440–451. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-19309-5.
Mansur, A.; Makihara, Y.; Aqmar, R.; and Yagi, Y. 2014. Gait Recognition under Speed Transition. In CVPR.
Mu, Z.; Castro, F. M.; Marín-Jiménez, M. J.; Guil, N.; ran Li, Y.; and Yu, S. 2021. ReSGait: The Real-Scene Gait Dataset. In IJCB 2021.
Sarkar, S.; Phillips, P.; Liu, Z.; Vega, I.; Grother, P.; and Bowyer, K. 2005. The humanID gait challenge problem: data sets, performance, and analysis. IEEE TPAMI, 27(2): 162–177.
Shen, C.; Fan, C.; Wu, W.; Wang, R.; Huang, G. Q.; and Yu, S. 2023. LidarGait: Benchmarking 3D Gait Recognition With Point Clouds. In CVPR, 1054–1063.
Shiraga, K.; Makihara, Y.; Muramatsu, D.; Echigo, T.; and Yagi, Y. 2016. GEINet: View-invariant gait recognition using a convolutional neural network. In ICB, 1–8.
Shutler, J. D.; Grant, M. G.; Nixon, M. S.; and Carter, J. N. 2004. On a large sequence-based human gait database. In Applications and Science in Soft Computing.
Song, C.; Huang, Y.; Wang, W.; and Wang, L. 2022. CASIA-E: a large comprehensive dataset for gait recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3): 2801–2815.
Sun, K.; Xiao, B.; Liu, D.; and Wang, J. 2019. Deep High-Resolution Representation Learning for Human Pose Estimation. In CVPR.
Takemura, N.; Makihara, Y.; Muramatsu, D.; Echigo, T.; and Yagi, Y. 2018. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Transactions on Computer Vision and Applications, 10.
Tan, D.; Huang, K.; Yu, S.; and Tan, T. 2006. Efficient Night Gait Recognition Based on Template Matching. In ICPR, volume 3, 1000–1003.
Teepe, T.; Gilg, J.; Herzog, F.; Hörmann, S.; and Rigoll, G. 2022. Towards a Deeper Understanding of Skeleton-Based Gait Recognition. In CVPRW.
Teepe, T.; Khan, A.; Gilg, J.; Herzog, F.; Hörmann, S.; and Rigoll, G. 2021. Gaitgraph: Graph Convolutional Network for Skeleton-Based Gait Recognition. In 2021 IEEE International Conference on Image Processing (ICIP), 2314–2318.
Uddin, M. Z.; Ngo, T. T.; Makihara, Y.; Takemura, N.; Li, X.; Muramatsu, D.; and Yagi, Y. 2018. The OU-ISIR Large Population Gait Database with real-life carried object and its performance evaluation. IPSJ Transactions on Computer Vision and Applications, 10(1): 5.
Xia, F.; Wang, P.; Chen, X.; and Yuille, A. L. 2017. Joint Multi-Person Pose Estimation and Semantic Part Segmentation. In CVPR.
Xu, C.; Makihara, Y.; Ogi, G.; Li, X.; Yagi, Y.; and Lu, J. 2017. The OU-ISIR Gait Database Comprising the Large Population Dataset with Age and Performance Evaluation of Age Estimation. IPSJ Trans. on Computer Vision and Applications, 9(24): 1–14.
Yang, L.; Song, Q.; Wang, Z.; Liu, Z.; Xu, S.; and Li, Z. 2021. Quality-Aware Network for Human Parsing. arXiv preprint arXiv:2103.05997.
Yu, S.; Tan, D.; and Tan, T. 2006. A Framework for Evaluating the Effect of View Angle, Clothing and Carrying Condition on Gait Recognition. In ICPR, volume 4, 441–444.
Zhao, J.; Li, J.; Cheng, Y.; Sim, T.; Yan, S.; and Feng, J. 2018. Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing. In ACM MM, 792–800. ISBN 9781450356657.
Zheng, J.; Liu, X.; Liu, W.; He, L.; Yan, C.; and Mei, T. 2022. Gait Recognition in the Wild With Dense 3D Representations and a Benchmark. In CVPR, 20228–20237.
Zhu, Z.; Guo, X.; Yang, T.; Huang, J.; Deng, J.; Huang, G.; Du, D.; Lu, J.; and Zhou, J. 2021. Gait Recognition in the Wild: A Benchmark. In ICCV, 14789–14799.
Towards Efficient Diffusion-Based Image Editing with Instant Attention Masks
Siyu Zou1*†, Jiji Tang2†, Yiyi Zhou1, Jing He1, Chaoyi Zhao2, Rongsheng Zhang2, Zhipeng Hu2, Xiaoshuai Sun1‡
1 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China
2 Fuxi AI Lab, NetEase Inc., Hangzhou, China
[email protected], [email protected], [email protected], [email protected], {zhaochaoyi, zhangrongsheng, zphu}@corp.netease.com, [email protected]
*This work was done during her internship at Fuxi AI Lab. †These authors contributed equally. ‡Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Diffusion-based Image Editing (DIE) is an emerging research hot-spot, which often applies a semantic mask to control the target area for diffusion-based editing. However, most existing solutions obtain these masks via manual operations or off-line processing, greatly reducing their efficiency. In this paper, we propose a novel and efficient image editing method for Text-to-Image (T2I) diffusion models, termed Instant Diffusion Editing (InstDiffEdit). In particular, InstDiffEdit aims to employ the cross-modal attention ability of existing diffusion models to achieve instant mask guidance during the diffusion steps. To reduce the noise of attention maps and achieve full automation, we equip InstDiffEdit with a training-free refinement scheme to adaptively aggregate the attention distributions for automatic yet accurate mask generation. Meanwhile, to supplement the existing evaluations of DIE, we propose a new benchmark called Editing-Mask to examine the mask accuracy and local editing ability of existing methods. To validate InstDiffEdit, we also conduct extensive experiments on ImageNet and Imagen, and compare it with a number of SOTA methods. The experimental results show that InstDiffEdit not only outperforms the SOTA methods in both image quality and editing results, but also has a much faster inference speed, i.e., 5 to 6 times faster. Our code is available at https://github.com/xiaotianqing/InstDiffEdit

Introduction
For a year or two, diffusion models have gradually become the mainstream paradigm in conditional image generation (Saharia et al. 2022; Ramesh et al. 2022; Rombach et al. 2022; Balaji et al. 2022; Nichol et al. 2021). Compared with Generative Adversarial Networks (GANs) (Karras, Laine, and Aila 2019; Karras et al. 2020; Xia et al. 2021), diffusion models yield a completely different generation pipeline, which can produce more diverse and interpretable generations. The great success of diffusion models also sparks researchers to apply them to the task of semantic image editing (Meng et al. 2021; Kawar et al. 2022).

Figure 1: Illustration of existing diffusion-based image editing methods, where a manually or off-line generated mask is often used to control the editing area.

Semantic image editing (Zhan et al. 2021) aims to modify the target instance of the given image according to the input text description, while the rest of the image needs to be preserved as much as possible. Although existing diffusion models (Saharia et al. 2022; Ramesh et al. 2022; Rombach et al. 2022) excel in generation quality and diversity for text-to-image generation, they still lack precise control.
Therefore, recent diffusion-based editing methods introduce additional information to better control the image manipulation, such as a reference image (Meng et al. 2021) or a semantic mask (Avrahami, Lischinski, and Fried 2022). Among these solutions, padding a semantic mask is the most effective way to achieve accurate image editing, which can precisely restrict the target image area and achieve editing via text-to-image diffusion (Avrahami, Lischinski, and Fried 2022), as shown in Fig. 1. However, the mask generation often requires manual intervention (Avrahami, Lischinski, and Fried 2022; Couairon et al. 2022b), greatly limiting the efficiency of these methods in practical use.

Recent advances have aspired to automate the editing process by reducing the manual effort or including the mask generation in diffusion models. For instance, PtP (Hertz et al. 2022) proposes a semi-automated method, which can directly obtain the mask by manually setting some parameters. More recently, DiffEdit (Couairon et al. 2022b) proposes a fully automatic method, which can embed the mask generation into the diffusion framework, but its mask generation and image editing are still time-consuming. Overall, existing solutions still exhibit obvious shortcomings in terms of either manual intervention or computation efficiency.

In this paper, we propose a novel yet efficient image editing method for diffusion models, termed Instant Diffusion Editing (InstDiffEdit). The feasibility of InstDiffEdit is attributed to the superior cross-modal alignment of existing diffusion models. In advanced diffusion models like Stable Diffusion (Rombach et al. 2022), an effective multimodal space has been well established by learning from numerous image-text pairs, and these models also involve excellent cross-attention mappings. In this case, we can leverage the hidden attention maps in diffusion steps to facilitate instant mask generation. However, these hidden attention maps are intractable to use directly, as they are often full of noise. For instance, the semantic attention of the start token is much noisier than that of "cat" in Fig. 2. Thus, we also equip InstDiffEdit with a learning-free mask refinement scheme, which can adaptively aggregate the attention distributions according to the editing instruction. Notably, the proposed InstDiffEdit is a plug-and-play component for most diffusion models, and it is also training-free.

To validate InstDiffEdit, we apply it to Stable Diffusion v1.4 (Rombach et al. 2022), and conduct extensive experiments on two benchmark datasets, namely ImageNet (Deng et al. 2009) and Imagen (Saharia et al. 2022). Meanwhile, to better measure the local editing ability and mask accuracy of existing methods, we also propose a composite benchmark called Editing-Mask as a supplementary evaluation for DIE. The experimental results on ImageNet and Imagen show that, compared with existing methods, InstDiffEdit can achieve the best trade-off between computation efficiency and generation quality for semantic image editing. For instance, compared with the recently proposed DiffEdit, our method can obtain competitive editing results while improving the inference speed by 5 to 6 times. The results on Editing-Mask confirm the superiority of our method in background preservation. Furthermore, we also provide sufficient visualizations to examine the ability of InstDiffEdit.
Conclusively, the contribution of this paper is three-fold:
• We propose a novel and efficient image editing method for diffusion-based models, termed InstDiffEdit, which obtains instant mask guidance by exploiting the cross-modal attention in diffusion models.
• As a plug-and-play component, InstDiffEdit can be applied to most diffusion models for semantic image editing without further training or human intervention, and its performance is also SOTA.
• We propose a new image editing benchmark, termed Editing-Mask, containing 200 images with human-labeled masks, which can be used for the evaluation of mask accuracy and local editing ability.

Related Work
Text-to-Image Diffusion
In the past few years, a lot of diffusion-based methods (Rombach et al. 2022; Ramesh et al. 2022; Saharia et al. 2022) have been proposed, which also demonstrate superior performance in terms of image quality and diversity compared to GANs (Karras et al. 2020; Xia et al. 2021). Some recent works (Avrahami, Lischinski, and Fried 2022) also explore the combination of diffusion models with Contrastive Language-Image Pre-Training (CLIP) (Radford et al. 2021). For example, Stable Diffusion (Rombach et al. 2022) leverages CLIP's text encoder to guide the image generation process. By incorporating cross-attention between text and noisy images, the model generates images that are semantically aligned with the textual description.

Figure 2: The visualization of the attention maps in Stable Diffusion. The target word of "cat" has the best attention map, but it needs to be manually identified during applications. The start token is relevant but still very noisy.

Semantic Image Editing
A plethora of GAN-based semantic image editing approaches (Goodfellow et al. 2014; Xu et al. 2018; Xia et al. 2021) have been proposed with remarkable outcomes. The emergence of large-scale GAN networks, such as the StyleGAN family (Karras, Laine, and Aila 2019; Karras et al. 2020, 2021), significantly enhances the editing capabilities. Meanwhile, the Transformer (Vaswani et al. 2017) has demonstrated remarkable performance in text-driven image editing tasks. ManiTrans (Wang et al. 2022) uses Transformers to predict the content of covered regions, which enables semantic editing to be performed only on a certain image region. Recently, with the development of diffusion models, practitioners also explore their application in semantic image editing. SDEdit (Meng et al. 2021) accomplishes this by retaining a portion of the reference image information during the diffusion process. CycleDiffusion (Wu and De la Torre 2022) proposes an inversion model to get a better latent from the input image, thus improving the editing quality. PtP (Hertz et al. 2022) and PnP (Tumanyan et al. 2022) operate editing via modifying attention maps in diffusion models. More recently, to prevent unbounded edits from global image editing, some methods resort to local editing techniques. For example, Blended Diffusion (Avrahami, Lischinski, and Fried 2022) and RePaint (Lugmayr et al. 2022) implement local editing on real images with manual masks. However, the acquisition of manual masks is time-consuming and labor-intensive, and hinders the development of automated semantic editing. Therefore, some methods have begun to explore automated mask generation.
DiffEdit (Couairon et al. 2022b) is better suited to the requirements of automated editing, as it obtains the mask by contrasting variations in model predictions under different text prompts. However, because of the stochastic randomness of the diffusion model, DiffEdit requires multiple iterations to stabilize the final output, which leads to inefficiency in terms of time.

Figure 3: The framework of the Instant Diffusion Editing (InstDiffEdit). InstDiffEdit involves instant mask generation at each denoising step based on the attention maps. This mask can provide instant guidance for the image denoising. The left part (a) illustrates the noise process, and (b) depicts the generation of the semantic mask at each step, based on which the diffusion-based image editing is performed (c). Lastly, the inpainting model is further applied to accomplish the generation (d).

Preliminary
Latent Diffusion Models
Traditional diffusion models (Ho, Jain, and Abbeel 2020) typically operate the diffusion process on the high-resolution image space, which significantly limits training and generation speed. In order to achieve more efficient training and generation, Latent Diffusion Models (LDMs) (Rombach et al. 2022) perform the diffusion process on the latent space rather than the resolution space, thereby improving the efficiency of training and inference. First of all, LDMs leverage an autoencoder framework E_I, such as a VAE (Kingma and Welling 2013), to map the image I to a low-dimensional latent space x_0 and generate noisy latents x_t through the diffusion forward process:

x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1-\alpha_t}\, \epsilon_t, \quad x_0 = E_I(I),   (1)

where t denotes the time-step, which is determined by the noise strength r. The noise term ϵ_t is sampled from a standard normal distribution, and α_t is a decreasing schedule of diffusion coefficients that controls the strength of the noise at each step. Subsequently, the text sequence S is mapped to a feature space using a text encoder E_T such as CLIP (Radford et al. 2021), recorded as C_edit = E_T(S). The denoising process is operated on the latent space, denoted as:

x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\alpha_t}}\, \epsilon_\theta(x_t, c, t)\right) + \sigma_t z.   (2)

Finally, a decoder D_I, which corresponds to the encoder E_I, is employed to reconstruct the image from the latent dimension with I_rec = D_I(x_0).

Cross-Attention in LDMs
In LDMs, text-to-image generation is accomplished by modifying the latent representations using cross-attention alignments. Specifically, each text S, which consists of N tokens, is transformed by the pre-trained text encoder CLIP_T into the text features c = {c_1, c_2, ..., c_N}. Similarly, the input image is transformed into the image latent x_0, and the noisy image latent x_t is obtained according to Eq. 1. Subsequently, the text features and image latent are projected by three trainable linear layers, denoted as f_Q, f_V, and f_K. Next, the spatial attention maps A are generated for each text token by:

A = \mathrm{Softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right), \quad Q = f_Q(z_t),\ K = f_K(c),\ V = f_V(c),   (3)

where d_k denotes the feature dimension of K. The attention maps A are then combined with the value matrix V to obtain the final output of the cross-attention layer with V · A.
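For concreteness, Eqs. (1) and (3) reduce to a few tensor operations; a minimal PyTorch sketch, assuming α_t is given as a tensor and f_Q, f_K are the pre-trained projection layers (names are placeholders, not the authors' code):

```python
import torch
import torch.nn.functional as F

def add_noise(x0, alpha_t):
    """Eq. (1): x_t = sqrt(alpha_t) * x_0 + sqrt(1 - alpha_t) * eps."""
    eps = torch.randn_like(x0)
    return alpha_t.sqrt() * x0 + (1.0 - alpha_t).sqrt() * eps

def attention_maps(z_t, c, f_Q, f_K):
    """Eq. (3): per-token spatial attention between image latent and text."""
    Q = f_Q(z_t)                    # (B, HW, d_k), flattened spatial grid
    K = f_K(c)                      # (B, N, d_k), one row per text token
    d_k = Q.shape[-1]
    return F.softmax(Q @ K.transpose(-2, -1) / d_k**0.5, dim=-1)  # (B, HW, N)
```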
Generally, the attention maps in Stable Diffusion can indicate the correspondence between text words and image regions. However, due to the noise contained in the image latent, it is challenging to directly obtain the desired target instance from the attention maps, and these hidden attention maps are still noisy, as shown in Fig. 2.

Methodology
Overview
In this paper, we propose a novel and efficient image editing method based on text-to-image diffusion models, termed Instant Diffusion Editing (InstDiffEdit), whose structure is illustrated in Fig. 3. Concretely, similar to existing methods (Avrahami, Lischinski, and Fried 2022), we aim to achieve the target image editing by padding a semantic mask onto the input image, based on which the diffusion steps are conducted to achieve the target edit. This process can be defined by:

x_t = M \cdot x'_t + (1 - M) \cdot y_t,   (4)

where x'_t and y_t denote the predicted noisy latent and the latent representation of the noisy image at step t, and M is the mask. Then, we can get the noisy latent x_t for editing. This mask-based editing is supported by recent advances in diffusion models (Avrahami, Lischinski, and Fried 2022), which restrict the editing area using the mask and replace the non-masked area of the predicted image with the noise image at the current timestep. This allows mask-based methods to preserve the background in the non-masked area while editing.

However, the generation of this semantic mask often requires manual effort (Hertz et al. 2022; Patashnik et al. 2023) or off-line processing (Avrahami, Lischinski, and Fried 2022; Lugmayr et al. 2022). In this case, InstDiffEdit resorts to the attention maps in LDMs for instant mask generation during diffusion. As shown in Fig. 2, the attention maps in LDMs capture the semantic correspondence between the image and text well. However, some problems remain. To specify the attention map of the editing target, e.g., "cat" in Fig. 2, a method still requires manual effort, since we do not know the length and content of the user's instruction during application. And directly using the map of the start token as a trade-off is still too noisy for effective editing. In this case, we equip InstDiffEdit with an automatic refinement scheme for mask generation. As shown in Fig. 3, given an input image latent feature x_t and a text feature C_edit, we can get the hidden attention maps A in the denoising process from Eq. 3. Then, we propose a parameter-free attention mask generation module G(·) to obtain the semantic mask M_t = G(x_t, C_edit). Later, with this instant mask, we can directly perform target image editing during the diffusion steps, which can be re-written as:

x_{t-1} = M_t \cdot \epsilon_\theta(x_t, t, C_{edit}) + (1 - M_t) \cdot y_t,   (5)

where M_t is the mask computed by the attention mask module at timestep t and ϵ_θ denotes the diffusion model. Lastly, in order to achieve better generation results, we adopt the strategy of using the mask generated in the last denoising step as the final mask, and generate the final editing results via inpainting in LDMs. In the next subsection, we give the detailed definition of the proposed attention mask generation module.
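Stripped of notation, the mask-guided step in Eqs. (4)-(5) is a per-pixel blend of two latents; a minimal sketch, where eps_theta stands in for the model's full denoising-step output (all names are placeholders):

```python
import torch

@torch.no_grad()
def masked_denoise_step(x_t, y_t, M_t, eps_theta, t, c_edit):
    """Eqs. (4)-(5): keep the edit inside the mask, the original elsewhere.

    x_t : current edited latent
    y_t : noisy latent of the input image at step t (from Eq. (1))
    M_t : instant mask in [0, 1], broadcastable to the latent shape
    """
    x_pred = eps_theta(x_t, t, c_edit)         # predicted latent for step t-1
    return M_t * x_pred + (1.0 - M_t) * y_t    # background preserved via y_t
```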
Instant Attention Mask Generation
In InstDiffEdit, we use the attention maps generated in the denoising process as the information source for mask generation. However, the input text often consists of multiple tokens, and the attention information of each token has its own focus and varies vastly with the sentence length and word composition. Therefore, it is difficult for the model to automatically locate the attention results of the target words. In practice, we use the attention map of the start token as the base information for further attention mask refinement. To explain, in a well pre-trained T2I diffusion model, the start token often expresses the semantics of the whole sentence. As shown in Fig. 2, the focus region of the attention corresponding to the start token overlaps highly with the edit region of the semantic description. However, the start token contains the whole sentence as well as part of the original image information, so its attention distribution is still messy. In this case, we adopt the idea of key information extraction to eliminate the noisy information and obtain the content most relevant to the semantic information.

Figure 4: The proposed instant mask generation. An indexing process is first performed based on the semantic similarities between the start token and the other ones (upper left). Refinement is then operated between the index and the remaining ones (lower left). Finally, the mask is obtained via the adaptive aggregation of all attention maps.

Assuming a noise strength of r, the denoising process starts at time-step τ (τ = r · T, T = 1000), and the corresponding attention maps A^τ can be obtained using Eq. 3. Specifically, we leverage the attention map of the start token A^τ_start ∈ R^{16×16} as the reference information, and subsequently retrieve the attention A^τ_index ∈ R^{16×16} by computing all similarities with the reference map. This enables us to identify the location of the object that requires modification:

A^\tau_{index} = \arg\max_{i \in [1,N]} \mathrm{cosine}(A^\tau_i, A^\tau_{start}),   (6)

where cosine(·) denotes the semantic similarity and N is the number of tokens in the sentence. To obtain more accurate mask information, we further aggregate the concept-related information and eliminate the irrelevant information. Specifically, we compute the similarities between the obtained A^τ_index and the attention maps of the text tokens to obtain a similarity vector S ∈ R^{1×N}:

S_i = \mathrm{cosine}(A^\tau_i, A^\tau_{index}), \quad i \in [1, N].   (7)

In principle, the similarity of the attention maps at each token is closely related to the semantic similarity of the sentence: the more an attention map is associated with the core semantics, the larger its similarity, and vice versa. Afterwards, we can get a position vector to weight the attention information by filtering the similarity vector with two thresholds:

P_i = \begin{cases} 1 & S_i > \gamma_1, \\ -1 & S_i < \gamma_2, \\ 0 & \text{otherwise}, \end{cases} \quad i \in [1, N].   (8)

Computing semantic similarities at each step of the denoising process can be time-consuming due to the large dimensionality of the attention maps. To mitigate this issue, we propose to compute the position vector P only at the first step τ of the denoising process. Finally, we obtain the refined attention map A^ref_t from the attention maps A_t and P at timestep t ∈ {τ, ..., 0} (A^ref_t = P · A_t), which is then processed using Gaussian filtering and binarized with a threshold φ to obtain the final mask M_t:

M_t(x, y) = \begin{cases} 1 & A^{ref}_t(x, y) > \varphi, \\ 0 & \text{otherwise}, \end{cases}   (9)

where (x, y) refers to a point in the latent space of the image. Notably, the above instant attention mask generation module is training-free, and thus it can be directly plugged into most existing T2I diffusion models. Meanwhile, through the refinement processing, the obtained mask is much superior to the one before refining.
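Putting Eqs. (6)-(9) together, the whole module is a handful of cosine similarities and a threshold; a minimal sketch, assuming A stacks the per-token 16×16 maps with the start token at index 0. Two points are our assumptions rather than the paper's specification: A_ref = P · A_t is read as a P-weighted sum over tokens, and the Gaussian filtering is approximated with a small box blur:

```python
import torch
import torch.nn.functional as F

def cos(a, b):
    """Cosine similarity between two flattened attention maps."""
    return F.cosine_similarity(a.flatten(), b.flatten(), dim=0)

def instant_mask(A, gamma1=0.9, gamma2=0.6, phi=0.2):
    """A: (N, 16, 16) attention maps, A[0] being the start token (Eqs. 6-9)."""
    n = A.shape[0]
    sims = torch.stack([cos(A[i], A[0]) for i in range(1, n)])
    idx = 1 + int(sims.argmax())                               # Eq. (6)
    S = torch.stack([cos(A[i], A[idx]) for i in range(n)])     # Eq. (7)
    P = torch.zeros_like(S)                                    # Eq. (8)
    P[S > gamma1], P[S < gamma2] = 1.0, -1.0
    A_ref = (P[:, None, None] * A).sum(0)                      # weighted merge
    A_ref = F.avg_pool2d(A_ref[None, None], 3, 1, 1)[0, 0]     # blur ~ Gaussian
    return (A_ref > phi).float()                               # Eq. (9)
```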
Semantic Editing via Mask
Through the mask generation module, we obtain a mask at each step of the image denoising process. Thus, by blending with the mask, guidance can be provided to the denoising via Eq. 4. However, since all the information in the masked area is essentially discarded, the resulting image often has local semantic consistency but does not consider global semantics, leading to artifacts. Additionally, when the noise level is low, some editing operations cannot be achieved, such as color modification. Thus, we also equip InstDiffEdit with an inpainting-based method for semantic image editing. The inpainting method (Rombach et al. 2022) initializes the information in the masked area with completely random noise and considers global information during generation, thus eliminating the artifacts and editing failures caused by the original image information. Nevertheless, the performance of inpainting is highly dependent on the accuracy of the mask. Therefore, we combine the advantages of the two methods by using attention maps to generate the mask in the denoising process, thereby guiding image generation and obtaining a more accurate mask during denoising. Finally, we apply the inpainting method with the mask generated in the last step of denoising to generate an image that is artifact-free and more consistent with the remaining information in the original image. Notably, the combination of the two mask editing methods only slightly increases the computation cost of semantic image editing.
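The final inpainting pass can be reproduced with off-the-shelf tooling; a sketch using the diffusers library's Stable Diffusion inpainting pipeline as an illustration (the paper uses Stable Diffusion's inpainting mode, but this exact API, checkpoint, and the file names are our assumptions):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Hypothetical inputs: the cover image and the final instant mask, both 512x512.
image = Image.open("input.png").resize((512, 512))
final_mask = Image.open("mask.png").resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# The mask from the last denoising step restricts where the edit text applies.
edited = pipe(prompt="a photo of a Persian cat",
              image=image, mask_image=final_mask).images[0]
edited.save("edited.png")
```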
Experiments
Experiment Setting
Datasets: We use ImageNet, Imagen, and Editing-Mask to evaluate the performance on the semantic editing task.
• ImageNet: Following the evaluation of FlexIT (Couairon et al. 2022a), a total of 1092 images in ImageNet (Deng et al. 2009) are included, covering 273 categories. For each image, the edit text is another similar category.
• Imagen: We construct an evaluation dataset for semantic editing by utilizing the generations of the Imagen (Saharia et al. 2022) model. Specifically, we randomly select a short text that is not in the input text as the edit text, such as replacing "British shorthair cat" with "Shiba Inu dog", resulting in a dataset of 360 paired samples.
• Editing-Mask: A new dataset, which comprises 200 images randomly selected from Imagen and ImageNet. Each sample includes an image, input text, edit text, and a human-labelled mask that corresponds to the semantics of the edit text. Our proposed dataset enables direct evaluation of the performance of editing tasks, particularly in regions where editing is necessary.

Metrics: We evaluate the performance of editing methods in terms of time efficiency and generation quality. Specifically, we measure the average editing time of an image at a resolution of 512 to assess the time consumption of each method. Additionally, we use the Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al. 2018) metric to quantify the difference between the generated image and the original image, which reflects the degree of modification made by the editing method. Furthermore, we employ the Classwise Simplified Fréchet Inception Distance (CSFID) (Couairon et al. 2022a) metric, a class-wise FID metric that measures the distance between generated and original images. We also use CLIPScore (Hessel et al. 2021) to measure the semantic similarity between the edit texts and the generated images. Note that all of these metrics evaluate the generated image quality rather than the editing performance. Therefore, on our proposed human-labeled mask dataset, we use Intersection over Union (IOU) to assess the quality of the generated masks, and Cm and Cnon to represent the modifications of the image in the mask and non-mask areas. The metrics on Editing-Mask provide a more direct evaluation of editing performance.

Implementation: The framework of InstDiffEdit is based on Stable Diffusion v1.4. We use 50 steps of the LDMScheduler sampler with a scale of 7.5, and set the noise strength to r = 0.5 and the binarization threshold to φ = 0.2; the thresholds for attention refinement defined in Eq. 8 are 0.9 and 0.6 by default, respectively. We maintain n = 3 rounds of denoising on the input image in parallel throughout the entire denoising process. Finally, we use the inpainting mode in Stable Diffusion to get the target image.

Experimental Results
Quantitative Analysis: In this section, we present quantitative results on three datasets.
Comparison With Existing Methods. To validate the effectiveness of the proposed InstDiffEdit, we compare it with five diffusion-based methods, whose results are given in Tab. 1 and Fig. 5. The latent-based methods, i.e., SDEdit (Meng et al. 2021) and CycleDiffusion (Wu and De la Torre 2022), rely on the association between the generated image's latent and the original image's latent. These methods offer the advantage of a low time cost for editing. However, their performance is much worse than that of the other methods. Meanwhile, the attention-based methods, i.e., PtP (Hertz et al. 2022) and PnP (Tumanyan et al. 2022), infer on the latent representation of real images, resulting in lower time efficiency and a heavy reliance on the performance of the inversion. As a mask-based model, DiffEdit (Couairon et al. 2022b) achieves significant improvements over all datasets, indicating the effectiveness of generated masks in diffusion-based image editing.

                                   --------- Editing-Mask ---------   -- ImageNet --   -------- Imagen --------
Category   Models          Time↓   IOU↑   Cm(%)↑  Cnon(%)↓  rate↑     LPIPS↓  CSFID↓   LPIPS↓  FID↓   CLIPScore↑
Latent     SDEdit          3.0     -      11.6    8.4       1.38      31.1    76.5     32.1    75.2   0.238
           CycleDiffusion  5.2     -      12.1    7.3       1.66      31.1    87.5     25.8    63.0   0.246
Attention  PtP             18.2    -      16.8    12.9      1.30      -       -        42.8    85.67  0.240
           PnP             80.0    -      12.1    7.8       1.56      27.3    76.8     22.2    61.6   0.240
Mask       DiffEdit        64.0    33.0   19.5    8.0       2.45      27.9    70.9     29.7    58.8   0.247
           InstDiffEdit    10.8    56.2   22.7    6.1       3.71      28.6    65.1     17.0    55.3   0.249

Table 1: Comparison with existing methods on three datasets. The performance of mask-based methods is far ahead of the other methods. Moreover, InstDiffEdit leads by 70.3% on IOU and 51.4% on changing rate Cm/Cnon compared with the SOTA method DiffEdit. All experiments are conducted on an NVIDIA A100.

Figure 5: The trade-offs of existing methods between different metrics. We conduct experiments by using two different metrics as the independent and dependent variables, respectively. The proposed InstDiffEdit has the best trade-offs.
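The Editing-Mask metrics above (IOU, Cm, Cnon, and the change rate Cm/Cnon) are a few array operations; a minimal sketch, assuming binary masks and images in [0, 1] (the exact change measure is not specified in the text, so mean absolute difference here is an assumption):

```python
import numpy as np

def editing_mask_metrics(pred_mask, gt_mask, edited, original):
    """IOU of predicted vs. human-labeled masks, plus the average change
    inside (Cm) and outside (Cnon) the human-labeled mask."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    iou = 100.0 * inter / max(union, 1)
    change = np.abs(edited - original).mean(axis=-1)   # per-pixel difference
    c_m = 100.0 * change[gt_mask > 0].mean()
    c_non = 100.0 * change[gt_mask == 0].mean()
    return iou, c_m, c_non, c_m / c_non                # rate = Cm / Cnon
```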
Specifically, on our proposed Editing-Mask, DiffEdit's changing rate Cm/Cnon far exceeds that of the latent-based and attention-based methods. However, DiffEdit still requires a much longer inference time. In stark contrast, our InstDiffEdit achieves up to 5 to 6 times faster inference than DiffEdit, while obtaining more accurate masks. InstDiffEdit also demonstrates improvements in IOU with the ground-truth masks and in changing rate, by 70.3% and 51.4%, respectively. This strongly confirms that the proposed mask generation scheme can generate more accurate masks. Results on ImageNet show that InstDiffEdit generally outperforms DiffEdit in terms of image quality, although its LPIPS score is slightly worse. Additionally, InstDiffEdit's performance on the CSFID benchmark significantly outperforms DiffEdit by +21.1%. Similar results are also observed on the Imagen benchmark, where InstDiffEdit excels in both image quality and image-text matching, achieving a performance increase of +44.8% compared to DiffEdit on LPIPS.

We also depict the performance trade-offs between different metrics in Fig. 5. These results are achieved by tuning the hyper-parameters of each method based on the target metric. From these figures, we can first conclude that the proposed InstDiffEdit consistently achieves the best trade-offs on all metric pairs. We observe that InstDiffEdit significantly outperforms the other methods under all conditions. These results further confirm the advantages of InstDiffEdit for diffusion-based image editing.

Ablation Study. Tab. 2 presents ablation results for different settings of the noise strength r in Eq. 1 and the binarization threshold φ. In the first row, we assess the method's performance without a mask, and the insufficient performance indicates that mask-free methods are inferior for image editing.

r     φ      IOU↑   Cm(%)↑  Cnon(%)↓  Cm/Cnon↑
0.5   None   -      11.6    8.4       1.38
0.4   0.1    52.9   26.0    8.2       3.16
0.4   0.2    55.7   21.8    6.0       3.63
0.4   0.3    52.0   17.3    4.7       3.68
0.5   0.1    51.9   27.4    8.7       3.16
0.5   0.2    56.2   22.7    6.1       3.71
0.5   0.3    54.3   18.2    4.8       3.81
0.6   0.1    49.6   28.1    9.4       2.98
0.6   0.2    54.6   24.3    6.7       3.60
0.6   0.3    54.2   19.3    5.1       3.76

Table 2: Ablation study of noise strength r and binarization threshold φ on Editing-Mask.

Secondly, as the noise strength r increases, the model obtains less information from the original image and tends to generate masks with larger areas, which results in an upward trend of Cm and Cnon (row 2 vs. row 5 vs. row 8). However, the IOU with the ground-truth mask and the change rate exhibit a trend of initially increasing and then decreasing. Additionally, as the binarization threshold φ decreases, the mask tends to cover a larger region, resulting in a phenomenon similar to the one discussed previously. Therefore, we select r = 0.5 and φ = 0.2, which yields the highest IOU and superior performance on the change rate.

Qualitative Analysis: To obtain deeper insight into InstDiffEdit, we visualize the editing results of our InstDiffEdit and the compared methods on Editing-Mask, as shown in Fig. 6.
Qualitative Analysis To obtain deep insight into InstDiffEdit, we visualize the editing results of our InstDiffEdit The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7869 Input Image Edit Text SDEdit DiffEdit InstDiffEdit Mask-gt CycleDiff PnP PtP Siberian husky speedboat indigo bunting German shepherd mountain bike tennis ball Shiba Inu dog on top of a mountain British Shorthair cat skateboarding sunglasses Figure 6: Visualizations of the generated masks and edited images of InstDiffEdit and the compared methods. Compared with DiffEdit, the masks of InstDiffEdit are closer to the human-labeled ones. Moreover, the comparisons with the latent-based and attention-based approaches also show the merit of the instant mask in our InstDiffEdit. The red boxes refers to failed editions. and other compared methods on Editing-Mask, as shown in Fig. 6. It can be first seen that both latent-based and attention-based approaches lack explicit constraints on the area to edit, which may result in unexpected generations. For instance, in the case of the “German Shepherd” image in the 4th column, DiffEdit and InstDiffEdit successfully modify the object while preserving the background, while other mask-free methods obviously change the background. However, a noteworthy disparity exists between the generated masks of DiffEdit and the human-labeled masks. Specifically, the masks produced by DiffEdit are somewhat inaccurate, and exhibits peculiar shape outlines. In contrast, our generated masks are significantly superior to those generated by DiffEdit, leading better editing results. For instance, in the case of “speedboat” image in the 3rd column, our mask accurately encompasses the primary object “boat”, whereas the mask generated by DiffEdit is nonrepresentative. Consequently, our approach achieves successful editing, whereas DiffEdit fails to do so. These results are consistent with IOU performance presented in Tab. 1. Conclusion In this paper, we propose a novel and efficient method, called InstDiffEdit for diffusion-based semantic image editing. As an plug-and-play component, InstDiffEdit can be directly applied to most diffusion models without any additional training or human intervention. Experimental results not only demonstrate the superior performance of InstDiffEdit in semantic image editing tasks, but also confirm its superiority in computation efficiency, e.g., up to 5 to 6 times faster than DiffEdit. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7870 Acknowledgments This work was supported by National Key R&D Program of China (No.2023YFB4502804) , the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U22B2051, No. U21B2037, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), the Key Research and Development Program of Zhejiang Province (No. 2022C01011), the Natural Science Foundation of Fujian Province of China (No.2021J01002, No.2022J06001), and partially sponsored by CCF-NetEase ThunderFire Innovation Research Funding (NO. CCF-Netease 202301). References Avrahami, O.; Lischinski, D.; and Fried, O. 2022. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18208–18218. Balaji, Y.; Nah, S.; Huang, X.; Vahdat, A.; Song, J.; Kreis, K.; Aittala, M.; Aila, T.; Laine, S.; Catanzaro, B.; et al. 2022. 
Couairon, G.; Grechka, A.; Verbeek, J.; Schwenk, H.; and Cord, M. 2022a. FlexIT: Towards flexible semantic image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18270–18279.
Couairon, G.; Verbeek, J.; Schwenk, H.; and Cord, M. 2022b. DiffEdit: Diffusion-based semantic image editing with mask guidance. arXiv preprint arXiv:2210.11427.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A. C.; and Bengio, Y. 2014. Generative Adversarial Nets. 2672–2680.
Hertz, A.; Mokady, R.; Tenenbaum, J.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2022. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626.
Hessel, J.; Holtzman, A.; Forbes, M.; Bras, R. L.; and Choi, Y. 2021. CLIPScore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851.
Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2021. Alias-free generative adversarial networks. Advances in Neural Information Processing Systems, 34: 852–863.
Karras, T.; Laine, S.; and Aila, T. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4401–4410.
Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2020. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8110–8119.
Kawar, B.; Zada, S.; Lang, O.; Tov, O.; Chang, H.; Dekel, T.; Mosseri, I.; and Irani, M. 2022. Imagic: Text-based real image editing with diffusion models. arXiv preprint arXiv:2210.09276.
Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Lugmayr, A.; Danelljan, M.; Romero, A.; Yu, F.; Timofte, R.; and Van Gool, L. 2022. RePaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11461–11471.
Meng, C.; He, Y.; Song, Y.; Song, J.; Wu, J.; Zhu, J.-Y.; and Ermon, S. 2021. SDEdit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations.
Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741.
Patashnik, O.; Garibi, D.; Azuri, I.; Averbuch-Elor, H.; and Cohen-Or, D. 2023. Localizing Object-level Shape Variations with Text-to-Image Diffusion Models. arXiv preprint arXiv:2303.11306.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.
Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35: 36479–36494.
Tumanyan, N.; Geyer, M.; Bagon, S.; and Dekel, T. 2022. Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation. arXiv preprint arXiv:2211.12572.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, J.; Lu, G.; Xu, H.; Li, Z.; Xu, C.; and Fu, Y. 2022. ManiTrans: Entity-Level Text-Guided Image Manipulation via Token-wise Semantic Alignment and Generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10707–10717.
Wu, C. H.; and De la Torre, F. 2022. Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance. arXiv preprint arXiv:2210.05559.
Xia, W.; Yang, Y.; Xue, J.-H.; and Wu, B. 2021. TediGAN: Text-guided diverse face image generation and manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2256–2265.
Xu, T.; Zhang, P.; Huang, Q.; Zhang, H.; Gan, Z.; Huang, X.; and He, X. 2018. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1316–1324.
Zhan, F.; Yu, Y.; Wu, R.; Zhang, J.; Lu, S.; Liu, L.; Kortylewski, A.; Theobalt, C.; and Xing, E. 2021. Multimodal image synthesis and editing: A survey. arXiv preprint arXiv:2112.13592.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 586–595.
VQCNIR: Clearer Night Image Restoration with Vector-Quantized Codebook
Wenbin Zou1, Hongxia Gao1,2*, Tian Ye3, Liang Chen4, Weipeng Yang1, Shasha Huang1, Hongshen Chen1, Sixiang Chen3
1The School of Automation Science and Engineering, South China University of Technology, Guangzhou
2Research Center for Brain-Computer Interface, Pazhou Laboratory, Guangzhou
3The Hong Kong University of Science and Technology, Guangzhou
4College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou
[email protected]
*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Night photography often struggles with challenges like low light and blurring, stemming from dark environments and prolonged exposures. Current methods either disregard priors and directly fit end-to-end networks, leading to inconsistent illumination, or rely on unreliable handcrafted priors to constrain the network, thereby introducing greater error into the final result. We believe in the strength of data-driven high-quality priors and strive to offer a reliable and consistent prior, circumventing the restrictions of manual priors. In this paper, we propose Clearer Night Image Restoration with Vector-Quantized Codebook (VQCNIR) to achieve remarkable and consistent restoration outcomes on real-world and synthetic benchmarks. To ensure the faithful restoration of details and illumination, we propose the incorporation of two essential modules: the Adaptive Illumination Enhancement Module (AIEM) and the Deformable Bi-directional Cross-Attention (DBCA) module. The AIEM leverages the inter-channel correlation of features to dynamically maintain illumination consistency between degraded features and high-quality codebook features. Meanwhile, the DBCA module effectively integrates texture and structural information through bi-directional cross-attention and deformable convolution, resulting in enhanced fine-grained detail and structural fidelity across parallel decoders. Extensive experiments validate the remarkable benefits of VQCNIR in enhancing image quality under low-light conditions, showcasing its state-of-the-art performance on both synthetic and real-world datasets. The code is available at https://github.com/AlexZou14/VQCNIR.

Figure 1: Quantitative comparisons with state-of-the-art methods. (a) PSNR and LPIPS results on the LOL-Blur dataset. (b) Results for five perceptual metrics on the Real-LOL-Blur dataset. For PSNR, MUSIQ (2021), and NRQM (2017) higher is better, while lower is better for LPIPS (2018a), NIQE (2012), BRISQUE (2012), and PI (2018).

Introduction
To obtain reliable images in night scenes, long exposure is often used to allow more available light to illuminate the image. However, images captured in this way still suffer from low visibility and color distortion. Moreover, long exposure is susceptible to external scene disturbances, such as camera shake and dynamic scenes, which can cause motion blur and noise in the images (2022). Therefore, night images often exhibit complex degradation problems (2022a; 2022; 2023) such as low illumination and blur, making the recovery of high-quality images with realistic texture and normal lighting conditions extremely challenging.
With the great success of deep learning methods (2022; 2023d; 2023b; 2023c; 2023; 2023a) in image restoration, numerous deep learning-based algorithms have been proposed to tackle this challenging task. Currently, most researchers only consider the low-light problem in night images and have proposed numerous low-light image enhancement (LLIE) methods (2017; 2018; 2019; 2020; 2021; 2019; 2021). Although these LLIE methods can produce visually pleasing results, their generalization ability is limited in real night scenes. This is mainly because LLIE methods focus primarily on enhancing image luminance and reducing noise, while ignoring the spatial degradation caused by blur, which leads to ineffective recovery of sharp images. An intuitive idea is to combine image deblurring methods with LLIE methods to address this problem. However, most existing deblurring methods (2021; 2021; 2019; 2022; 2022b) are trained on datasets captured under normal illumination conditions, which makes them unsuitable for night image deblurring. In particular, due to the poor visibility in dark regions of night images, these methods may fail to effectively capture motion blur cues, resulting in unsatisfactory deblurring performance. Therefore, simply cascading LLIE and deblurring methods does not produce satisfactory recovery results. To better handle the joint degradation of low illumination and blur, Zhou et al. (2022) first proposed the LOL-Blur dataset and an end-to-end encoder-decoder network called LEDNet. LEDNet achieves high performance on the synthetic LOL-Blur dataset. However, its generalization ability in real scenes is still limited.

The aforementioned night restoration methods have difficulty recovering correct textures and reliable illumination from low-quality night images. This is due to the lack of stable and reliable priors, as most existing priors are generated from low-quality images. For instance, Retinex-based techniques (2018; 2019; 2021) employ illumination estimation through the decomposition of low-quality images, while blur kernels are estimated from the same degraded inputs. However, the biased estimation of priors leads to cumulative errors in the final outcomes. Therefore, we introduce the vector quantization (VQ) codebook as a credible and reliable external feature library to provide high-quality priors for purely data-driven image restoration, instead of relying on vulnerable handcrafted priors. The VQ codebook is an implicit prior generated by a VQGAN (2021) trained on a vast corpus of high-fidelity clean images. Hence, a well-trained VQ codebook can provide comprehensive, high-quality priors for complex degraded images, effectively addressing complex degradation. Furthermore, inconsistent illumination and incorrect matching between the degraded features of night images and the pristine features in the VQ codebook can lead to unsatisfactory visual effects when directly reconstructing with the codebook. They may even amplify blur and produce artifacts in the restored images. Hence, the pivotal step towards harnessing codebook priors for the restoration of night blurred images lies in precisely aligning the high-quality codebook features.

In this paper, we propose a novel method called Clearer Night Image Restoration with Vector-Quantized Codebook (VQCNIR) for night image restoration.
To address the aforementioned key considerations, our proposed VQCNIR incorporates two purpose-built modules. Specifically, we design the Adaptive Illumination Enhancement Module (AIEM), which leverages inter-channel correlations of features to estimate curve parameters and adaptively enhances illumination in the features. This effectively addresses inconsistent illumination between degraded features and high-quality VQ codebook features. To ameliorate the feature mismatch between degraded and high-quality features, we propose a parallel decoder integrating Deformable Bi-directional Cross-Attention (DBCA). This parallel design effectively incorporates high-quality codebook features while efficiently fusing texture and structural information from the parallel encoder. Our proposed DBCA performs context modeling between high- and low-quality features, adaptively fusing them to gradually recover fine details that enhance overall quality. As depicted in Figure 1, our method not only achieves superior performance on synthetic data but also generalizes well to real-world scenes. Extensive experiments on publicly available datasets demonstrate that our method surpasses existing state-of-the-art methods on both distortion and perceptual metrics.

Our key contributions are summarized as follows:
• We propose VQCNIR, a new framework that formulates night image restoration as a matching and fusion problem between degraded and high-quality features by introducing a high-quality codebook prior. This addresses the limitations of previous methods that rely solely on low-quality inputs, and achieves superior performance.
• We propose an adaptive illumination enhancement module that utilizes inter-channel dependencies to estimate curve parameters. This effectively addresses the inconsistency of illumination between the degraded features and high-quality VQ codebook features.
• We further propose a deformable bi-directional cross-attention, which utilizes a bi-directional cross-attention mechanism and deformable convolution to address the misalignment issue between features from the parallel decoder and restore more accurate texture details.

Related Work

Image Deblurring
Recent advances in deep learning techniques have greatly impacted the field of computer vision. A large number of deep learning methods have been proposed for both single-image and video deblurring tasks (2014; 2017; 2018; 2019; 2019; 2020; 2021) and have demonstrated superior performance. With the introduction of large training datasets for deblurring tasks (2009; 2017; 2019), many researchers (2009; 2019) have adopted end-to-end networks to directly recover clear images. Despite the fact that end-to-end methods outperform traditional approaches, they may not be effective in cases with severe blurring. To improve network performance, some methods (2017; 2018; 2021) use multi-scale architectures to enhance deblurring at different scales. However, the limited ability of these methods to capture the correct blur cues in low-light conditions, particularly in dark areas, has hindered their effectiveness in handling low-light blurred images. To tackle this issue, Zhou et al. (2022) introduce a night image blurring dataset and develop an end-to-end UNet architecture that incorporates a learnable non-linear layer to effectively enhance dark regions without overexposing other areas.

Low-Light Image Enhancement
Recent years have witnessed the impressive success of deep learning-based low-light image enhancement (LLIE) since the first pioneering work (2022).
Many end-to-end methods (2017; 2019) have been proposed for enhancing image illumination using an encoder-decoder framework. To further improve the performance of LLIE, researchers have developed deep Retinex-based methods (2018; 2019; 2021) inspired by Retinex theory, which employ dedicated subnetworks to enhance the illuminance and reflectance components and achieve better recovery performance. However, such methods have limitations, as the enhancement results strongly depend on the characteristics of the training data. To improve the generalization ability of the network, researchers (2020; 2021; 2021) have proposed a number of unsupervised methods. For example, Jiang et al. (2021) introduced self-regularization and unpaired training into LLIE with EnlightenGAN. Additionally, Guo et al. (2020) proposed a fast and flexible method for estimating image enhancement deep curves that does not require any normal-illumination reference images during training.

Figure 2: The framework of the proposed VQCNIR. It consists of an encoder, several adaptive illumination enhancement modules (AIEM), and a parallel decoder with a deformable bi-directional cross-attention (DBCA), allowing the network to effectively exploit high-quality codebook prior information.

Vector-Quantized Codebook
VQVAE (2017) is the first work to introduce vector quantization (VQ) techniques into an autoencoder-based generative model to achieve superior image generation results. Specifically, the encoded latent variables are quantized to their nearest neighbors in a learnable codebook, and the resulting quantized latent variables are used to reconstruct the data samples. Building upon VQVAE, subsequent work has proposed various improvements to codebook learning. For instance, VQGAN (2021) utilizes generative adversarial learning and refined codebook learning to further enhance the perceptual quality of reconstructed images. The well-trained codebook can serve as a high-quality prior that can be leveraged for various image restoration tasks such as image super-resolution and face restoration. To this end, Chen et al. (2022a) introduce a VQ codebook prior for blind image super-resolution, which matches distorted LR image features with distortion-free HR features from a pre-trained HR prior. Furthermore, Gu et al. (2022) explore the impact of internal codebook properties on reconstruction performance and extend discrete codebook techniques to face image restoration. Drawing inspiration from these works, we apply the high-quality codebook prior to night image restoration.

Methodology

Framework Overview
To improve the recovery of high-quality images with realistic textures and normal illumination from a night image x containing complex degradation, we introduce a vector-quantized codebook as high-quality prior information and design a night image restoration network (VQCNIR). The overview of the VQCNIR framework is illustrated in Figure 2. VQCNIR comprises an encoder E, an adaptive illumination enhancement module, a high-quality codebook Z, and two decoders G and D. Decoder G is a pre-trained decoder from VQGAN with fixed parameters. Decoder D is the primary decoder, which progressively recovers fine details by fusing the high-quality features from decoder G.
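Before detailing each component, the overall dataflow can be summarized in code. The following PyTorch-style sketch only illustrates how the pieces described above compose; the sub-module interfaces and names are our own assumptions, not the authors' released implementation.

```python
import torch.nn as nn

class VQCNIRSkeleton(nn.Module):
    """Illustrative composition of VQCNIR's components (not the official code)."""

    def __init__(self, encoder, aiem, quantize, decoder_g, decoder_d):
        super().__init__()
        self.encoder = encoder        # E: night image -> latent z_e
        self.aiem = aiem              # adaptive illumination enhancement on z_e
        self.quantize = quantize      # q(.): nearest-code lookup in codebook Z
        self.decoder_g = decoder_g    # G: pre-trained VQGAN decoder
        self.decoder_d = decoder_d    # D: primary decoder with DBCA fusion
        for p in self.decoder_g.parameters():
            p.requires_grad = False   # decoder G keeps fixed parameters

    def forward(self, x_night):
        z_e = self.aiem(self.encoder(x_night))   # illumination-consistent latents
        z_q = self.quantize(z_e)                 # high-quality codebook features
        prior_feats = self.decoder_g(z_q)        # texture-rich prior branch
        return self.decoder_d(z_e, prior_feats)  # DBCA fuses the two branches
```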
VQ Codebook for Priors
VQ Codebook: We first briefly describe the VQGAN (2021) model and its codebook; more details can be found in (2021). Given a high-quality image x_h ∈ R^{H×W×3} with normal light, the encoder E maps the image x_h to its spatial latent representation ẑ = E(x_h) ∈ R^{h×w×n_z}, where n_z is the dimension of the latent vectors. Then, for each element ẑ_i ∈ R^{n_z}, the nearest vector z_k in the codebook under the Euclidean distance is found, yielding the VQ representation z_q through the element-by-element quantization process q(·):

z_q = q(\hat{z}) := \left( \arg\min_{z_k \in \mathcal{Z}} \| \hat{z}_i - z_k \|_2^2 \right) \in \mathbb{R}^{h \times w \times n_z},   (1)

where the codebook is \mathcal{Z} = \{z_k\}_{k=1}^{K} \in \mathbb{R}^{K \times n_z} with K discrete codes. Then, the decoder G maps the quantized representation z_q back into sRGB space. The overall reconstruction process can be formulated as follows:

\hat{x}_h = G(z_q) = G(q(E(x_h))) \approx x_h.   (2)

VQ Codebook for Night Image Restoration: To fully explore the effect of the VQ codebook prior on night image restoration, we conducted several preliminary experiments to analyze the advantages and disadvantages of VQGAN. First, we use a well-trained VQGAN to reconstruct real images. The experimental results are shown in Figure 3 (top). From the figure, we can see that VQGAN can generate vivid texture details in the reconstructed images. However, some structural information is lost in the vector quantization process, resulting in distortion and artifacts in the reconstructed image. Therefore, reconstruction that depends solely on the quantized features in the codebook does not yield satisfactory recovery results. The most intuitive idea is to combine the texture information generated by the quantized features from the codebook with the structural information of the latent representation to avoid structural distortion of the image.

Figure 3: VQGAN reconstruction results. On the left is the input image and on the right is the reconstructed image. VQGAN can provide rich detail for high-quality images but can cause some structural distortion. In degraded images, the distortion is worsened because the degraded features do not match the correct high-quality codebook features.

Subsequently, we explore the effectiveness of a VQGAN trained on high-quality images for the reconstruction of degraded night images. As shown in Figure 3 (bottom), the restored image cannot recover normal illumination because the illumination of the input image is inconsistent with that of VQGAN's training set. Moreover, we found that VQGAN further deteriorates blurred textures and produces artifacts. This is attributed to the difficulty of matching the correct VQ codebook features, which leaves a VQGAN trained on high-quality images unable to recover from low illumination and blur. Therefore, we design an adaptive illumination enhancement module and a deformable bi-directional cross-attention to address the aforementioned low-light and blur problems, respectively.
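As a concrete illustration of the element-by-element quantization q(·) in Equation 1, the nearest-neighbour lookup can be sketched as below. This is an assumption-level sketch rather than VQGAN's exact implementation; a straight-through estimator is added so that the encoder remains trainable through the non-differentiable arg-min.

```python
import torch

def quantize(z_hat: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Eq. (1): replace each latent vector by its nearest codebook entry.

    z_hat:    (B, h, w, n_z) latent representation from the encoder
    codebook: (K, n_z) learned discrete codes Z = {z_k}
    returns:  (B, h, w, n_z) quantized representation z_q
    """
    B, h, w, n_z = z_hat.shape
    flat = z_hat.reshape(-1, n_z)               # (B*h*w, n_z)
    # Squared Euclidean distance to every code, then pick the arg-min index.
    dist = torch.cdist(flat, codebook) ** 2     # (B*h*w, K)
    idx = dist.argmin(dim=1)
    z_q = codebook[idx].reshape(B, h, w, n_z)
    # Straight-through estimator: forward uses z_q, gradients flow to z_hat.
    return z_hat + (z_q - z_hat).detach()

# Tiny usage example with random tensors:
z = torch.randn(2, 16, 16, 64)
Z = torch.randn(1024, 64)
print(quantize(z, Z).shape)  # torch.Size([2, 16, 16, 64])
```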
Adaptive Illumination Enhancement
Based on the previous observations and analysis, we design an Adaptive Illumination Enhancement Module (AIEM) to solve the problem of illumination inconsistency between the quantized features and the latent features obtained from the encoder, as shown in Figure 2. This module consists of two parts: Hierarchical Information Extraction (HIE) and Illumination Mutual Attention Enhancement (IMAE).

Hierarchical Information Extraction: Local lighting, such as light sources, is often observed in night-time environments, but global operations often over- or under-enhance these local regions. Thus, we employ channel attention and large-kernel convolution attention to extract spatial information at different hierarchies.

Figure 4: The architecture of Illumination Mutual Attention Convolution (IMAConv).

Specifically, HIE first employs layer normalization to stabilize the training and then performs spatial information fusion over different receptive fields. A residual shortcut is used to facilitate training convergence. Following the normalization layer, a point-wise convolution and a 3 × 3 depth-wise convolution are used to capture spatially invariant features. Then, three parallel operators are used to aggregate channel and spatial information. The first operator uses SimpleGate (2022b) to apply non-linear activation to the spatially invariant features. The second operator is channel attention (2018b), which modulates the feature channels. The third is large-kernel convolution attention (2022), which handles spatial features. The three branches output feature maps of the same size. Point-wise multiplication is used to directly fuse the diverse features from the three branches. Finally, the output features are adjusted by a point-wise convolution.

Illumination Mutual Attention Enhancement: Based on the hierarchical information of different receptive fields obtained from the HIE, IMAE first utilizes layer normalization to stabilize the training and then applies illumination enhancement to the features. Specifically, we design a novel illumination mutual attention convolution (IMAConv) that uses the dependencies between feature channels to estimate curve parameters and thus adjust the illumination of the features. Two point-wise convolutions are used to adjust the input and output features of IMAConv. Residual connections are used to facilitate training convergence.

Illumination Mutual Attention Convolution: Considering that the illumination variation is similar between feature channels, and inspired by Zero-DCE (2020), we introduce curve estimation and channel mutual mapping to propose an illumination mutual attention convolution that adjusts the pixel range of the features to enhance illumination, as shown in Figure 4. Specifically, given the input features of IMAConv as x_f ∈ R^{C_in × H_f × W_f}, we first divide x_f into S parts along the channel dimension:

x_f^1, x_f^2, \ldots, x_f^S = \mathrm{split}(x_f),   (3)

where split(·) denotes the split operation.
For each part x_f^i ∈ R^{(C_in/S) × H_f × W_f}, we concatenate the channel features excluding x_f^i as the complement to x_f^i, denoted as x̄_f^i. Both x_f^i and x̄_f^i are passed into the illumination mutual enhancement branch, which estimates multiple curve parameters A = {A_i}_{i=1}^{N} through the curve estimation network F. Each A_i is used to adjust the range of pixel values of the features. The whole process is formulated as:

A_1, A_2, \ldots, A_n = \mathrm{split}(F(\bar{x}_f^i)),   (4)

y_f^i = C_n(x_f^i, A_1, A_2, \cdots, A_n),   (5)

where F(·) and C_n(·) denote the curve estimation network and the high-order curve mapping function, respectively. The curve estimation network F consists of three convolutional layers with kernel sizes of 5, 3, and 1, respectively, two activation functions, and a sigmoid function. The high-order curve mapping function C_n can be formulated as follows:

C_n(x_f^i) = \begin{cases} A_1 x_f^i (1 - x_f^i) + x_f^i, & n = 1 \\ A_{n-1} C_{n-1}(x_f^i)\left(1 - C_{n-1}(x_f^i)\right) + C_{n-1}(x_f^i), & n > 1 \end{cases}   (6)

After illumination enhancement, for all y_f^1, y_f^2, ..., y_f^S, we use a 3 × 3 convolution layer to generate the features z_f^i = Conv_i(y_f^i). Finally, the different features z_f^1, z_f^2, ..., z_f^S are concatenated to form the output of IMAConv:

z_f = \mathrm{Concat}(z_f^1, z_f^2, \ldots, z_f^S).   (7)

Deformable Bi-directional Cross-Attention
As previously described and analyzed, the high-quality quantized features obtained from the codebook are not flawless. Structural warping and textural distortion lead to a more severe misalignment between high-quality VQ codebook features and the original degraded features. Therefore, we propose the Deformable Bi-directional Cross-Attention (DBCA) to fuse high-quality VQ codebook features and degraded features. Unlike the conventional cross-attention method (2021), our DBCA integrates two different features using a bi-directional cross-attention mechanism and employs deformable convolutions to effectively correct the blur degradation in the degraded features.

Figure 5: The architecture of deformable bi-directional cross-attention (DBCA). The offset estimator module consists of a number of large-kernel convolutions that use information from large receptive fields to help fuse the features of the two decoders.

As shown in Figure 5, given the features F_D and F_G of decoders D and G, they are first mapped to Q_D = W_D^p LN(F_D) and Q_G = W_G^p LN(F_G) via normalization and linear layers. We further utilize linear layers to map these features to the corresponding values V_D and V_G. We reshape Q_D, Q_G, V_D, and V_G into the shape (B, C, HW) and fuse the two features using the following bi-directional cross-attention formulas:

A_D = \mathrm{Softmax}(Q_D Q_G^T / \sqrt{C})\, V_D,   (8)

A_G = \mathrm{Softmax}(Q_D Q_G^T / \sqrt{C})\, V_G,   (9)

F_D^o = \gamma_D A_G + F_D,   (10)

F_G^o = \gamma_G A_D + F_G,   (11)

where Softmax(·) denotes the softmax function, and A_D and A_G respectively represent the attention maps for features D and G. γ_D and γ_G are trainable channel-wise scales initialized with zeros to stabilize training. To better fuse the high-quality codebook prior features into the degraded features, we first generate an offset by concatenating the two output features. Then, we use the generated offset in a deformable convolution to warp the texture features and effectively remove the blur degradation, which can be formalized as follows:

\mathrm{offset} = \mathrm{LKConv}(\mathrm{Concat}(F_D^o, F_G^o)),   (12)

F_{out} = \mathrm{DeformConv}(F_D^o, \mathrm{offset}),   (13)

where LKConv(·) and DeformConv(·) denote the 7 × 7 convolution and the deformable convolution, respectively.
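A minimal sketch of the bi-directional cross-attention of Equations 8-11 follows; the layer names are illustrative, and the offset estimation and deformable convolution of Equations 12-13 are omitted (torchvision's DeformConv2d could fill that role).

```python
import torch
import torch.nn as nn

class BiCrossAttention(nn.Module):
    """Bi-directional cross-attention of Eqs. (8)-(11) (illustrative sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        self.norm_d, self.norm_g = nn.LayerNorm(channels), nn.LayerNorm(channels)
        self.q_d, self.q_g = nn.Linear(channels, channels), nn.Linear(channels, channels)
        self.v_d, self.v_g = nn.Linear(channels, channels), nn.Linear(channels, channels)
        # Trainable channel-wise scales, zero-initialized to stabilize training.
        self.gamma_d = nn.Parameter(torch.zeros(channels, 1))
        self.gamma_g = nn.Parameter(torch.zeros(channels, 1))

    def forward(self, f_d, f_g):
        B, C, H, W = f_d.shape
        # (B, C, H, W) -> (B, HW, C) so LayerNorm/Linear act on the channel dim.
        tok_d = f_d.flatten(2).transpose(1, 2)
        tok_g = f_g.flatten(2).transpose(1, 2)
        q_d = self.q_d(self.norm_d(tok_d)).transpose(1, 2)  # (B, C, HW)
        q_g = self.q_g(self.norm_g(tok_g)).transpose(1, 2)  # (B, C, HW)
        v_d = self.v_d(tok_d).transpose(1, 2)               # (B, C, HW)
        v_g = self.v_g(tok_g).transpose(1, 2)
        attn = torch.softmax(q_d @ q_g.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, C, C)
        a_d, a_g = attn @ v_d, attn @ v_g                   # Eqs. (8)-(9)
        f_d_out = (self.gamma_d * a_g).view(B, C, H, W) + f_d  # Eq. (10)
        f_g_out = (self.gamma_g * a_d).view(B, C, H, W) + f_g  # Eq. (11)
        return f_d_out, f_g_out
```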
Training Objectives of VQCNIR
The training objective of VQCNIR comprises four components: (1) a pixel reconstruction loss L_pix that minimizes the distance between the outputs and the ground truth; (2) a code alignment loss L_ca that enforces the codes of the night images to be aligned with those of the corresponding ground truth; (3) a perceptual loss L_per, which operates in the feature space and aims to enhance the perceptual quality of the restored images; and (4) an adversarial loss L_adv for restoring realistic textures. Specifically, we adopt the commonly used L1 loss in the pixel domain as the reconstruction loss:

\mathcal{L}_{pix} = \| x_h - \mathrm{VQCNIR}(x_n) \|_1,   (14)

where x_h and x_n denote the high-quality ground truth and the night image, respectively. To improve the matching of night-image codes with high-quality-image codes, we adopt the L2 loss to measure their distance:

\mathcal{L}_{ca} = \| z^e - z_q^e \|_2^2,   (15)

where z^e and z_q^e are the night image code and the ground truth code, respectively. The total training objective is the combination of the above losses:

\mathcal{L}_{VQCNIR} = \lambda_{pix}\mathcal{L}_{pix} + \lambda_{ca}\mathcal{L}_{ca} + \lambda_{per}\mathcal{L}_{per} + \lambda_{adv}\mathcal{L}_{adv},   (16)

where λ_pix, λ_ca, λ_per, and λ_adv denote the scale factors of each loss term, respectively.

Method               PSNR↑   SSIM↑   LPIPS↓
Zero-DCE → MIMO      17.68   0.542   0.510
LLFlow → Restormer   21.50   0.746   0.357
LLFlow → Uformer     21.51   0.750   0.350
MIMO → Zero-DCE      17.52   0.570   0.498
Restormer → LLFlow   21.89   0.772   0.347
Uformer → LLFlow     21.63   0.758   0.342
KinD++*              21.26   0.753   0.359
DeblurGAN-v2*        22.30   0.745   0.356
DMPHN*               22.20   0.817   0.301
MIMO*                22.41   0.835   0.262
Restormer*           23.63   0.841   0.247
LLFlow*              24.48   0.846   0.235
LEDNet*              25.74   0.850   0.224
Ours                 27.79   0.875   0.096

Table 1: Quantitative evaluation on the LOL-Blur dataset. The symbol * indicates the network is retrained on the LOL-Blur dataset. The best and second-best values are indicated with bold and underlined text, respectively.

Experiments

Dataset and Training Details
We train our VQCNIR network on the LOL-Blur dataset (2022), which consists of 170 sequences (10,200 pairs) of training data and 30 sequences (1,800 pairs) of test data. We augment the training data with random rotations of 90°, 180°, and 270°, random flips, and random cropping to a size of 256 × 256. We train our network using the Adam (2014) optimizer with β1 = 0.9 and β2 = 0.99 for a total of 500k iterations. The mini-batch size is set to 8. The initial learning rate is set to 1 × 10−4 and is adjusted progressively with a MultiStepLR schedule. We empirically set λ_pix, λ_ca, λ_per, and λ_adv to {1, 1, 1, 0.1}. All experiments are performed on a PC equipped with an Intel Core i7-13700K CPU, 32 GB of RAM, and an Nvidia RTX 3090 GPU with CUDA 11.2.

Results on LOL-Blur Dataset
In this section, we compare our proposed VQCNIR quantitatively and qualitatively with all the above methods on the LOL-Blur test set (2022). We use the two most widely used metrics, PSNR and SSIM, for a fair evaluation of all methods. In addition, we employ the LPIPS metric to evaluate the perceptual quality of the restored images.

Quantitative Evaluations. Table 1 shows the quantitative results of our method and other methods on the LOL-Blur dataset. As shown, our method outperforms the state-of-the-art LEDNet by 2.05 dB in PSNR and 0.025 in SSIM. Our method also demonstrates clear advantages over existing methods when evaluated by perceptual quality metrics. The effectiveness of our method is well evidenced.
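For reference, all three metrics above are available in common Python packages; the following sketch shows how such an evaluation might be scripted, assuming the scikit-image and lpips packages (which the paper does not name) and HWC float arrays in [0, 1].

```python
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # perceptual metric of Zhang et al. (2018a)

def evaluate(pred, gt):
    """pred, gt: (H, W, 3) float NumPy arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # LPIPS expects (1, 3, H, W) tensors scaled to [-1, 1].
    to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() * 2 - 1
    lp = lpips_fn(to_t(pred), to_t(gt)).item()
    return psnr, ssim, lp
```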
Qualitative Evaluations. Figure 6 shows the visual results of all the compared methods. As the figure shows, most methods are ineffective at removing the blurring effect in severely blurred regions, inevitably introducing artifacts into the restored image. In contrast, our method can effectively recover the correct texture features by using high-quality prior information. These results provide sufficient evidence that the codebook prior proposed by our method is particularly suitable for the task of night image restoration.

Method               MUSIQ↑   NRQM↑   NIQE↓
RUAS → MIMO          34.39    3.322   6.812
LLFlow → Restormer   34.45    5.341   4.803
LLFlow → Uformer     34.32    5.403   4.941
MIMO → Zero-DCE      28.36    3.697   6.892
Restormer → LLFlow   35.42    5.011   4.982
Uformer → LLFlow     34.89    4.933   5.238
KinD++*              31.74    3.854   7.299
DMPHN*               35.08    4.470   5.910
MIMO*                35.37    5.140   5.910
Restormer*           36.65    5.497   5.093
LLFlow*              34.87    5.312   5.202
LEDNet               39.11    5.643   4.764
Ours                 51.04    7.064   4.599

Table 2: Quantitative evaluation on the Real-LOL-Blur dataset. The symbol * indicates the network is retrained on the LOL-Blur dataset. The best and second-best values are indicated with bold and underlined text, respectively.

Results on Real Dataset
To better illustrate the effectiveness of our method in real scenes, we compare our proposed VQCNIR with the above methods quantitatively and qualitatively on the real-world Real-LOL-Blur dataset (2022). Since real scenes lack corresponding reference images for evaluation, three no-reference metrics are used: MUSIQ (2021), NRQM (2017), and NIQE (2012). The MUSIQ metric mainly assesses color contrast and sharpness, which is particularly appropriate for this task.

Quantitative Evaluations. Table 2 exhibits the quantitative results of our method and other methods on the Real-LOL-Blur test set. As shown in Table 2, our method achieves the best NIQE and the highest NRQM scores, indicating that the restored results of our method have better image quality and are consistent with human perception. Moreover, we achieve the highest MUSIQ score, which means that our results are the best in terms of color contrast and sharpness.

Qualitative Evaluations. Figure 7 displays the visual comparison results for all evaluated methods. As evident from the figure, simply cascading deblurring and low-light enhancement techniques can cause issues such as overexposure and blurring of saturated areas in the image. Even the end-to-end methods retrained on the LOL-Blur dataset suffer from severe undesired artifacts and blurring. In contrast, our proposed VQCNIR outperforms these methods in terms of visual quality, demonstrating fewer artifacts and less blurring. This improvement can be attributed to the successful integration of a high-quality codebook prior into the network, which assists in generating high-quality textures. The comparison results on real-world images further demonstrate the superiority of our proposed method.

Figure 6: Visual comparison results on the LOL-Blur dataset (2022). The symbol * indicates the network is retrained on the LOL-Blur dataset. The proposed method produces visually more pleasing results. (Zoom in for the best view.)

Figure 7: Visual comparison on the Real-LOL-Blur dataset (2022). The symbol * indicates the network is retrained on the LOL-Blur dataset. The proposed method produces visually more pleasing results. (Zoom in for the best view.)
Models      Decoder D   AIEM   DBCA   PSNR    SSIM
VQGAN       –           –      –      10.79   0.3028
Setting 1   ✔           –      –      26.58   0.8486
Setting 2   ✔           ✔      –      26.89   0.8599
Setting 3   ✔           –      ✔      27.48   0.8692
VQCNIR      ✔           ✔      ✔      27.79   0.8750

Table 3: Ablation studies of different components. We report the PSNR and SSIM values on the LOL-Blur dataset.

Ablation Study
We implement a series of ablation experiments to validate the effectiveness of each of our proposed modules; the results are shown in Table 3. Initially, we use the VQGAN as our baseline model. Table 3 shows that VQGAN does not effectively address low-light and blur degradation, since VQGAN is a codebook prior learned from high-quality natural images and is unable to correctly match degraded features. By designing the corresponding parallel decoder, the network can then effectively use high-quality priors to assist in the reconstruction of degraded features. However, the illumination inconsistency between the degraded features and the codebook prior can prevent accurate matching of the high-quality prior features, leading to artifacts. Furthermore, the degraded features remain at some distance from the high-quality features. Therefore, AIEM and DBCA can be used to effectively improve network performance and image quality.

Conclusion
In this work, we introduce high-quality codebook priors and propose a new paradigm for night image restoration called VQCNIR. Through analysis, we discover that directly applying codebook priors can result in improper matching between degraded features and high-quality codebook features. To address this, we propose an Adaptive Illumination Enhancement Module (AIEM) and a Deformable Bi-directional Cross-Attention (DBCA) module, leveraging estimated illumination curves and bi-directional cross-attention. By fusing codebook priors and degraded features, VQCNIR effectively restores normal illumination and texture details from night images. Extensive experiments demonstrate the state-of-the-art performance of our method.

Acknowledgments
This work was supported by the Science and Technology Project of Guangzhou under Grant 202103010003 and the Science and Technology Project in key areas of Foshan under Grant 2020001006285.

References
Blau, Y.; Mechrez, R.; Timofte, R.; Michaeli, T.; and Zelnik-Manor, L. 2018. The 2018 PIRM challenge on perceptual image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 0–0.
Chen, C.; Shi, X.; Qin, Y.; Li, X.; Han, X.; Yang, T.; and Guo, S. 2022a. Real-world blind super-resolution via feature matching with implicit high-resolution priors. In Proceedings of the 30th ACM International Conference on Multimedia, 1329–1338.
Chen, C.-F. R.; Fan, Q.; and Panda, R. 2021. Crossvit: Cross-attention multi-scale vision transformer for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 357–366.
Chen, L.; Chu, X.; Zhang, X.; and Sun, J. 2022b. Simple baselines for image restoration. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VII, 17–33. Springer.
Chen, L.; Zhang, J.; Lin, S.; Fang, F.; and Ren, J. S. 2021. Blind deblurring for saturated images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6308–6316.
Chen, S.; Ye, T.; Bai, J.; Chen, E.; Shi, J.; and Zhu, L. 2023a. Sparse Sampling Transformer with Uncertainty-Driven Ranking for Unified Removal of Raindrops and Rain Streaks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13106–13117.
Chen, S.; Ye, T.; Liu, Y.; Bai, J.; Chen, H.; Lin, Y.; Shi, J.; and Chen, E. 2023b. CPLFormer: Cross-scale Prototype Learning Transformer for Image Snow Removal. In Proceedings of the 31st ACM International Conference on Multimedia, 4228–4239.
Chen, S.; Ye, T.; Liu, Y.; Liao, T.; Jiang, J.; Chen, E.; and Chen, P. 2023c. MSP-former: Multi-scale projection transformer for single image desnowing. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE.
Chen, S.; Ye, T.; Xue, C.; Chen, H.; Liu, Y.; Chen, E.; and Zhu, L. 2023d. Uncertainty-Driven Dynamic Degradation Perceiving and Background Modeling for Efficient Single Image Desnowing. In Proceedings of the 31st ACM International Conference on Multimedia, 4269–4280.
Cho, S.-J.; Ji, S.-W.; Hong, J.-P.; Jung, S.-W.; and Ko, S.-J. 2021. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4641–4650.
Esser, P.; Rombach, R.; and Ommer, B. 2021. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12873–12883.
Gu, Y.; Wang, X.; Xie, L.; Dong, C.; Li, G.; Shan, Y.; and Cheng, M.-M. 2022. Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XVIII, 126–143. Springer.
Guo, C.; Li, C.; Guo, J.; Loy, C. C.; Hou, J.; Kwong, S.; and Cong, R. 2020. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1780–1789.
Guo, M.-H.; Lu, C.-Z.; Liu, Z.-N.; Cheng, M.-M.; and Hu, S.-M. 2022. Visual attention network. arXiv preprint arXiv:2202.09741.
Hu, Z.; Cho, S.; Wang, J.; and Yang, M.-H. 2014. Deblurring low-light images with light streaks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3382–3389.
Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; and Wang, Z. 2021. Enlightengan: Deep light enhancement without paired supervision. IEEE Transactions on Image Processing, 30: 2340–2349.
Ke, J.; Wang, Q.; Wang, Y.; Milanfar, P.; and Yang, F. 2021. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5148–5157.
Kingma, D.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. Computer Science.
Kupyn, O.; Martyniuk, T.; Wu, J.; and Wang, Z. 2019. Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8878–8887.
Levin, A.; Weiss, Y.; Durand, F.; and Freeman, W. T. 2009. Understanding and evaluating blind deconvolution algorithms. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 1964–1971. IEEE.
Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.-M.; Gu, J.; and Loy, C. C. 2022. Low-Light Image and Video Enhancement Using Deep Learning: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12): 9396–9416.
Li, C.; Guo, C.; and Loy, C. C. 2021. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(8): 4225–4238.
Liu, Y.; Yan, Z.; Chen, S.; Ye, T.; Ren, W.; and Chen, E. 2023. Nighthazeformer: Single nighttime haze removal using prior query transformer. In Proceedings of the 31st ACM International Conference on Multimedia, 4119–4128.
Liu, Y.; Yan, Z.; Wu, A.; Ye, T.; and Li, Y. 2022. Nighttime image dehazing based on variational decomposition model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 640–649.
Lore, K. G.; Akintayo, A.; and Sarkar, S. 2017. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61: 650–662.
Ma, C.; Yang, C.-Y.; Yang, X.; and Yang, M.-H. 2017. Learning a no-reference quality metric for single-image super-resolution. Computer Vision and Image Understanding, 158: 1–16.
Mittal, A.; Moorthy, A. K.; and Bovik, A. C. 2012. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12): 4695–4708.
Mittal, A.; Soundararajan, R.; and Bovik, A. C. 2012. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters, 20(3): 209–212.
Nah, S.; Hyun Kim, T.; and Mu Lee, K. 2017. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3883–3891.
Shen, Z.; Wang, W.; Lu, X.; Shen, J.; Ling, H.; Xu, T.; and Shao, L. 2019. Human-aware motion deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5572–5581.
Tao, X.; Gao, H.; Shen, X.; Wang, J.; and Jia, J. 2018. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8174–8182.
Van Den Oord, A.; Vinyals, O.; et al. 2017. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30.
Wang, R.; Zhang, Q.; Fu, C.-W.; Shen, X.; Zheng, W.-S.; and Jia, J. 2019. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6849–6857.
Wang, Y.; Wan, R.; Yang, W.; Li, H.; Chau, L.-P.; and Kot, A. 2022a. Low-light image enhancement with normalizing flow. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2604–2612.
Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; and Li, H. 2022b. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17683–17693.
Wei, C.; Wang, W.; Yang, W.; and Liu, J. 2018. Deep retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560.
Ye, T.; Chen, S.; Liu, Y.; Chai, W.; Bai, J.; Zou, W.; Zhang, Y.; Jiang, M.; Chen, E.; and Xue, C. 2023. Sequential Affinity Learning for Video Restoration. In Proceedings of the 31st ACM International Conference on Multimedia, 4147–4156.
Ye, T.; Zhang, Y.; Jiang, M.; Chen, L.; Liu, Y.; Chen, S.; and Chen, E. 2022. Perceiving and modeling density for image dehazing. In European Conference on Computer Vision, 130–145. Springer.
Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; and Yang, M.-H. 2022. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5728–5739.
Zhang, H.; Dai, Y.; Li, H.; and Koniusz, P. 2019. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5978–5986.
Zhang, K.; Luo, W.; Zhong, Y.; Ma, L.; Stenger, B.; Liu, W.; and Li, H. 2020. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2737–2746.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018a. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 586–595.
Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; and Zhang, J. 2021. Beyond brightening low-light images. International Journal of Computer Vision, 129: 1013–1037.
Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018b. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), 286–301.
Zhang, Y.; Zhang, J.; and Guo, X. 2019. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, 1632–1640.
Zheng, C.; Shi, D.; and Shi, W. 2021. Adaptive unfolding total variation network for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4439–4448.
Zhou, S.; Li, C.; and Change Loy, C. 2022. Lednet: Joint low-light enhancement and deblurring in the dark. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VI, 573–589. Springer.
2024
875
18,712
Enhancing Neural Radiance Fields with Adaptive Multi-Exposure Fusion: A Bilevel Optimization Approach for Novel View Synthesis

Yang Zou1*, Xingyuan Li2*, Zhiying Jiang2, Jinyuan Liu2†
1School of Computer Science, The University of Sydney
2School of Software Technology, Dalian University of Technology
[email protected], xingyuan [email protected], [email protected], [email protected]
*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Neural Radiance Fields (NeRF) have made significant strides in the modeling and rendering of 3D scenes. However, due to the complexity of luminance information, existing NeRF methods often struggle to produce satisfactory renderings when dealing with high- and low-exposure images. To address this issue, we propose an innovative approach capable of effectively modeling and rendering images under multiple exposure conditions. Our method adaptively learns the characteristics of images under different exposure conditions through an unsupervised evaluator-simulator structure for HDR (High Dynamic Range) fusion. This approach enhances NeRF's comprehension and handling of light variations, leading to the generation of images with appropriate brightness. Simultaneously, we present a bilevel optimization method tailored for novel view synthesis, aiming to harmonize the luminance information of input images while preserving their structural and content consistency. This approach facilitates the concurrent optimization of multi-exposure correction and novel view synthesis in an unsupervised manner. Through comprehensive experiments conducted on the LOM and LOL datasets, our approach surpasses existing methods, markedly enhancing the task of novel view synthesis for multi-exposure environments and attaining state-of-the-art results. The source code can be found at https://github.com/Archer-204/AME-NeRF.

Introduction
Deep learning has revolutionized numerous fields, with computer vision being one of the most prominent beneficiaries. Techniques such as Neural Radiance Fields (NeRF) (Mildenhall et al. 2020) have shown remarkable promise in synthesizing novel views of complex scenes from sparse 2D observations. However, similar to most computer vision algorithms, such as semantic segmentation and object detection, the performance of these techniques is often compromised under varying exposure conditions (Zhang and Ma 2023; Xu, Haochen, and Ma 2023; Zhang et al. 2023), a common occurrence in real-world image capture.

Figure 1: Visual comparison of our method with SOTA enhancement techniques. Existing approaches primarily focus on either image enhancement (indicated by the green line) or novel view synthesis refinement (denoted by the yellow line). In contrast, our method concurrently optimizes both the NeRF network (highlighted by the blue line) and image enhancement (represented by the red line), leading to a significantly reduced loss.

The challenge lies in the fact that images captured under different exposure conditions can exhibit significant variations in color, brightness, and global detail (Wu et al. 2022; Wu, Chen, and Ma 2022; Han et al. 2022). This variation poses a significant problem for NeRF, which heavily relies on consistent and high-quality input data to generate accurate and realistic outputs.
The limitations of NeRF stem from its viewer-centric approach, which calculates the light emission from a specific location to the viewer while neglecting the interplay between illumination and the scene itself. This emitted light is actually the result of environmental light reflecting off the scene. The reflected light then undergoes further refraction and absorption, leading to attenuation as it travels through the environment once more (Srinivasan et al. 2021). As a result, the NeRF algorithm perceives a dimly lit scene as the result of inadequate radiation from the 3D particles that depict objects within the scene.

Existing solutions to this problem have primarily focused on pre-processing techniques such as image enhancement (Mildenhall et al. 2022; Liu et al. 2023b; Ma et al. 2023; Liu et al. 2022b), brightness adjustment (Jiang et al. 2021), and exposure correction (Nguyen et al. 2023). Although these techniques have demonstrated some effectiveness, enhancing dark or overexposed images using 2D enhancement methods does not ensure precise NeRF estimation. This is because the independent and inconsistent enhancement of 2D images across multiple views could disrupt the consistency of the 3D geometry. Another approach, Aleth-NeRF (Cui et al. 2023), seeks to restructure the volume rendering pipeline. It learns from dark images to comprehend the volumetric object representation and a concealing field under priors in an unsupervised manner. However, Aleth-NeRF is only applicable to dark environments and is not suitable for overexposure conditions due to the design of the concealing field (Cui et al. 2023).

In this paper, we propose a proactive approach to this problem. We present a novel method for NeRF that is designed to operate under multiple exposures in an unsupervised manner, effectively mitigating the impact of exposure variation at the source. Our method employs an evaluator-simulator structure for High Dynamic Range (HDR) (Le et al. 2023) fusion, enabling the NeRF network to perceive and adapt to a wider range of exposure conditions. Furthermore, we introduce a bilevel optimization method (Li, Gu, and Huang 2022) in our framework, which simultaneously optimizes multi-exposure correction and the synthesis of new perspectives in NeRF, leading to superior performance compared to traditional NeRF models. Comparisons with state-of-the-art methods show the superior performance of our proposed method, and ablation studies confirm our hypotheses.

Our main contributions are as follows:
• We propose an unsupervised enhancement framework, trained on sRGB images under multiple exposure conditions, that effectively mitigates the impact of exposure variation. Our method not only enhances image quality but also ensures consistent visual fidelity.
• We introduce a bilevel optimization model that builds a relationship between multi-exposure correction and the synthesis of new perspectives. Consequently, the two tasks mutually reinforce each other, outperforming traditional methods.
• Through extensive experiments, we demonstrate that our approach significantly outperforms state-of-the-art methods on the LOM (Cui et al. 2023) and LOL (Wei et al. 2018) datasets, achieving outstanding results by substantial margins.

Related Work
Low-Light Image Enhancement. Low-Light Image Enhancement (LLIE) aims to improve the visibility of images captured under poorly lit conditions.
Traditional LLIE methods, such as histogram equalization (Patel, Maravi, and Sharma 2013) and gamma correction (Cao and Bermak 2011), are simple and computationally efficient. With the advent of deep learning, learning-based LLIE methods have emerged (Wang et al. 2022b; Huang et al. 2022). A series of supervised methods, such as LLNet (Lore, Akintayo, and Sarkar 2017), MBLLEN (Lv et al. 2018), and LPNet (Li et al. 2021), leverage convolutional neural networks to model the complex mapping from low-light images to normal-light images (Yang et al. 2021a) and have shown promising results (Yang et al. 2021b).

Overexposure Correction. High Dynamic Range imaging (Mertens, Kautz, and Van Reeth 2007) and exposure fusion (Liu et al. 2023c,a, 2022a; Li et al. 2023) combine multiple images taken at different exposure levels and can effectively recover details in both underexposed and overexposed regions. However, these methods are frequently constrained by the dearth of information about the overall and local gray distributions within an image. Recently, learning-based methods like SID (Chen et al. 2018) enhance images directly from raw data.

View Synthesis with NeRF
Neural Radiance Fields (NeRF) (Mildenhall et al. 2022; Hong et al. 2022; Wang et al. 2022a) offer a unique approach to generating new views of intricate 3D scenes from limited 2D images. Utilizing a multilayer perceptron (MLP), NeRF directly translates a spatial location x and view direction d into an RGB color c and density σ, where σ(x) represents the differential probability of a ray terminating at a particle at x. In the process of rendering an image with a neural radiance field, a camera ray r(t) = o + t · d is projected from the camera's center of projection o in the direction d. The expected color C(r) of the ray, constrained by near and far bounds t_n and t_f, is typically computed using the volume rendering function. Formally,

C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt,   (1)

where

T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds \right).   (2)

T(t) represents the accumulated transmittance along the ray from t_n to t. The network is fine-tuned by minimizing

\mathcal{L} = \sum_{\mathbf{r} \in \mathcal{R}} \left[ \big\| \hat{C}_c(\mathbf{r}) - C(\mathbf{r}) \big\|_2^2 + \big\| \hat{C}_f(\mathbf{r}) - C(\mathbf{r}) \big\|_2^2 \right],   (3)

where \mathcal{R} is the set of rays in each batch, and C(r), Ĉ_c(r), and Ĉ_f(r) represent the ground-truth, coarse-volume predicted, and fine-volume predicted RGB colors for the ray r, respectively.

Methods
We initiate our discussion by outlining the architecture of the multi-exposure correction process. Subsequently, we provide a detailed explanation of our innovative bilevel optimization model. Building on this foundation, we illustrate how this model is incorporated into the Neural Radiance Fields framework.

Figure 2: Overview of the architecture of our method. The general pipeline of our model is visualized in (a). The detailed optimization process is demonstrated in (b).

Dynamic Multi-Exposure Correction
The exposure evaluator discriminates the severity of the exposure deviation of the input image. The simulator then utilizes gamma mapping to adjust the exposure level. Subsequently, we employ HDR fusion to obtain a well-exposed image.

Exposure Evaluator. The first step in our exposure evaluator is to analyze the histogram of the image. We compute the histogram H(i) of the grayscale image as

H(i) = \sum_{x, y} \left[ \mathrm{gray}(x, y) = i \right] \quad \text{for } i \in [0, 255],   (4)

where H(i) represents the number of pixels at brightness level i, and gray(x, y) is the pixel value at location (x, y) in the grayscale input image X.
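Equation 4 is a standard intensity histogram; a minimal sketch follows, assuming an RGB tensor in [0, 1] and a standard Rec.601 luma conversion (the paper does not specify the grayscale conversion).

```python
import torch

def brightness_histogram(img: torch.Tensor) -> torch.Tensor:
    """Eq. (4): histogram H(i) of the grayscale image, i in [0, 255].

    img: (3, H, W) float tensor in [0, 1]; converted to 8-bit luma first.
    """
    r, g, b = img[0], img[1], img[2]
    gray = (0.299 * r + 0.587 * g + 0.114 * b) * 255.0  # assumed luma weights
    return torch.bincount(gray.round().clamp(0, 255).long().flatten(),
                          minlength=256)

# Usage: a 256-bin count over a toy image.
print(brightness_histogram(torch.rand(3, 32, 32)).shape)  # torch.Size([256])
```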
The peak of the histogram P represents the most common brightness level, and the skewness S indicates the asymmetry of the brightness levels. They are computed as

P = \arg\max_i H(i),   (5)

S = \frac{1}{N} \sum_i \frac{(i - \mu)^3}{\sigma^3},   (6)

where μ is the mean brightness, σ is the standard deviation of brightness, and i ranges over the set {0, 1, ..., 255}.

Gamma correction is a nonlinear operation used to correct the exposure of an image. Based on the histogram's peak and skewness, we dynamically select the gamma value γ as

\gamma = \frac{1 + e^{-(b \cdot P + c \cdot S + d)}}{a},   (7)

where the parameters are carefully chosen to reflect the exposure characteristics of the image. Specifically, parameter a modulates the overall amplitude of the gamma correction, while b and c allow fine-tuning the influence of the histogram's peak and skewness on the gamma value, respectively. Parameter d offers the flexibility to shift the entire gamma-value selection curve upward or downward. In our implementation, we select the weights within specific ranges: a ∼ U(2, 3), b ∼ U(−0.2, −0.1), c ∼ U(0.01, 0.1), and d ∼ U(0.5, 2). This exposure evaluator dynamically selects the gamma value according to the exposure condition of the image and can be adapted to different needs and scenarios by adjusting the parameters.

Dynamic Multi-Exposure Image Simulator. Based on the gamma value γ selected above and the input image X, the simulator employs gamma mapping to emulate images at varying exposure levels. The gamma mapping function M is defined as

M(X, \gamma) = X^{\gamma},   (8)

where γ is the gamma value selected by the exposure evaluator. For each input image X, the exposure evaluator computes two gamma values, γ1 and γ2, based on the exposure condition detected from the histogram analysis (as described in Equation 7). These gamma values are then used to create two images, X1 and X2, with different exposure levels.

Exposure Correction with HDR. HDR image fusion synthesizes a well-exposed image from a set of differently exposed images. The input consists of the original network input image X, denoted as X0; the two differently exposed images generated by the multi-exposure image simulator, X1 and X2 (produced via Equation 8); and the output from the previous epoch Y′, represented as X3. The weights for each input image, signifying the contribution of individual images to the final HDR image, are computed based on their brightness levels:

w_i = \exp\!\left( -\alpha \cdot (X_i - \mu)^2 \right),   (9)

where X_i represents the average brightness of the i-th input image, and μ is the mean brightness across all input images. The parameter α is a tunable factor that controls the distribution of the weights. In our experiments, we set α to 0.8, allowing a balanced contribution from images with varying exposure levels. We then fuse all input images into a single high-dynamic-range image using a weighted sum:

X_{HDR} = \frac{\sum_{i=0}^{n} w_i \cdot X_i}{\sum_{i=0}^{n} w_i},   (10)

where w_i is the weight of the i-th image. The output of the HDR generator is the well-exposed image X_HDR. Our refinement network is designed with a foundation in the U-Net architecture (Ronneberger, Fischer, and Brox 2015). To generate the exposure-corrected image Y, the predicted gamma map γ and the original input image X are used as inputs:

Y = X^{\gamma}.   (11)
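Putting Equations 5-10 together, the evaluator-simulator-fusion pipeline might be sketched as below. The parameter ranges follow those stated above, but the exact grouping in Equation 7 and all function names are our assumptions, not the authors' code.

```python
import torch

def select_gamma(hist: torch.Tensor, a=2.5, b=-0.15, c=0.05, d=1.0) -> float:
    """Eqs. (5)-(7): derive a gamma value from histogram peak and skewness.

    The grouping gamma = (1 + exp(-(bP + cS + d))) / a is an assumption.
    """
    levels = torch.arange(256, dtype=torch.float32)
    n = hist.sum()
    mu = (levels * hist).sum() / n                              # mean brightness
    sigma = (((levels - mu) ** 2 * hist).sum() / n).sqrt()      # std of brightness
    peak = hist.argmax().float()                                # Eq. (5)
    skew = ((((levels - mu) / sigma) ** 3) * hist).sum() / n    # Eq. (6)
    return float((1 + torch.exp(-(b * peak + c * skew + d))) / a)  # Eq. (7)

def hdr_fuse(images, alpha=0.8):
    """Eqs. (9)-(10): brightness-weighted fusion of differently exposed images."""
    means = torch.stack([im.mean() for im in images])
    w = torch.exp(-alpha * (means - means.mean()) ** 2)   # Eq. (9)
    stack = torch.stack(images)                           # (n, C, H, W)
    return (w.view(-1, 1, 1, 1) * stack).sum(0) / w.sum() # Eq. (10)

# Usage on a toy dark image: two gamma-mapped exposures (Eq. (8)), then fusion.
x = torch.rand(3, 64, 64) * 0.2
hist = torch.histc(x.mean(0) * 255, bins=256, min=0, max=255)
g1, g2 = select_gamma(hist, d=0.5), select_gamma(hist, d=2.0)
x_hdr = hdr_fuse([x, x ** g1, x ** g2])  # M(X, gamma) = X^gamma
```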
The model is optimized by minimizing a loss function that measures the discrepancy between the network's predicted image Y and the HDR fusion image X_HDR. The loss function is defined as

\mathcal{L}_{dis} = \frac{1}{W \times H} \sum_{x=0}^{W-1} \sum_{y=0}^{H-1} \left( Y(x, y) - X_{HDR}(x, y) \right)^2,   (12)

where W and H represent the width and height of the images, respectively, Y(x, y) denotes the pixel value at location (x, y) in the network's predicted image, and X_HDR(x, y) is the corresponding pixel value in the HDR image.

NeRF with Bilevel Optimization
We introduce a bilevel optimization model that is designed to flexibly incorporate implicit constraints. The bilevel optimization consists of two interconnected subproblems. The upper subproblem is a fundamental component of Neural Radiance Fields (NeRF), responsible for synthesizing an information-rich novel-view image that encapsulates the essential visual details. The lower subproblem, on the other hand, focuses on adjusting the exposure of the input image. It ensures the preservation of the source image's inherent characteristics, thereby providing a refined feasibility domain for the upper subproblem's solution.

Let the input image, exposure-corrected image, and synthesized novel-view image be of size m × n, denoted as X, Y, Z ∈ R^{m×n}, where m and n correspond to the height and width of the image, respectively. The bilevel optimization problem is formulated as

\min F(X, Y, Z) \quad \text{s.t.} \quad Z \in \arg\min_{Z} \left( A(Z, X),\ B(Z, Y) \right).   (13)

Here, F(·) denotes the data fidelity term, encapsulating the relationship between the synthesized novel-view image Z, the input image X, and the exposure-corrected image Y. The terms A(·) and B(·) act as feasibility constraints for the upper and lower subproblems, representing prior functions oriented towards the intrinsic features of modalities X and Y, respectively. The content consistency constraint A(·) is formally expressed as

A(Z, X) = \sum_{\mathbf{r} \in \mathcal{R}} \left[ \| Z_c(\mathbf{r}) - X(\mathbf{r}) \|_2^2 + \| Z_f(\mathbf{r}) - X(\mathbf{r}) \|_2^2 \right],   (14)

where \mathcal{R} is the set of rays in each batch, and Z_c and Z_f correspond to the coarse and fine renderings, respectively. This constraint ensures that the synthesized image maintains content consistency with the target viewpoint, preserving both geometric and optical properties. The feasibility constraint B(·) is composed of a luminance consistency loss L_lc and a structure loss L_str, weighted by λ1 and λ2, respectively:

B(Z, Y) = \lambda_1 \cdot \mathcal{L}_{lc}(Z, Y) + \lambda_2 \cdot \mathcal{L}_{str}(Z, Y),   (15)

where

\mathcal{L}_{lc}(Z, Y) = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( L(Z(i, j)) - L(Y(i, j)) \right)^2,   (16)

and

\mathcal{L}_{str}(Z, Y) = \sum_{l} \frac{1}{N_l} \left\| \phi_l(Z) - \phi_l(Y) \right\|_2^2,   (17)

where φ_l(·) represents the feature map extracted from the l-th layer of a pre-trained deep neural network (VGG16 in our experiments), N_l is the number of elements in the l-th feature map, and L(·) denotes the luminance value at a pixel location in the image. Since pixels captured under extreme exposure conditions lose some luminance information, L_lc is intended to regularize the luminance of the predicted normal-light images against the generated novel-view images. To maintain structural information in the generated novel-view images, we add a structure loss L_str that captures higher-level visual features, ensuring that the generated image appears natural and realistic to the human visual system.
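The feasibility term B(·) of Equations 15-17 combines a pixel-level luminance loss with a VGG16 feature-space structure loss; a sketch follows. The choice of VGG16 layers and the luma formula for L(·) are our assumptions, since the paper specifies VGG16 but not these details (and, for brevity, ImageNet input normalization is omitted).

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

_VGG = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in _VGG.parameters():
    p.requires_grad = False

def luminance(img: torch.Tensor) -> torch.Tensor:
    """Assumed luminance operator L(.): Rec.601 luma of an RGB tensor."""
    r, g, b = img[:, 0], img[:, 1], img[:, 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def feasibility_b(z, y, layers=(3, 8, 15), lam1=1.0, lam2=1.0):
    """Eq. (15): B(Z, Y) = lam1 * L_lc + lam2 * L_str (illustrative sketch).

    z, y: (B, 3, H, W) tensors in [0, 1]; `layers` is an assumed layer choice.
    """
    loss_lc = F.mse_loss(luminance(z), luminance(y))        # Eq. (16)
    loss_str, feat_z, feat_y = 0.0, z, y
    for i, layer in enumerate(_VGG):
        feat_z, feat_y = layer(feat_z), layer(feat_y)
        if i in layers:                                     # Eq. (17), mean = 1/N_l * sum
            loss_str = loss_str + F.mse_loss(feat_z, feat_y)
    return lam1 * loss_lc + lam2 * loss_str
```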
To fully integrate the information from images with different exposures, we employ the Sequential Average Method (SAM) (Sabach and Shtern 2017). Specifically, the upper and lower subproblems are first solved separately. Then, a balancing parameter α is introduced to harmonize these two subproblems:

Z = \alpha \odot U + (1 - \alpha) \odot L,   (18)

where ⊙ denotes element-wise multiplication, U and L are the solutions to the upper subproblem A(·) and the lower subproblem B(·), respectively, and α is a sequence of real numbers chosen from the interval [0, 1].

Experiments
We commence by detailing our training environment, the parameters employed, and the datasets utilized for our experiments. Subsequently, we exhibit the comparative performance of our approach against methods that amalgamate low-light enhancement with NeRF and those that integrate exposure adjustment with NeRF. Lastly, we present the findings of our ablation study, which effectively underscore the benefits of our proposed method.

Figure 3: Example of the enhancement results on the LOM (Cui et al. 2023) dataset under extreme exposure conditions. Both sRGB and HSV images are compared to show the content consistency. The results are from NeRF, SCI (Ma et al. 2022), LIME (Guo, Li, and Ling 2017), LIESMG (Xu, Wang, and Lu 2023), AHE, LRS (Srinivasan and Balram 2006), and DAConv (Wang et al. 2023), as well as the ground truth.

Figure 4: Comparison of the enhancement results with NeRF, LIME, RetinexNet (Wei et al. 2018), and EnGAN (Jiang et al. 2021) on the LOL dataset (Wei et al. 2018).

Implementation Details
Our approach is evaluated on the LOM dataset (Cui et al. 2023), which encompasses five real-world scenes. Each scene comprises between 25 and 65 pairs of low-light and normal-light images. We additionally construct an overexposed version of the LOM dataset to simulate overexposure conditions. The single-image low-light enhancement experiments are conducted on the LOL dataset (Wei et al. 2018). The training of our network is conducted on an NVIDIA GeForce RTX 3080Ti GPU using the Adam optimizer. The model is trained for 100 epochs with a batch size of 1024, totaling 62,500 iterations. The initial learning rate is 5 × 10−4 and is reduced every 2,500 iterations with a cosine learning-rate decay strategy. In detail, the numbers of coarse and fine samples are set to 64 and 128, respectively.

Generation Quality Assessment
In this section, we showcase the results of multi-view rendering for both low-light and overexposure scenarios, utilizing the LOM and LOL datasets. Our comparison includes the original NeRF and five low-light enhancement methods: HE (Patel, Maravi, and Sharma 2013), LIME (Guo, Li, and Ling 2017), LIESMG (Xu, Wang, and Lu 2023), SCI (Ma et al. 2022), and IAT (Cui et al. 2022). Among these, HE and LIME represent traditional enhancement methods, while IAT, SCI, and LIESMG are recently emerged state-of-the-art network-based 2D enhancement methods. We then compare with the vanilla NeRF and four exposure correction methods: DAConv-based UNet (Wang et al. 2023), AHE, CLAHE, and LRS (Srinivasan and Balram 2006).
2023), AHE, CLAHE, and LRS (Srinivasan and Balram 2006).

Partial results are shown in Fig. 3: the upper part compares against low-light enhancement methods, and the lower part against exposure correction methods. The full comparisons are shown in Table 1. The results presented in Fig. 4 demonstrate the superior performance of our method compared to LIME, RetinexNet (Wei et al. 2018), and EnGAN (Jiang et al. 2021) on the LOL dataset. By using our approach, we effectively restore both color information and texture while preserving the overall structural consistency and content consistency of the images.

Low Exposure (each cell: PSNR/SSIM/LPIPS)
  Method        | "bike"            | "buu"             | "chair"           | "shrub"           | "sofa"
  NeRF          | 6.36/0.072/0.633  | 7.51/0.292/0.443  | 6.04/0.147/0.603  | 8.01/0.028/0.716  | 6.27/0.209/0.557
  HE + NeRF     | 15.29/0.693/0.441 | 15.52/0.781/0.517 | 15.41/0.747/0.554 | 14.74/0.441/0.567 | 17.87/0.811/0.508
  IAT + NeRF    | 13.49/0.607/0.541 | 14.49/0.705/0.401 | 18.79/0.781/0.671 | 13.81/0.286/0.565 | 17.61/0.829/0.545
  LIESMG + NeRF | 18.02/0.708/0.479 | 16.21/0.781/0.392 | 16.86/0.759/0.526 | 14.83/0.281/0.517 | 16.81/0.808/0.565
  LIME + NeRF   | 11.31/0.572/0.471 | 13.91/0.786/0.316 | 11.27/0.677/0.533 | 13.88/0.357/0.521 | 12.21/0.755/0.445
  SCI + NeRF    | 13.56/0.651/0.459 | 7.78/0.693/0.528  | 11.71/0.741/0.595 | 17.63/0.441/0.523 | 10.08/0.765/0.518
  Ours          | 18.14/0.732/0.437 | 19.89/0.854/0.312 | 17.05/0.751/0.381 | 15.23/0.462/0.518 | 17.93/0.847/0.378

High Exposure (each cell: PSNR/SSIM/LPIPS)
  Method            | "bike"            | "buu"             | "chair"           | "shrub"           | "sofa"
  NeRF              | 5.61/0.501/0.725  | 5.54/0.603/0.715  | 6.11/0.592/0.713  | 4.14/0.092/0.753  | 6.26/0.673/0.694
  DAConvUNet + NeRF | 12.62/0.641/0.449 | 12.59/0.606/0.611 | 13.23/0.627/0.607 | 11.31/0.399/0.601 | 13.27/0.714/0.587
  LRS + NeRF        | 8.44/0.573/0.541  | 7.67/0.654/0.655  | 8.82/0.659/0.651  | 6.03/0.211/0.714  | 8.45/0.667/0.621
  CLAHE + NeRF      | 7.41/0.573/0.596  | 7.64/0.662/0.592  | 8.37/0.652/0.602  | 8.42/0.287/0.616  | 7.71/0.711/0.634
  AHE + NeRF        | 10.94/0.552/0.468 | 9.69/0.395/0.674  | 11.77/0.499/0.606 | 11.21/0.399/0.604 | 9.76/0.603/0.605
  Ours              | 20.79/0.761/0.432 | 17.29/0.865/0.304 | 24.88/0.846/0.376 | 16.87/0.404/0.534 | 23.72/0.894/0.395

Table 1: Quantitative comparison of various methods on the LOM dataset under low and high exposure conditions.

Figure 5: Comparison of enhancement results using different methods on the LOM dataset.

Figure 6: Example of using solely HDR images in the optimization process (row one) and combined use of both multi-exposure inputs and HDR images (row two).

We also evaluated various enhancement techniques as part of the post-processing stage. This involved rendering multi-exposure scenes using NeRF, followed by the application of 2D enhancement methods to post-process these novel views, a process we refer to as "NeRF + *". Conversely, the notation "* + NeRF" indicates the use of various pre-processing methods, followed by the synthesis of the novel view utilizing NeRF. As demonstrated in Fig. 5, our method significantly outperforms these techniques, showcasing remarkable performance improvements.

Ablation Study

Exposure Evaluator Analysis. As shown in Fig. 8 and Table 2, we first analyze the influence of our exposure evaluator by replacing the gamma γ (introduced in Equation
7) with randomly selected values, and by replacing the HDR fusion with evaluator-selected gamma mapping. Using our proposed exposure evaluator with HDR fusion significantly improves the overall performance. Either using HDR fusion without the proposed evaluator, or employing gamma mapping without incorporating multi-exposure image HDR fusion, adversely affects the performance.

Figure 8: Visual comparison of our method (d) in contrast to the vanilla NeRF (a), our framework utilizing randomly selected gamma-generated HDR (b), and our framework employing gamma mapping without the integration of multi-exposure image HDR fusion (c).

  Model | HDR | Eval | "bike"            | "buu"             | "chair"           | "shrub"           | "sofa"
  M1    | ✗   | ✗    | 5.99/0.287/0.679  | 6.53/0.448/0.579  | 6.08/0.370/0.658  | 6.08/0.061/0.735  | 6.27/0.441/0.626
  M2    | ✗   | ✓    | 10.02/0.532/0.582 | 11.01/0.638/0.455 | 11.36/0.584/0.517 | 10.41/0.283/0.685 | 12.33/0.601/0.472
  M3    | ✓   | ✗    | 16.37/0.659/0.477 | 15.36/0.729/0.396 | 16.23/0.672/0.436 | 13.23/0.372/0.544 | 17.61/0.732/0.395
  M4    | ✓   | ✓    | 19.47/0.747/0.435 | 18.59/0.860/0.308 | 20.97/0.799/0.379 | 16.05/0.433/0.526 | 20.83/0.871/0.387

Table 2: Ablation study result on the LOM dataset (each cell: PSNR/SSIM/LPIPS). Here "HDR" denotes using HDR in the optimization process, and "Eval" means using the exposure evaluator for gamma mapping. The presented data represent the average across both low-light and overexposure conditions.

Experiments on Different Inputs. We further demonstrate the importance of using multi-exposure images as input by comparing our framework, which utilizes both multi-exposure and HDR fusion images, to an alternative approach where our framework relies solely on HDR images, as illustrated in Fig. 6. The results show that the utilization of multi-exposure input is crucial for preserving content and structure consistency.

Bilevel Optimization or Joint Training. Fig. 7 showcases the enhancements achieved using our bilevel optimization approach relative to direct joint training with the NeRF network. Our strategy not only facilitates the synthesis of novel view images but also sustains superior image quality under severe exposure conditions, as detailed in Table 3. It effectively preserves and models both proximity and texture information.

Figure 7: Comparison between the application (column 3) and non-application (column 2) of the proposed bilevel optimization under both low and high exposure conditions.

  Method  | Exp. | "buu"             | "chair"
  Direct  | Low  | 17.11/0.723/0.433 | 15.32/0.677/0.479
  Direct  | High | 15.74/0.654/0.479 | 20.93/0.721/0.441
  Bilevel | Low  | 19.89/0.854/0.312 | 17.05/0.751/0.381
  Bilevel | High | 17.29/0.865/0.304 | 24.88/0.846/0.376

Table 3: Results of the ablation study for the bilevel optimization model on the LOM dataset under both low and high exposure conditions (each cell: PSNR/SSIM/LPIPS).

Conclusion

We have introduced an innovative self-enhancement network for novel view synthesis that is robust to a variety of severe extreme lighting conditions, including under-exposure and over-exposure. In particular, we have developed a bilevel optimization method that enables the concurrent optimization of multi-exposure correction and the synthesis of new perspectives within NeRF in an unsupervised manner. The results demonstrate that compared to existing methods, our approach significantly improves the task of novel view synthesis for multi-exposure images, achieving state-of-the-art performance.
These results justify the importance of our network in improving machine perception and visual understanding.

Acknowledgments
This work was partially supported by China Postdoctoral Science Foundation (2023M730741), and the National Natural Science Foundation of China (No. 62302078).

References
Cao, Y.; and Bermak, A. 2011. An analog gamma correction method for high dynamic range applications. In Proceedings of the IEEE International SOC Conference, 318–322.
Chen, C.; Chen, Q.; Xu, J.; and Koltun, V. 2018. Learning to See in the Dark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3291–3300.
Cui, Z.; Gu, L.; Sun, X.; Qiao, Y.; and Harada, T. 2023. Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields. arXiv preprint arXiv:2303.05807.
Cui, Z.; Li, K.; Gu, L.; Su, S.; Gao, P.; Jiang, Z.; Qiao, Y.; and Harada, T. 2022. You Only Need 90K Parameters to Adapt Light: a Light Weight Transformer for Image Enhancement and Exposure Correction. In Proceedings of the British Machine Vision Conference, 238.
Guo, X.; Li, Y.; and Ling, H. 2017. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Transactions on Image Processing, 26(2): 982–993.
Han, D.; Li, L.; Guo, X.; and Ma, J. 2022. Multi-exposure image fusion via deep perceptual enhancement. Information Fusion, 79: 248–262.
Hong, Y.; Peng, B.; Xiao, H.; Liu, L.; and Zhang, J. 2022. Headnerf: A real-time nerf-based parametric head model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20374–20384.
Huang, H.; Yang, W.; Hu, Y.; Liu, J.; and Duan, L.-Y. 2022. Towards low light enhancement with raw images. IEEE Transactions on Image Processing, 31: 1391–1405.
Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; and Wang, Z. 2021. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Transactions on Image Processing, 30: 2340–2349.
Le, P.-H.; Le, Q.; Nguyen, R.; and Hua, B.-S. 2023. Single-image HDR reconstruction by multi-exposure generation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 4063–4072.
Li, G.; Liu, J.; Ma, L.; Jiang, Z.; Fan, X.; and Liu, R. 2023. Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer for Exposure Correction. In Proceedings of the 31st ACM International Conference on Multimedia, 7304–7313.
Li, J.; Gu, B.; and Huang, H. 2022. A fully single loop algorithm for bilevel optimization without hessian inverse. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 7426–7434.
Li, J.; Li, J.; Fang, F.; Li, F.; and Zhang, G. 2021. Luminance-Aware Pyramid Network for Low-Light Image Enhancement. IEEE Transactions on Multimedia, 23: 3153–3165.
Liu, J.; Shang, J.; Liu, R.; and Fan, X. 2022a. Attention-guided global-local adversarial learning for detail-preserving multi-exposure image fusion. IEEE Transactions on Circuits and Systems for Video Technology, 32(8): 5026–5040.
Liu, J.; Wu, G.; Luan, J.; Jiang, Z.; Liu, R.; and Fan, X. 2023a. HoLoCo: Holistic and local contrastive learning network for multi-exposure image fusion. Information Fusion, 95: 237–249.
Liu, R.; Ma, L.; Ma, T.; Fan, X.; and Luo, Z. 2022b. Learning with nested scene modeling and cooperative architecture search for low-light vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5): 5953–5969.
Liu, Y.; Liu, Z.; Ma, L.; Liu, J.; Fan, X.; Luo, Z.; and Liu, R. 2023b. Bilevel Generative Learning for Low-Light Vision. In Proceedings of the 31st ACM International Conference on Multimedia, 7758–7766.
Liu, Z.; Liu, J.; Wu, G.; Fan, X.; and Liu, R. 2023c. Embracing Compact and Robust Architectures for Multi-Exposure Image Fusion. arXiv preprint arXiv:2305.12236.
Lore, K. G.; Akintayo, A.; and Sarkar, S. 2017. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61: 127–141.
Lv, F.; Lu, F.; Wu, J.; and Lim, C. 2018. MBLLEN: Low-Light Image/Video Enhancement Using CNNs. In Proceedings of the British Machine Vision Conference, volume 220, 4.
Ma, L.; Jin, D.; An, N.; Liu, J.; Fan, X.; Luo, Z.; and Liu, R. 2023. Bilevel fast scene adaptation for low-light image enhancement. International Journal of Computer Vision, 1–19.
Ma, L.; Ma, T.; Liu, R.; Fan, X.; and Luo, Z. 2022. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5637–5646.
Mertens, T.; Kautz, J.; and Van Reeth, F. 2007. Exposure Fusion. In Proceedings of the Pacific Conference on Computer Graphics and Applications, 382–390.
Mildenhall, B.; Hedman, P.; Martin-Brualla, R.; Srinivasan, P. P.; and Barron, J. T. 2022. Nerf in the dark: High dynamic range view synthesis from noisy raw images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16190–16199.
Mildenhall, B.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; Ng, R.; and Martin-Brualla, R. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In Proceedings of the European Conference on Computer Vision, 405–421.
Nguyen, H.; Tran, D.; Nguyen, K.; and Nguyen, R. 2023. PSENet: Progressive Self-Enhancement Network for Unsupervised Extreme-Light Image Enhancement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1756–1765.
Patel, O.; Maravi, Y. P.; and Sharma, S. 2013. A comparative study of histogram equalization based image enhancement techniques for brightness preservation and contrast enhancement. arXiv preprint arXiv:1311.4033.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention, 234–241.
Sabach, S.; and Shtern, S. 2017. A First Order Method for Solving Convex Bi-Level Optimization Problems. arXiv preprint arXiv:1702.03999.
Srinivasan, P. P.; Deng, B.; Zhang, X.; Tancik, M.; Mildenhall, B.; and Barron, J. T. 2021. NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7495–7504.
Srinivasan, S.; and Balram, N. 2006. Adaptive contrast enhancement using local region stretching. In Proceedings of the 9th Asian Symposium on Information Display, 152–155.
Wang, C.; Chai, M.; He, M.; Chen, D.; and Liao, J. 2022a. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3835–3844.
Wang, W.; Xu, Z.; Huang, H.; and Liu, J. 2022b. Self-aligned concave curve: Illumination enhancement for unsupervised adaptation. In Proceedings of the 30th ACM International Conference on Multimedia, 2617–2626.
Wang, Y.; Peng, L.; Li, L.; Cao, Y.; and Zha, Z.-J. 2023. Decoupling-and-Aggregating for Image Exposure Correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18115–18124.
Wei, C.; Wang, W.; Yang, W.; and Liu, J. 2018. Deep retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560.
Wu, K.; Chen, J.; and Ma, J. 2022. DMEF: Multi-exposure image fusion based on a novel deep decomposition method. IEEE Transactions on Multimedia.
Wu, K.; Chen, J.; Yu, Y.; and Ma, J. 2022. ACE-MEF: adaptive clarity evaluation-guided network with illumination correction for multi-exposure image fusion. IEEE Transactions on Multimedia.
Xu, H.; Haochen, L.; and Ma, J. 2023. Unsupervised multi-exposure image fusion breaking exposure limits via contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 3010–3017.
Xu, X.; Wang, R.; and Lu, J. 2023. Low-Light Image Enhancement via Structure Modeling and Guidance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9893–9903.
Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; and Liu, J. 2021a. Band representation-based semi-supervised low-light image enhancement: Bridging the gap between signal fidelity and perceptual quality. IEEE Transactions on Image Processing, 30: 3461–3473.
Yang, W.; Wang, W.; Huang, H.; Wang, S.; and Liu, J. 2021b. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Transactions on Image Processing, 30: 2072–2086.
Zhang, H.; and Ma, J. 2023. IID-MEF: A multi-exposure fusion network based on intrinsic image decomposition. Information Fusion, 95: 326–340.
Zhang, J.; Luo, Y.; Huang, J.; Liu, Y.; and Ma, J. 2023. Multi-exposure image fusion via perception enhanced structural patch decomposition. Information Fusion, 101895.
Improved MLP Point Cloud Processing with High-Dimensional Positional Encoding

Yanmei Zou1, Hongshan Yu1*, Zhengeng Yang2*, Zechuan Li1, Naveed Akhtar3
1 College of Electrical and Information Engineering, Quanzhou Innovation Institute, Hunan University, Changsha, China
2 College of Engineering and Design, Hunan Normal University, Changsha, China
3 School of Computing and Information Systems, The University of Melbourne, 3052 Victoria, Australia
[email protected], [email protected], [email protected], [email protected], [email protected]

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Multi-Layer Perceptron (MLP) models are the bedrock of contemporary point cloud processing. However, their complex network architectures obscure the source of their strength. We first develop an "abstraction and refinement" (ABS-REF) view for the neural modeling of point clouds. This view elucidates that whereas the early models focused on the ABS stage, the more recent techniques devise sophisticated REF stages to attain a performance advantage in point cloud processing. We then borrow the concept of "positional encoding" from the transformer literature, and propose a High-dimensional Positional Encoding (HPE) module, which can be readily deployed to MLP-based architectures. We leverage our module to develop a suite of HPENets, which are MLP networks that follow the ABS-REF paradigm, albeit with a sophisticated HPE-based REF stage. The developed technique is extensively evaluated for 3D object classification, object part segmentation, semantic segmentation, and object detection. We establish new state-of-the-art results of 87.6 mAcc on ScanObjectNN for object classification, 85.5 class mIoU on ShapeNetPart for object part segmentation, and 72.7 and 78.7 mIoU on Area-5 and 6-fold experiments with S3DIS for semantic segmentation. The source code for this work is available at https://github.com/zouyanmei/HPENet.

Introduction

The increasing popularity of 3D sensors is currently fueling a wide use of 3D point clouds in numerous application domains, such as autonomous driving (Zheng et al. 2021; Shi et al. 2022), robotics (Li et al. 2022) and geological surveying (Kong, Wu, and Saroglou 2020). Unlike digital images with regular 2D grid structures, 3D points in a typical point cloud are irregularly located in 3D space. This intrinsic irregularity causes considerable challenges in processing point clouds with neural networks.

Existing neural network based point cloud processing methods can be categorized into two broad categories: voxel based (Huang and You 2016; Choy, Gwak, and Savarese 2019) and point based methods (Zhao et al. 2021; Qian et al. 2022; Qi et al. 2017a). The former discretize the underlying 3D space into volumetric units before processing the point cloud. This generally helps in making the methods computationally efficient. However, the discretization process also results in a noticeable loss of fine-grained geometric information. The seminal work of PointNet (Qi et al. 2017a) originally demonstrated the possibility of directly processing point clouds with Multi-Layer Perceptron (MLP) based neural models. Since PointNet, numerous point based methods have surfaced, e.g., PointNet++ (Qi et al. 2017b), PointConv (Wu, Qi, and Fuxin 2019), PointNeXt (Qian et al. 2022). A key attribute of such methods is that they employ sophisticated local feature aggregation schemes to encode strong representations of the point clouds.
For instance, PointNet++ uses a hierarchical network structure for that purpose, whereas PointConv employs a density-aware discrete convolution for high-quality local feature aggregation. The more recent PointNeXt proposes an inverted residual bottleneck module to improve PointNet++ scalability.

In recent years, the success of transformers in the natural language processing (Vaswani et al. 2017; Devlin et al. 2018) and computer vision domains (Dosovitskiy et al. 2020; Liu et al. 2021a) has also motivated transformer based neural models for directly processing 3D point clouds. To that end, Point Transformer (Zhao et al. 2021) and other recent methods, e.g., (Lai et al. 2022; Zhang et al. 2022), use transformer architectures for an even more sophisticated feature aggregation. These efforts are emerging in parallel to the MLP networks for point clouds (Choe et al. 2022; Ma et al. 2022; Qian et al. 2022). One of the intended contributions of this paper is to show that the key feature extraction modules used by the conventional MLP based methods and the emerging transformer based techniques essentially follow the same two-stage "abstraction and refinement" (ABS-REF) paradigm. We discuss this unified view of the latest techniques in detail in the Proposed Method Section. Under our ABS-REF perspective, it becomes clear that whereas the early works, e.g., PointNet++ (Qi et al. 2017b) and PointConv (Wu, Qi, and Fuxin 2019), employ sophisticated local feature aggregation strategies at the ABS stage, they generally lack the REF stage. In comparison, the success of the more recent techniques can be attributed to the REF stage, which enables an increased receptive field of the network and a greater extent of context information considerations. These factors are crucial for discriminative feature learning, which leads to better performance.

Figure 1: HPENet architecture for semantic segmentation. The network delineates between Abstraction (ABS) and Refinement (REF) stages of feature extraction, and uses the proposed High-dimensional Positional Encoding (HPE) module in both stages.

Positional information is the key intrinsic property of point clouds. However, point based methods often treat the point positions as added information by concatenating relative point positions with other features, e.g., PointNet++ (Qi et al. 2017b). Though useful, this strategy falls short of giving the point positional information its due attention. Fortunately, the notion of positional encoding, which originated in the transformer literature (Vaswani et al. 2017), potentially provides an algorithmic solution to this problem by enabling positional information embedding in a feature space. Inspired by this, we propose positional encoding for MLP based point cloud modeling, thereby allowing explicit incorporation of the positional information along with the relative local point relations in the models. Indeed, we can find existing instances of leveraging positional encoding in point based models. However, those approaches are either transformer (not MLP) based (Zhao et al. 2021) or they use non-learnable encodings, which are not adaptive, e.g., Position Pooling in (Liu et al. 2020). Our technique enables exploiting adaptive positional encoding in MLP based architectures. Due to a low-dimensional representation, the relative geometric relationships in a point cloud are often not sufficiently encoded by point coordinates for the modeling purpose.
Hence, we enrich the geometric relationship representation by first projecting the point coordinates onto a high-dimensional space. We allow this in both data-driven and parameter-free manners. The enrichment is followed by an MLP to align the high-dimensional vectors to their corresponding feature space. This process is packed in a High-dimensional Positional Encoding (HPE) module. This module is used to devise our HPENets, see Fig. 1. Our key contributions are summarised as follows.

• We identify a unified "abstraction and refinement" paradigm underpinning the current high-performing point cloud modeling techniques, which allows an intuitive delineation of the key strengths of the methods.
• We propose a High-dimensional Positional Encoding (HPE) scheme for effective point cloud geometric representation with positional information. The HPE scheme can be generically used to enhance MLP architectures.
• We propose HPENets, which are ABS-REF stage inspired MLP networks that leverage our HPE modules in both ABS and REF stages.
• With an extensive evaluation of our technique, we establish state-of-the-art (SOTA) results[1] of 87.6 mAcc on ScanObjectNN for object classification, 85.5 class mIoU on ShapeNetPart for object part segmentation, and 72.7 and 78.7 mIoU on Area-5 and 6-fold experiments with S3DIS for semantic segmentation.

[1] Our claim is limited to the techniques that, similar to our approach, do not benefit from pre-training, voting or ensembling.

Related Work

Due to the intrinsic limitations of voxel based methods (Choy, Gwak, and Savarese 2019; Thomas et al. 2019), point based methods (Qi et al. 2017a; Wu, Qi, and Fuxin 2019; Hu et al. 2020) have attracted considerable attention of the research community in recent years for point cloud processing. Existing point based methods can be broadly categorized into four groups, namely, MLP based (Tolstikhin et al. 2021; Lian et al. 2021; Tang et al. 2022; Wang et al. 2022), convolution based (Engelmann, Kontogianni, and Leibe 2020; Xu et al. 2021), attention based (Zhao et al. 2021; Lai et al. 2022) and graph based (Shen et al. 2018; Wang et al. 2019) methods. Key contributions along these categories are discussed below.

MLP based methods apply MLPs to extract point-wise features and then use a symmetric operation such as max-pooling or average-pooling on the point groups to obtain high-level features. After the pioneering work of PointNet (Qi et al. 2017a), numerous MLP-based techniques have emerged. Most of them focus on devising sophisticated modules to extract the local geometric structure (Qi et al. 2017b). Inspired by the widely used SIFT descriptor (Lowe 2004), PointSIFT (Jiang et al. 2018) develops a 3D SIFT descriptor that considers eight crucial orientations and scales for local scale-invariant feature transform. To improve the generalisation and performance of MLP-based networks, PointMLP (Ma et al. 2022) proposes a local geometric affine module to transform point features in local regions adaptively. More recently, PointNeXt (Qian et al. 2022) proposes an inverted residual MLP module for improved scalability.

Convolution based methods focus on designing a local convolution kernel suitable for point cloud processing. For instance, PointConv (Wu, Qi, and Fuxin 2019) proposes a density-aware discrete convolution kernel which comprises weight and density functions, whereas KPConv (Thomas et al.
2019) presents a kernel point convolution which uses any number of kernel points to process various point clouds. Engelmann, Kontogianni, and Leibe (2020) employ a dilated point convolution to increase the receptive field size of point convolutional networks. Lei, Akhtar, and Mian (2019) proposed a spherical kernel that uses an Octree-guided CNN for point cloud processing. Their method is further enhanced in (Lei, Akhtar, and Mian 2020) for graph convolution.

Attention based methods exploit attention mechanisms to model long-range dependency between point pairs in a set. These methods are mainly inspired by the attention mechanism (Vaswani et al. 2017), which was first introduced in natural language processing. In order to efficiently process large-scale point clouds, RandLA-Net (Hu et al. 2020) uses random point sampling to guarantee efficiency and attention-based local feature aggregation for better performance. Both in natural language processing (Vaswani et al. 2017; Devlin et al. 2018) and computer vision domains (Dosovitskiy et al. 2020; Liu et al. 2021a), the attention mechanism is currently causing a paradigm shift. Since Point Transformer (Zhao et al. 2021), point cloud processing has also started to benefit from this mechanism considerably using the transformer architectures (Park et al. 2022; Lai et al. 2022; Yu et al. 2022; Guo et al. 2021).

Graph based methods employ a graph structure to extract features, which generally treat points as nodes and feature relations as edges. Landrieu and Simonovsky (2018) proposed the superpoint graph to deal with large-scale 3D semantic segmentation tasks. CurveNet (Xiang et al. 2021) pays attention to graph structure and employs a shape descriptor, termed "curves", using guided walks in point clouds. Other examples of graph-based methods also include (Lei, Akhtar, and Mian 2020).

Proposed Method

Image processing models are currently experiencing a paradigm shift at the hands of transformers (Dosovitskiy et al. 2020; Liu et al. 2021a). Following suit, many recent works are directly importing the transformer architectures to point cloud modeling (Zhao et al. 2021; Lai et al. 2022). However, point cloud data has its peculiar nature. We envisage that a more systematic delineation of the strengths of the existing point cloud techniques can better guide the adoption of relevant concepts of transformers in the point cloud domain. Hence, we first provide a new perspective on the existing point-based methods, and then propose a high-dimensional positional encoding enhancement for MLP-based methods. Below, we first briefly introduce the mathematical notions used in the remaining paper.

A point cloud with N points can be considered as comprising two sets of distinct elements, namely the point set \mathcal{P} = \{ p_m \in \mathbb{R}^{1 \times 3} \}_{m=1}^{N} and the feature set \mathcal{F} = \{ f_m \in \mathbb{R}^{1 \times c} \}_{m=1}^{N}, where p_m is the position of the m-th point and f_m is the corresponding feature with c channels. In a typical neural model, after a sampling layer, a smaller point cloud is generated with N^{l+1} points, such that N^{l+1} < N^{l}. Here, l is the index of the sampling layer. By using a grouping operation to group k points neighboring a sampled point in a local region, we get grouped point sets \mathcal{K} = \{ k_m \in \mathbb{R}^{k \times 3} \}_{m=1}^{N} and the corresponding feature sets \mathcal{D} = \{ d_m \in \mathbb{R}^{k \times c} \}_{m=1}^{N}.

Abstraction and Refinement View

In Fig. 2, we illustrate the two-stage "abstraction and refinement" (ABS-REF) view of the major existing techniques and the proposed technique.
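To ground this notation, below is a minimal NumPy sketch of the two primitives that recur throughout this section, farthest point sampling and KNN grouping; the helper names and implementation details are ours, not the paper's.

```python
import numpy as np

def farthest_point_sampling(P, n_samples):
    """Select n_samples centroids from P (N, 3) by repeatedly picking the
    point farthest from the already-selected set (the FPS operation)."""
    N = P.shape[0]
    selected = [np.random.randint(N)]
    dist = np.full(N, np.inf)
    for _ in range(n_samples - 1):
        # Distance of every point to the nearest selected centroid so far.
        dist = np.minimum(dist, np.linalg.norm(P - P[selected[-1]], axis=1))
        selected.append(int(dist.argmax()))
    return P[selected]

def knn_group(centroids, P, F, k):
    """For each centroid, gather the coordinates and features of its k
    nearest neighbors, yielding grouped sets K (M, k, 3) and D (M, k, c)."""
    d = np.linalg.norm(centroids[:, None, :] - P[None, :, :], axis=-1)  # (M, N)
    idx = np.argsort(d, axis=1)[:, :k]
    return P[idx], F[idx]
```

A production implementation would batch these on GPU, but the semantics match the FPS and KNN operations formalized in Eqs. (1)-(2) below.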
This view is largely inspired by the intuitions behind the subsampling and convolution blocks in the image processing domain. We find that the point cloud literature currently lacks a clear delineation between the adopted abstraction and refinement processes, which hinders the development of effective techniques.

Abstraction (ABS) stage: Analogous to the subsampling operation performed in image processing networks, we can identify an abstraction (ABS) stage for the point cloud networks. Effectively, this stage abstracts features from the input point cloud and produces a new point cloud with fewer points. The stage can be composed of multiple operations, including a sampling operation (Eq. 1), a grouping operation (Eq. 2), and an intra-set feature aggregation operation (Eq. 3). Commonly, the sampling operation selects a new point set with fewer elements using Farthest Point Sampling (FPS), which leverages the centroids of local regions for subsampling. The grouping operation generally selects neighboring points around the centroids to define local region sets using, e.g., k-Nearest Neighbors (KNNs). Since the aggregation operation in the ABS stage abstracts local context information from a set to the corresponding centroid, we call it an intra-set operation. Concretely, given a point set P^l and its corresponding feature set F^l, we get the point set P^{l+1}, grouped point sets K^{l+1}_{ABS}, and feature sets D^{l+1}_{ABS} after the sampling and grouping operations. We use the subscript ABS to emphasize the ABStraction stage. In this stage, the intra-set feature aggregation operation h_{ABS} encodes local region patterns into the feature vectors and aggregates local context information intra-set. Overall, the abstraction stage can be mathematically expressed as

P^{l+1} = \mathrm{FPS}(P^{l}), \quad p_m^{l+1} \in P^{l+1}, \qquad (1)

D_{ABS}^{l+1}(p_m^{l+1}),\; K_{ABS}^{l+1}(p_m^{l+1}) = \mathrm{KNN}(p_m^{l+1}, P^{l}, F^{l}), \qquad (2)

f_m^{l+1} = h_{ABS}\left( D_{ABS}^{l+1}(p_m^{l+1}),\; K_{ABS}^{l+1}(p_m^{l+1}) \right), \qquad (3)

where D_{ABS}^{l+1}(p_m^{l+1}) and K_{ABS}^{l+1}(p_m^{l+1}) are the neighbor feature and point sets of the centroid p_m^{l+1}, respectively.

Refinement (REF) stage: Inspired by the underlying objective of the convolution block in image processing networks, we can identify a refinement (REF) stage in point cloud networks.

Figure 2: "Abstraction and Refinement" (ABS-REF) perspective. Left: The proposed ABS-REF view of point cloud models is analogous to the subsampling and convolution block view in image models. The shown "ABS-REF" column expands the abstraction and refinement stages. Right: Representative instantiations of the ABS-REF framework. Whereas early methods, e.g., PointNet++ and PointConv, ignore the REF stage, more recent techniques, e.g., Point Transformer, achieve higher performance by accounting for the REF stage in point cloud models. Abbreviations include SOP: Symmetric OPeration, OP: aggregation OPeration, PT: Point Transformer, HPE: proposed High-dimensional Positional Encoding.

This stage aims to refine the centroid features by gathering local context information. Specifically, the REF stage further processes the point set P^{l+1} and feature set F^{l+1}_{ABS} generated by the ABS stage. In Fig. 2 (left), we illustrate a simplified architecture of the refinement stage in the adopted "ABS-REF" view of the techniques. In the refinement stage, a grouping operation (Eq. 4) is first used to group the local sets in the centroid point cloud.
Later, an inter-set feature aggregation operation h_{REF} is employed to extract and aggregate the inter-set context information. Mathematically, the REF stage can be expressed as

D_{REF}^{l+1}(p_m^{l+1}),\; K_{REF}^{l+1}(p_m^{l+1}) = \mathrm{KNN}(p_m^{l+1}, P^{l+1}, F_{ABS}^{l+1}), \qquad (4)

f_m^{l+1} = h_{REF}\left( D_{REF}^{l+1}(p_m^{l+1}),\; K_{REF}^{l+1}(p_m^{l+1}) \right), \qquad (5)

where D_{REF}^{l+1}(p_m^{l+1}) and K_{REF}^{l+1}(p_m^{l+1}) are the neighbor feature set and point set of the centroid p_m^{l+1}, respectively.

The benefits of the joint application of ABS and REF stages in a network are two-fold. First, the effective receptive field of the network gains from the REF stage. Ideally, a centroid's receptive field is k_{ABS} in the ABS stage, while it is k_{ABS} × k_{REF} in the REF stage, where k_{ABS} and k_{REF} are the numbers of neighbor points of a set in the ABS and REF stages. Second, the REF stage helps improve scalability by increasing the network depth through stacked REF stages, similar to stacking convolutional blocks for images.

Instantiation of the ABS-REF framework: To exemplify the systematic understanding of point cloud models under our ABS-REF perspective, we provide representative examples in Fig. 2 (right). It can be seen that PointNet++ (Qi et al. 2017b) and PointConv (Wu, Qi, and Fuxin 2019) only have the ABS stage. Although the two models use different intra-set operations for local feature aggregation, both are single stage models under our perspective. PointNet++ employs MLPs, while PointConv uses the density-aware discrete convolution. Nevertheless, both models are essentially void of the REF stage. More recently, Point Transformer (Zhao et al. 2021) has reported impressive results. Incidentally, we can easily identify an additional REF stage in Point Transformer. In what follows, we first develop High-dimensional Positional Encoding (HPE), which is beneficial for both ABS and REF stages. Thereafter, we leverage HPE to develop HPENets, which are a conveniently designed suite of networks for MLP based point cloud processing. Particularly unique to our models is the inter-set OPeration sub-stage in the REF component, which also distinguishes our technique from the transformer based methods that employ a REF stage, e.g., Point Transformer (Zhao et al. 2021).

High-Dimensional Positional Encoding

Positional information is the most important feature of point clouds. It encodes robust geometric details of a scene. Hence, we propose to leverage it fully in both ABS and REF stages of point cloud modeling using explicit positional encoding (PE). The notion of PE originated in the transformer literature (Vaswani et al. 2017). In the point cloud context, PE can encode a point coordinate p_m = [p_m^x, p_m^y, p_m^z] \in \mathbb{R}^{1 \times 3} into the space of the corresponding feature f_m \in \mathbb{R}^{1 \times c} to embed geometric information. For a transformer based neural architecture for 3D modeling, sinusoidal PE (PE_SIN) and learnable PE (PE_MLP) can be formulated as below.

\mathrm{PE}_{SIN} \begin{cases} (p_m, 6i+0) = \sin\left( 100\, p_m^x / 1000^{6i/c} \right) \\ (p_m, 6i+1) = \cos\left( 100\, p_m^x / 1000^{6i/c} \right) \\ (p_m, 6i+2) = \sin\left( 100\, p_m^y / 1000^{6i/c} \right) \\ (p_m, 6i+3) = \cos\left( 100\, p_m^y / 1000^{6i/c} \right) \\ (p_m, 6i+4) = \sin\left( 100\, p_m^z / 1000^{6i/c} \right) \\ (p_m, 6i+5) = \cos\left( 100\, p_m^z / 1000^{6i/c} \right) \end{cases} \qquad (6)

\mathrm{PE}_{MLP}(p_m) = \theta_{3,c}\left( \mathrm{Norm}\left( \delta_{3,3}(p_m) \right) \right). \qquad (7)

In the above equations, i = c/6 is the index of the subgroup PE vector. The θ and δ denote MLP-based transformations, with subscripts denoting the channel dimensions of their input and output. Norm denotes the normalization, e.g., batch/layer normalization for restricting the PE to [0, 1].
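For reference, a minimal PyTorch sketch of these two baseline encodings follows; the channel ordering in pe_sin and the layer choices inside PEMLP (Linear plus BatchNorm) are our assumptions for illustration.

```python
import torch
import torch.nn as nn

def pe_sin(p, c):
    """Sinusoidal encoding of Eq. (6): maps (N, 3) coordinates to a
    (N, (c // 6) * 6) code using sin/cos of x, y, z at c // 6 frequencies.
    The interleaving order of channels here is illustrative."""
    n_freq = c // 6
    i = torch.arange(n_freq, dtype=p.dtype, device=p.device)       # (n_freq,)
    scale = 100.0 / (1000.0 ** (6 * i / c))                        # (n_freq,)
    ang = p[:, None, :] * scale[None, :, None]                     # (N, n_freq, 3)
    enc = torch.stack([torch.sin(ang), torch.cos(ang)], dim=-1)    # (N, n_freq, 3, 2)
    return enc.flatten(1)                                          # (N, n_freq * 6)

class PEMLP(nn.Module):
    """Learnable encoding of Eq. (7): delta_{3,3} -> Norm -> theta_{3,c}."""
    def __init__(self, c):
        super().__init__()
        self.delta = nn.Linear(3, 3)     # delta_{3,3}
        self.norm = nn.BatchNorm1d(3)    # Norm (assumed batch normalization)
        self.theta = nn.Linear(3, c)     # theta_{3,c}

    def forward(self, p):                # p: (N, 3)
        return self.theta(self.norm(self.delta(p)))
```

Under this sketch, the HPE variants introduced next would simply append an aligning MLP θ, e.g., nn.Linear((c // 6) * 6, c) applied to the output of pe_sin, to realize Eq. (8).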
The sine and cosine functions in PE_SIN inherently restrict the values to [-1, 1]. Though potentially useful, both PE_SIN and PE_MLP provide low-dimensional encodings, which is inadequate to effectively capture the complex geometric relations among the points of unstructured point clouds. Moreover, PE_SIN is not adaptive. To overcome this inadequacy, we propose a High-dimensional Positional Encoding (HPE) module. Our module first transforms the point coordinates to a high-dimensional space for a more comprehensive encoding of geometric details. Then, it employs an MLP to align the high-dimensional encoding with the feature space, which also makes its use flexible. We propose methods for generating the high-dimensional codes using sinusoidal and learnable encoding, termed HPE_SIN and HPE_MLP.

Our HPE_SIN uses sine and cosine functions to extend the channel dimensions from 3 to (⌊c/6⌋ × 6) to get a high-dimensional vector, followed by an MLP to align the vector to the feature space. Following the notational conventions from above, HPE_SIN can be formulated as

\mathrm{HPE}_{SIN}(p_m) = \theta_{(\lfloor c/6 \rfloor \times 6),\, c}\left( \mathrm{PE}_{SIN}(p_m) \right). \qquad (8)

Our HPE_MLP generates the high-dimensional vector in a data-driven manner. Specifically, it uses an MLP to extend the channel dimensions from 3 to c and then uses an MLP to transform the high-dimensional vector, formulated as

\mathrm{HPE}_{MLP}(p_m) = \theta_{c,c}\left( \mathrm{Norm}\left( \delta_{3,c}(p_m) \right) \right). \qquad (9)

The channel dimensions of the high-dimensional vectors in our encoding can be any suitable value. In our approach, we pack the encoding scheme in HPE(SIN) and HPE(MLP) modules, as shown in Fig. 1. These modules are readily usable for the ABS and REF stages of MLP based networks.

HPENets for Point Cloud Processing

Based on our "ABS-REF" view and HPE module, we develop MLP point cloud processing networks, termed HPENets. To explain, we focus on the more comprehensive encoder-decoder architecture for the semantic segmentation task, as shown in Fig. 1. Other networks are easily deduced from this explanation. In Fig. 1, the encoder consists of a single point embedding layer and four blocks that follow the "ABS-REF" view, while incorporating the proposed HPE modules. The point embedding layer is used to enrich the input representation. We denote the channels of the point embedding layer as Ce, which can be varied. The number of REF layers can also vary in the ABS-REF blocks for different tasks. We denote these numbers by a set B that consists of four elements. To exemplify, our HPENet applied for the segmentation task on S3DIS (Armeni et al. 2016) can use B = [3, 6, 3, 3], which means the numbers of REF layers in the four "ABS-REF" blocks are 3, 6, 3 and 3, respectively. The values in B can decrease to 0 to degenerate HPENet into a single-stage method, e.g., for object classification.

  Method                          | Params. (M) | OA (%) | mAcc (%)
  PointNet (Qi et al. 2017a)      | 3.5         | 68.2   | 63.4
  PointNet++ (Qi et al. 2017b)    | 1.5         | 77.9   | 75.4
  DGCNN (Wang et al. 2019)        | 1.8         | 78.1   | 73.6
  PointMLP (Ma et al. 2022)       | 12.6        | 85.7   | 84.4
  PointNeXt (Qian et al. 2022)    | 1.4         | 88.2   | 86.8
  PointMetaBase (Lin et al. 2023) | 1.4         | 88.2   | 86.8
  HPENet(SIN)                     | 1.7         | 88.4   | 86.9
  HPENet(MLP)                     | 1.7         | 88.9   | 87.6

Table 1: 3D object classification on ScanObjectNN (Uy et al. 2019). The best and second-best results are boldfaced and underlined, respectively.

As shown in Fig. 2, in the ABS stage, we introduce our HPE module after the grouping layer, which uses the grouped point set K^{l+1}_{ABS} as the input. We first use the
grouped feature set D^{l+1}_{ABS} to add high-dimensional positional encodings, and then follow it by a concatenation of the grouped point set as the input of the MLPs. We adopt a similar strategy in the REF stage. As illustrated in Fig. 1, the obvious difference between the ABS and REF stages is the existence of the sampling layer and the design of the local aggregation operation (MLPs). In ABS, the MLPs are used before the Symmetric OPeration (SOP), as they aim to aggregate the local features. In contrast, the SOP is embedded between the MLPs in the REF stage. Specifically, the MLP before the SOP pays attention to capturing inter-set context information, while the MLPs following the SOP focus on refining the point-wise features.

By varying the hyper-parameters B and Ce, we conveniently construct a range of "HPENets" with different model sizes to match the training data scales. We develop HPENets with the following configurations in our experiments.
• ScanObjectNN: Ce = 32, B = [0, 0, 0, 0].
• ModelNet40: Ce = 64, B = [0, 0, 0, 0].
• ShapeNetPart: Ce = 160, B = [0, 0, 0, 0].
• S3DIS: Ce = 64, B = [3, 6, 3, 3].
• ScanNet V2: Ce = 64, B = [5, 8, 5, 5].

Experiments

Our technique is extensively evaluated on five datasets for four different tasks: object classification, object part segmentation, semantic segmentation and object detection.

3D Object Classification

ScanObjectNN (Uy et al. 2019) collects real-world objects from 700 unique scenes of the SOTA mesh datasets SceneNN (Hua et al. 2016) and ScanNet (Dai et al. 2017). It contains about 15,000 real scanned objects, categorized into 15 classes with 2,902 unique object instances. Because of occlusions and noise, ScanObjectNN is a highly challenging dataset for the current methods. Following Ma et al. (2022), we evaluate HPENet on PB_T50_RS, the hardest and most commonly used variant of ScanObjectNN, using the standard metrics of mean accuracy (mAcc) and overall accuracy (OA). As reported in Tab. 1, HPENet outperforms the existing techniques, and HPENet(MLP) achieves the SOTA performance with 88.9% OA and 87.6% mAcc. HPENet(MLP) outperforms the existing best MLP-based method PointNeXt (Qian et al. 2022) by 0.7% OA and 0.8% mAcc, which indicates that HPE is effective for MLP-based point cloud processing.

  Method                                   | S3DIS Area-5 OA/mAcc/mIoU | S3DIS 6-fold OA/mAcc/mIoU | ScanNet V2 Val mIoU
  PointNet (Qi et al. 2017a)               | – / 49.0 / 41.1           | 78.5 / 66.2 / 47.6        | –
  PointNet++ (Qi et al. 2017b)             | 83.0 / – / 53.5           | 81.0 / – / 54.5           | 53.5
  KPConv (Thomas et al. 2019)              | – / 72.8 / 67.1           | – / 79.1 / 70.6           | 69.2
  Point Transformer (Zhao et al. 2021)     | 90.8 / 76.5 / 70.4        | 90.2 / 81.9 / 73.5        | 70.6
  Stratified Transformer (Lai et al. 2022) | 91.5 / 78.1 / 72.0        | –                         | 74.3*
  PointNeXt (Qian et al. 2022)             | 91.0 / 77.2 / 71.1        | 90.3 / 83.0 / 74.9        | 71.5
  PointMetaBase (Lin et al. 2023)          | 91.3 / 78.0 / 72.3        | 91.3 / – / 77.0           | 72.8
  HPENet(SIN)                              | 91.0 / 78.9 / 72.4        | 91.7 / 86.1 / 78.2        | 72.5
  HPENet(MLP)                              | 91.5 / 78.5 / 72.7        | 91.9 / 86.2 / 78.7        | 74.0*

Table 2: 3D semantic segmentation results on S3DIS and ScanNet V2. For ScanNet V2, results are on the validation set. *Stratified Transformer requires 211 hours of training and a 120,000-point input to achieve these results; our HPENet(MLP) needs only 82 hours of training and uses a 64,000-point input.

  Method                                   | Cls. mIoU | Ins. mIoU
  Point Transformer (Zhao et al. 2021)     | 83.7      | 86.6
  Stratified Transformer (Lai et al. 2022) | 85.1      | 86.6
  PointNeXt-S (Qian et al. 2022)           | 85.2      | 87.0
  HPENet(SIN)                              | 85.5      | 87.1
  HPENet(MLP)                              | 85.3      | 87.0

Table 3: 3D object part segmentation on ShapeNetPart.
It is emphasized that we do not employ any pre-training or voting strategies to outperform the current SOTA methods. In the table, we also report the model sizes as parameters in millions. It is notable that our model sizes are also on the lower side of the spectrum.

ModelNet40 (Wu et al. 2015) is a widely popular dataset for synthetic object classification with standard evaluation protocols. Our HPENet(SIN) variant equals the SOTA performance of 91.3 mAcc on this dataset with PointMLP (Ma et al. 2022). Moreover, our model achieves these results with only 5.9M parameters as compared to the 12.6M parameters of PointMLP.

3D Object Part Segmentation

ShapeNetPart (Yi et al. 2016) is an object-level dataset for object part segmentation, consisting of 16,881 objects with 16 shape categories belonging to 50 part labels. Following Qi et al. (2017b), we randomly select 2,048 points as input and use class mean IoU (Cls. mIoU) and instance mean IoU (Ins. mIoU) for evaluation. In Tab. 3, we report the results of the top performing approaches. Our method outperforms the SOTA method on this dataset as well. Notably, HPENet also outperforms the strong transformer-based method Stratified Transformer (Lai et al. 2022).

3D Semantic Segmentation

Semantic segmentation aims to assign a semantic label to each point in scene point clouds. In general, this task is much more challenging than object classification. We evaluate HPENet on two popular large-scale datasets, S3DIS (Armeni et al. 2016) and ScanNet (Dai et al. 2017). The results are summarised in Tab. 2. We discuss them below.

Figure 3: Representative qualitative results of HPENet(MLP) and the strong MLP-based method PointNeXt (Qian et al. 2022) on S3DIS Area-5.

S3DIS (Armeni et al. 2016) comprises 6 large-scale indoor areas and 271 rooms, which are captured from 3 different buildings. In total, 273 million points are annotated and classified into 13 semantic categories. Following PointNeXt (Qian et al. 2022), we use two evaluation protocols: the first uses Area-5 as the test scene and all other scenes for training, and the second is the standard 6-fold cross-validation. For evaluation, we use the popular metrics of mean IoU (mIoU), mAcc, and OA. From Tab. 2, it can be observed that HPENet establishes new state-of-the-art performances of 72.7% mIoU on S3DIS Area-5 and 78.7% mIoU on S3DIS (6-fold cross-validation). Again, we do not use any pre-training or voting strategies to gain a performance boost in our results. Despite being an MLP-based approach, HPENet performs at par or better than transformer methods. Our HPENet outperforms PointNeXt, the strong MLP-based method, by absolute gains of 0.5%, 1.7%, and 1.6% in terms of OA, mAcc, and mIoU on the Area-5 test; and by 1.6%, 3.2%, and 3.8% in terms of OA, mAcc, and mIoU for the 6-fold experiments, respectively. We provide a representative example of qualitative results for our method on S3DIS in Fig. 3, along with the strong MLP-based method PointNeXt.

ScanNet V2 (Dai et al. 2017) consists of 3D indoor scenes with 2.5 million RGB-D frames in more than 1,500 scans, annotated with 20 semantic classes. We follow the standard training and validation splits of 1,201 and 312 scenes, respectively. As shown in the last column of Tab. 2, HPENet achieves a highly competitive performance of 74.0% mIoU, which outperforms PointNeXt by 2.5% mIoU. According to the released files of the best transformer-based method, Stratified Transformer, HPENet uses less than half of the training time (82 vs 211 hours) but still achieves comparable performance. The reported performance of HPENet is achieved with 64,000 points, whereas the Stratified Transformer requires 120,000 points to achieve these results.

3D Object Detection

The key building block of HPENet, i.e., the HPE module, is inherently compatible with MLP-based backbones. To demonstrate its flexibility, we also extend the competitive techniques of VoteNet (Qi et al. 2019) and GroupFree3D (Liu et al. 2021b) with HPE(MLP). In Tab. 4, we summarize the results of our extension following the standard evaluation protocols on the ScanNet V2 dataset (Dai et al. 2017). A consistent across-the-board gain is achieved with our HPE(MLP) extension.

  Method                                   | mAP@0.25 | mAP@0.5
  VoteNet (Qi et al. 2019)                 | 63.8     | 44.2
  3DETR (Misra, Girdhar, and Joulin 2021)  | 65.0     | 47.0
  GroupFree3D (Liu et al. 2021b)           | 68.2     | 52.6
  VoteNet + HPE(MLP)                       | 65.0     | 45.6
  GroupFree3D + HPE(MLP)                   | 69.1     | 53.0

Table 4: 3D object detection on ScanNet V2. VoteNet and GroupFree3D use MMDetection3D (Contributors 2020).

Ablation & Further Discussion

ABS-REF efficacy: In Tab. 5, we establish the contribution of the REF stage in our HPENet that follows the ABS-REF paradigm. By removing the REF stage, HPENet degenerates to a single-stage method. We call this degenerated version HPENet-dv in the table. We chose HPENet-dv as the baseline and expanded it by adding a REF stage after each ABS stage, which yields a 2.5% mIoU performance gain. Further using our HPE schemes, we eventually achieve a performance of 69.9% mIoU, which is already comparable to the 70.4% mIoU of Point Transformer (PT). Due to the simple local aggregation strategy used in REF, the size of our model is much smaller (4.1M vs 7.8M) than that of PT. Moreover, our model has 3.6 times better throughput (TP) than PT. To verify the impact of parameters, we remove the grouping operation in the REF stage of HPENet-dv(32,[1,1,1,1]) to get a model with only the ABS stage and the same number of parameters, termed HPENet-dv*. However, HPENet-dv* only achieves 63.7% mIoU. These results validate that the REF stage is an important component under our ABS-REF view, and that our HPE effectively supports this view.

  Networks (Ce, B)          | HPE | Param. | mIoU | △    | TP
  HPENet-dv(32,[0,0,0,0])   | –   | 0.8M   | 64.2 | –    | 232
  HPENet-dv(32,[0,0,0,0])   | SIN | 0.9M   | 65.0 | +0.8 | 199
  HPENet-dv(32,[0,0,0,0])   | MLP | 0.9M   | 65.3 | +1.1 | 205
  HPENet-dv(32,[1,1,1,1])   | –   | 3.7M   | 66.7 | +2.5 | 161
  HPENet-dv(32,[1,1,1,1])   | SIN | 4.1M   | 69.3 | +5.1 | 115
  HPENet-dv(32,[1,1,1,1])   | MLP | 4.1M   | 69.9 | +5.7 | 125
  HPENet-dv*                | –   | 3.7M   | 63.7 | -0.4 | 228

Table 5: Ablation study on S3DIS Area-5 demonstrating the efficacy of the ABS-REF view and the contribution of HPE modules. △ is the increment from the previous row. TP denotes throughput in instances/second.

  Type          | ABS | REF | OA   | mAcc | △mAcc | mIoU | △mIoU
  HPE_SIN       |     |     | 90.7 | 76.0 | –     | 70.2 | –
  HPE_SIN       | ✓   |     | 90.8 | 77.2 | +1.2  | 71.2 | +1.0
  HPE_SIN       |     | ✓   | 90.9 | 77.4 | +1.4  | 71.4 | +1.2
  HPE_SIN       | ✓   | ✓   | 91.0 | 78.9 | +2.8  | 72.4 | +2.2
  HPE_MLP       | ✓   |     | 90.8 | 77.7 | +1.7  | 71.4 | +1.2
  HPE_MLP       |     | ✓   | 91.1 | 77.6 | +1.6  | 70.8 | +0.6
  HPE_MLP       | ✓   | ✓   | 91.5 | 78.5 | +2.5  | 72.7 | +2.5
  PE_MLP        | ✓   | ✓   | 90.9 | 76.8 | +0.8  | 71.2 | +1.0
  HPE_SIN(mul)  | ✓   | ✓   | 90.8 | 76.9 | +0.9  | 70.8 | +0.6
  HPE_SIN(abs)  | ✓   | ✓   | 89.7 | 75.3 | -0.7  | 69.1 | -1.1

Table 6: Ablation study for positional encoding on S3DIS Area-5 justifying HPE use in both ABS and REF stages.

  Dimension | 3    | c//8 | c//4 | c//2 | c
  mIoU      | 71.2 | 71.6 | 71.9 | 72.0 | 72.7

Table 7: Ablation study on the dimension of HPE(MLP) on S3DIS Area-5. 'c' denotes the feature channel number.

More on positional encoding: In Tab.
6, we evaluate the influence of different positional encodings in different stages of HPENet on S3DIS Area-5. We use the high-dimensional positional encodings (HPE_MLP and HPE_SIN) and the learnable positional encoding (PE_MLP). In the experiments, we also study the effect of absolute positional encoding by replacing the input of HPE_SIN with absolute point coordinates, named HPE_SIN(abs). Moreover, we replace the regular element-wise addition with element-wise multiplication, HPE_SIN(mul), which treats the positional encoding as a dynamic feature weight. These results clearly justify the proposed HPE_MLP and HPE_SIN. Moreover, these results support our unique idea that both ABS and REF should use positional encoding. In Tab. 7, we analyze the effects of dimension variation of the high-dimensional projected space with HPE(MLP) on S3DIS Area-5. The results indicate that a high-dimensional representation is crucial for positional encoding.

Conclusion

Inspired by the distinct subsampling and convolution stages in image processing models, we provide a two-stage "abstraction and refinement" (ABS-REF) view for point cloud neural processing. This view allows an intuitive delineation of the key strengths of the existing methods. We also propose a high-dimensional positional encoding (HPE) scheme that is compatible with the "ABS-REF" paradigm. Based on the ABS-REF view and HPE, we devise a suite of HPENets that leverage HPE for MLP-based modeling for object classification, object part segmentation, semantic segmentation and object detection, mostly improving SOTA performance across the board.

Acknowledgments
This work was supported by the NSFC (61973106, U2013203, 62103137, U1913202, U21A20487); the Natural Science Fund of Hunan Province (2021JJ10024, 2022JJ30024, 2022JJ40100); the Key Research and Development Project of Science and the Technology Plan of Hunan Province (2022GK2014).

References
Armeni, I.; Sener, O.; Zamir, A. R.; Jiang, H.; Brilakis, I.; Fischer, M.; and Savarese, S. 2016. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1534–1543.
Choe, J.; Park, C.; Rameau, F.; Park, J.; and Kweon, I. S. 2022. Pointmixer: Mlp-mixer for point cloud understanding. In European Conference on Computer Vision, 620–640. Springer.
Choy, C.; Gwak, J.; and Savarese, S. 2019. 4d spatiotemporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3075–3084.
Contributors, M. 2020. MMDetection3D: OpenMMLab next-generation platform for general 3D object detection. https://github.com/open-mmlab/mmdetection3d. Accessed: 2023-04-07.
Dai, A.; Chang, A. X.; Savva, M.; Halber, M.; Funkhouser, T.; and Nießner, M. 2017. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5828–5839.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Engelmann, F.; Kontogianni, T.; and Leibe, B. 2020.
Dilated point convolutions: On the receptive field size of point convolutions on 3d point clouds. In 2020 IEEE International Conference on Robotics and Automation (ICRA), 9463–9469. IEEE.
Guo, M.-H.; Cai, J.-X.; Liu, Z.-N.; Mu, T.-J.; Martin, R. R.; and Hu, S.-M. 2021. Pct: Point cloud transformer. Computational Visual Media, 7(2): 187–199.
Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; and Markham, A. 2020. Randla-net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11108–11117.
Hua, B.-S.; Pham, Q.-H.; Nguyen, D. T.; Tran, M.-K.; Yu, L.-F.; and Yeung, S.-K. 2016. Scenenn: A scene meshes dataset with annotations. In 2016 Fourth International Conference on 3D Vision (3DV), 92–101. IEEE.
Huang, J.; and You, S. 2016. Point cloud labeling using 3d convolutional neural network. In 2016 23rd International Conference on Pattern Recognition (ICPR), 2670–2675. IEEE.
Jiang, M.; Wu, Y.; Zhao, T.; Zhao, Z.; and Lu, C. 2018. Pointsift: A sift-like network module for 3d point cloud semantic segmentation. arXiv preprint arXiv:1807.00652.
Kong, D.; Wu, F.; and Saroglou, C. 2020. Automatic identification and characterization of discontinuities in rock masses from 3D point clouds. Engineering Geology, 265: 105442.
Lai, X.; Liu, J.; Jiang, L.; Wang, L.; Zhao, H.; Liu, S.; Qi, X.; and Jia, J. 2022. Stratified Transformer for 3D Point Cloud Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8500–8509.
Landrieu, L.; and Simonovsky, M. 2018. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4558–4567.
Lei, H.; Akhtar, N.; and Mian, A. 2019. Octree guided cnn with spherical kernels for 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9631–9640.
Lei, H.; Akhtar, N.; and Mian, A. 2020. Spherical kernel for efficient graph convolution on 3d point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(10): 3664–3680.
Li, R.; Li, J.; Wang, J.; Wu, Q.; and Liu, X. 2022. Dual-view 3D object recognition and detection via Lidar point cloud and camera image. Robotics and Autonomous Systems, 150.
Lian, D.; Yu, Z.; Sun, X.; and Gao, S. 2021. As-mlp: An axial shifted mlp architecture for vision. arXiv preprint arXiv:2107.08391.
Lin, H.; Zheng, X.; Li, L.; Chao, F.; Wang, S.; Wang, Y.; Tian, Y.; and Ji, R. 2023. Meta Architecture for Point Cloud Analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17682–17691.
Liu, Z.; Hu, H.; Cao, Y.; Zhang, Z.; and Tong, X. 2020. A closer look at local aggregation operators in point cloud analysis. In European Conference on Computer Vision, 326–342. Springer.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021a. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022.
Liu, Z.; Zhang, Z.; Cao, Y.; Hu, H.; and Tong, X. 2021b. Group-free 3d object detection via transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2949–2958.
Lowe, D. G. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60: 91–110.
Ma, X.; Qin, C.; You, H.; Ran, H.; and Fu, Y. 2022.
Rethinking network design and local geometry in point The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7898 cloud: A simple residual mlp framework. arXiv preprint arXiv:2202.07123. Misra, I.; Girdhar, R.; and Joulin, A. 2021. An end-to-end transformer model for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2906–2917. Park, C.; Jeong, Y.; Cho, M.; and Park, J. 2022. Fast Point Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16949–16958. Qi, C. R.; Litany, O.; He, K.; and Guibas, L. J. 2019. Deep hough voting for 3d object detection in point clouds. In proceedings of the IEEE/CVF International Conference on Computer Vision, 9277–9286. Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 652–660. Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30. Qian, G.; Li, Y.; Peng, H.; Mai, J.; Hammoud, H.; Elhoseiny, M.; and Ghanem, B. 2022. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. Advances in Neural Information Processing Systems, 35: 23192–23204. Shen, Y.; Feng, C.; Yang, Y.; and Tian, D. 2018. Mining point cloud local structures by kernel correlation and graph pooling. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4548–4557. Shi, H.; Wei, J.; Li, R.; Liu, F.; and Lin, G. 2022. Weakly supervised segmentation on outdoor 4d point clouds with temporal matching and spatial graph propagation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11840–11849. Tang, Y.; Han, K.; Guo, J.; Xu, C.; Li, Y.; Xu, C.; and Wang, Y. 2022. An image patch is a wave: Phase-aware vision mlp. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10935–10944. Thomas, H.; Qi, C. R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; and Guibas, L. J. 2019. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF international conference on computer vision, 6411–6420. Tolstikhin, I. O.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. 2021. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems, 34: 24261–24272. Uy, M. A.; Pham, Q.-H.; Hua, B.-S.; Nguyen, T.; and Yeung, S.-K. 2019. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF international conference on computer vision, 1588–1597. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5): 1–12. Wang, Z.; Jiang, W.; Zhu, Y. M.; Yuan, L.; Song, Y.; and Liu, W. 2022. Dynamixer: a vision mlp architecture with dynamic mixing. In International Conference on Machine Learning, 22691–22701. PMLR. Wu, W.; Qi, Z.; and Fuxin, L. 2019. Pointconv: Deep convolutional networks on 3d point clouds. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9621–9630. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1912–1920. Xiang, T.; Zhang, C.; Song, Y.; Yu, J.; and Cai, W. 2021. Walk in the cloud: Learning curves for point clouds shape analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 915–924. Xu, M.; Ding, R.; Zhao, H.; and Qi, X. 2021. Paconv: Position adaptive convolution with dynamic kernel assembling on point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3173– 3182. Yi, L.; Kim, V. G.; Ceylan, D.; Shen, I.-C.; Yan, M.; Su, H.; Lu, C.; Huang, Q.; Sheffer, A.; and Guibas, L. 2016. A scalable active framework for region annotation in 3d shape collections. ACM Transactions on Graphics (ToG), 35(6): 1–12. Yu, X.; Tang, L.; Rao, Y.; Huang, T.; Zhou, J.; and Lu, J. 2022. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19313–19322. Zhang, C.; Wan, H.; Shen, X.; and Wu, Z. 2022. PatchFormer: An Efficient Point Transformer With Patch Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11799–11808. Zhao, H.; Jiang, L.; Jia, J.; Torr, P. H.; and Koltun, V. 2021. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16259–16268. Zheng, W.; Tang, W.; Jiang, L.; and Fu, C.-W. 2021. SESSD: Self-ensembling single-stage object detector from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14494–14503. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7899
Sparse3D: Distilling Multiview-Consistent Diffusion for Object Reconstruction from Sparse Views
Zixin Zou1, Weihao Cheng2, Yan-Pei Cao2, Shi-Sheng Huang3, Ying Shan2, Song-Hai Zhang1†
1BNRist, Tsinghua University 2ARC Lab, Tencent PCG 3Beijing Normal University
{zouzx19@mails.,shz@}tsinghua.edu.cn, [email protected], {caoyanpei,shishenghuang.net}@gmail.com, [email protected]
† Corresponding author. An arXiv version with supplementary materials is available at https://arxiv.org/abs/2308.14078. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Reconstructing 3D objects from extremely sparse views is a long-standing and challenging problem. While recent techniques employ image diffusion models for generating plausible images at novel viewpoints or for distilling pre-trained diffusion priors into 3D representations using score distillation sampling (SDS), these methods often struggle to simultaneously achieve high-quality, consistent, and detailed results for both novel-view synthesis (NVS) and geometry. In this work, we present Sparse3D, a novel 3D reconstruction method tailored for sparse view inputs. Our approach distills robust priors from a multiview-consistent diffusion model to refine a neural radiance field. Specifically, we employ a controller that harnesses epipolar features from input views, guiding a pre-trained diffusion model, such as Stable Diffusion, to produce novel-view images that maintain 3D consistency with the input. By tapping into 2D priors from powerful image diffusion models, our integrated model consistently delivers high-quality results, even when faced with open-world objects. To address the blurriness introduced by conventional SDS, we introduce category-score distillation sampling (C-SDS) to enhance detail. We conduct experiments on CO3Dv2, which is a multi-view dataset of real-world objects. Both quantitative and qualitative evaluations demonstrate that our approach outperforms previous state-of-the-art works on the metrics regarding NVS and geometry reconstruction.

Introduction
Reconstructing 3D objects from sparse-view images remains a pivotal challenge in the realms of computer graphics and computer vision. This technique has a wide range of applications such as Augmented and Virtual Reality (AR/VR). The advent of the Neural Radiance Field (NeRF) and its subsequent variants has catalyzed significant strides in geometry reconstruction and novel-view synthesis, as delineated in recent studies (Mildenhall et al. 2020; Wang et al. 2021a; Yariv et al. 2021). However, NeRFs exhibit limitations when operating on extremely sparse views, specifically with as few as 2 or 3 images. In these scenarios, the synthesized novel views often suffer in quality due to the limited input observations.

Figure 1: Novel-view synthesis from two input views using our Sparse3D and SparseFusion. Our approach can achieve higher-quality images with more details for unseen instances, especially for the unobserved regions of them (e.g., the left face of the teddybear). Furthermore, our approach can generalize to some unseen categories without any further finetuning, while SparseFusion fails.

Existing methods for sparse-view reconstruction typically leverage a generalizable NeRF model, pre-trained on multiview datasets, to infer 3D representations from projected image features (Yu et al. 2021; Chibane et al. 2021). However, these approaches tend to regress to the mean, failing to produce perceptually sharp outputs, especially in intricate details.
To produce plausible results, either in terms of geometry or appearance, from limited observations, several studies have turned to image generation models, such as the diffusion model (Rombach et al. 2022), to "imagine" unseen views based on provided images (Chan et al. 2023; Zhou and Tulsiani 2023). For example, Zero123 (Liu et al. 2023) trains a view-conditioned diffusion model on a large synthetic dataset and achieves impressive results. However, their generated images across different views may not be consistent. Thus, while these view-conditioned diffusion models can produce satisfactory images, their quality and generalization ability are often constrained by the scarcity of posed image datasets. Large-scale image diffusion models (Ramesh et al. 2021; Saharia et al. 2022; Rombach et al. 2022), which are pre-trained on billions of 2D images (Schuhmann et al. 2022), excel in generating high-quality and diverse images. However, despite the diverse, general capability of such models, in 3D reconstruction tasks, users need to synthesize specific instances that are coherent with user-provided input images. Even with recent model customization methods (Kumari et al. 2023; Ruiz et al. 2023; Gal et al. 2022), they prove unwieldy and often fail to produce the specific concept with sufficient fidelity. Consequently, the potential of merging the capabilities of pre-trained large image diffusion models with the viewpoint and appearance perception of specific instances remains an open avenue of exploration.

In contrast to directly generating images at novel views, some recent works explore distilling the priors of pre-trained diffusion models into a NeRF (neural radiance field) framework. This approach facilitates 3D-consistent novel-view synthesis and allows for mesh extraction from the NeRF. Notable works such as DreamFusion (Poole et al. 2023) and SJC (Wang et al. 2023a) employ score distillation sampling (SDS) to harness off-the-shelf diffusion models for text-to-3D generation. However, a persistent challenge with SDS is the production of blurry and oversaturated outputs, attributed to noisy gradients, which in turn compromises the quality of NeRF reconstructions.

In this work, we present Sparse3D, a novel 3D reconstruction approach designed to reconstruct high-fidelity 3D objects from sparse and posed input views. Our method hinges on two pivotal components: (1) a diffusion model that ensures both multiview consistency and fidelity to user-provided input images while retaining the powerful generalization capabilities of Stable Diffusion (Rombach et al. 2022), and (2) a category-score distillation sampling (C-SDS) strategy. At its core, we distill the priors from our fidelity-preserving, multiview-consistent diffusion model into the NeRF reconstruction using an enhanced category-score distillation sampling. Specifically, for the multiview-consistent diffusion model, we propose to utilize an epipolar controller to guide the off-the-shelf Stable Diffusion model to generate novel-view images that are 3D consistent with the content of input images. Notably, by fully harnessing the 2D priors present in Stable Diffusion, our model exhibits robust generalization capabilities, producing high-quality images even when confronted with open-world, unseen objects.
To overcome the problem of blurry, oversaturated, and non-detailed results caused by SDS during NeRF reconstruction, we draw inspiration from VSD (Wang et al. 2023b) and propose a category-score distillation sampling (C-SDS) strategy. We evaluate Sparse3D on the Common Objects in 3D (CO3Dv2) dataset and benchmark it against existing approaches. The results show that our approach outperforms state-of-the-art techniques in terms of the quality of both synthesized novel views and reconstructed geometry. Importantly, Sparse3D exhibits superior generalization capabilities, particularly for object categories not present in the training domain.

Related Works
Multi-view 3D Reconstruction
Multi-view 3D reconstruction is a long-standing problem with impressive works such as traditional Structure-from-Motion (SfM) (Schönberger and Frahm 2016) or Multi-view Stereo (MVS) (Schönberger et al. 2016), and recent learning-based approaches (Yao et al. 2018; Yu and Gao 2020). The success of NeRF (Mildenhall et al. 2020; Müller et al. 2022) has led to impressive outcomes in novel-view synthesis and geometric reconstruction. However, these methods still struggle to produce satisfactory results in extremely sparse view scenarios. Subsequent works proposed to use regularization (semantic (Jain, Tancik, and Abbeel 2021), frequency (Yang, Pavone, and Wang 2023), geometry and appearance (Niemeyer et al. 2022)) and geometric priors (e.g., depth (Deng et al. 2022; Roessle et al. 2022) or normals (Yu et al. 2022)), but these remain inadequate for view generation in unobserved regions, due to the essential lack of scene priors.

Generalizable Novel-view Synthesis
For generalizable novel-view synthesis using NeRF, some approaches utilize projected features of the sampling points in volumetric rendering (Yu et al. 2021; Wang et al. 2021b; Chibane et al. 2021), or new neural scene representations, such as the Light Field Network (Suhail et al. 2022b,a) or the Scene Representation Transformer (Sajjadi et al. 2022), for better generalizable novel-view synthesis. Subsequent works (Kulhánek et al. 2022; Chan et al. 2023; Yoo et al. 2023) propose to further utilize generative models (e.g., VQ-VAE (van den Oord, Vinyals, and Kavukcuoglu 2017) and the diffusion model (Rombach et al. 2022)) to generate unseen images. However, these methods do not have any 3D-aware scene priors, which limits their potential applications. In this paper, we leverage the feature map from a generalizable renderer to guide a pre-trained diffusion model to generate multiview-consistent images, and then distill the diffusion prior into NeRF reconstruction for both novel-view synthesis and geometry reconstruction.

3D Generation with 2D Diffusion Model
Diffusion-denoising probabilistic models have brought a boom of generation tasks for 2D images and 3D content in recent years. Inspired by early works which use CLIP embeddings (Jain, Tancik, and Abbeel 2021; Wang et al. 2022; Jain et al. 2022) or GANs (Pan et al. 2021) to regularize the NeRF, DreamFusion (Poole et al. 2023) and SJC (Wang
et al. 2023a) propose a score distillation sampling (SDS) strategy to guide the NeRF optimization for impressive text-to-3D generation. ProlificDreamer (Wang et al. 2023b) proposes variational score distillation (VSD) for more high-fidelity and diverse text-to-3D generation. Magic3D (Lin et al. 2023) improves the 3D generation quality by a two-stage coarse-to-fine strategy. To generate 3D results consistent with the input image observation, subsequent works leverage textual inversion (Melas-Kyriazi et al. 2023) or a denoised-CLIP loss with a depth prior (Tang et al. 2023). When an additional geometry prior is available (e.g., point clouds from Point-E (Nichol et al. 2022)), some works (Seo et al. 2023; Yu et al. 2023) can produce more 3D-consistent creations. In addition to lifting a pre-trained diffusion model, Zero123 (Liu et al. 2023), SparseFusion (Zhou and Tulsiani 2023) and NerfDiff (Gu et al. 2023) train a viewpoint-conditioned diffusion model and achieve impressive results. Instead of training a diffusion model or directly lifting a pre-trained diffusion model, our approach leverages the advantages of both to train a multiview-consistent diffusion model, with a category-score distillation sampling to improve the results of SDS for more details.

Method
Given N input images {I_n} of an object with corresponding camera poses {T_n}, where N can be as few as 2, our goal is to reconstruct a neural radiance field (NeRF), enabling generalizable novel view synthesis and high-quality surface reconstruction. To realize this goal, we propose Sparse3D, which distills a multiview-consistent diffusion model prior into the NeRF representation of an object, using a category-score distillation sampling (C-SDS) strategy. Figure 2 shows the overview of our approach.

Figure 2: Overview of Sparse3D. Our approach consists of two key components: a multiview-consistent diffusion model and a category-score distillation sampling. We utilize the epipolar feature map to control the Stable Diffusion model to generate images consistent with the content of input images, serving as a multiview-consistent diffusion model. Based on such a model, we propose a category-score distillation sampling (C-SDS) strategy to achieve more detailed results during NeRF reconstruction.

The multiview-consistent diffusion model extracts epipolar features from sparse input views and uses a control network to guide the Stable Diffusion model to generate novel-view images that are faithful to the object shown in the images. A NeRF is then reconstructed with the guidance of the diffusion model. To overcome the blurriness problem that occurs in SDS, we propose C-SDS. Benefiting from it, the gradients conditioned on the category prior keep the optimization within a tightened region of the search space, leading to more detailed results. Finally, our method achieves more consistent and high-quality results in novel-view synthesis and geometry reconstruction.

Multiview-Consistent Diffusion Model
Our diffusion model consists of a feature renderer, an epipolar controller, and a Stable Diffusion model, where the epipolar controller and the Stable Diffusion model together constitute the noise predictor $\epsilon_\beta$, as shown in Figure 3. The feature renderer $g_\psi$ takes a set of posed images and a viewpoint $\pi$ as input, subsequently outputting an epipolar feature map $f_c = g_\psi(\pi, I_1, \dots, I_n, T_1, \dots, T_n)$, which serves as the input for the epipolar controller.
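To make this division of labor concrete, the following is a minimal, self-contained sketch (our illustration, not the authors' released code) of how a ControlNet-style controller can steer a frozen denoiser: the controller maps the rendered epipolar feature map f_c to residual features that are added to the frozen network's prediction. The module names, layer widths, and the simple additive fusion are illustrative assumptions; timestep and text conditioning are omitted for brevity, and spatial sizes of z_t and f_c are assumed to match.

```python
import torch
import torch.nn as nn

class ToyEpipolarController(nn.Module):
    # Hypothetical stand-in: maps the rendered epipolar feature map f_c to
    # guidance features that modulate the (frozen) denoiser's output.
    def __init__(self, feat_ch=32, hid=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(feat_ch, hid, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(hid, hid, 3, padding=1))

    def forward(self, f_c):
        return self.net(f_c)

class ToyNoisePredictor(nn.Module):
    """eps_beta(z_t, t, f_c): frozen denoiser plus trainable controller (sketch)."""
    def __init__(self, z_ch=4, hid=64, feat_ch=32):
        super().__init__()
        self.denoiser = nn.Sequential(nn.Conv2d(z_ch, hid, 3, padding=1), nn.SiLU(),
                                      nn.Conv2d(hid, z_ch, 3, padding=1))
        for p in self.denoiser.parameters():
            p.requires_grad_(False)          # Stable Diffusion weights stay fixed
        self.controller = ToyEpipolarController(feat_ch, hid)
        self.fuse = nn.Conv2d(hid, z_ch, 3, padding=1)

    def forward(self, z_t, t, f_c):
        guidance = self.controller(f_c)       # latent guidance from input views
        return self.denoiser(z_t) + self.fuse(guidance)
```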
To unify the pre-trained diffusion model and multiview-consistent perception ability for a specific object, we draw inspiration from ControlNet (Zhang and Agrawala 2023). ControlNet enables image generation controlled by conditional inputs (such as depth maps). Instead, we use the epipolar feature map to guide a pre-trained diffusion model to generate images consistent with the content of input images from various viewpoints.

Feature Renderer. Previous works acquire the feature map f_c through rendering from a triplane (Gu et al. 2023), a 3D volume (Chan et al. 2023), or an epipolar feature transformer (Zhou and Tulsiani 2023). In this paper, we adapt the epipolar feature transformer (EFT) following (Zhou and Tulsiani 2023). The EFT, derived from GPNR (Suhail et al. 2022a), learns a network $g_\psi$ to predict the color of a given ray r from input images. The rendering process primarily involves three transformers, which output attention weights used to blend colors over input views and epipolar lines for the final prediction. We implement two modifications to the EFT for improved results: (1) a mask embedding and a relative camera transformation embedding are concatenated with the other transformer token features; (2) to enhance generalizability and achieve better geometry awareness, we also obtain the aggregated color I_agg and depth images D_agg from the attention weights of the transformers to compute losses.

Figure 3: Multiview-consistent diffusion model. Our multiview-consistent diffusion model comprises a feature renderer, an epipolar controller, and a Stable Diffusion model.

Epipolar Controller. Given feature maps f_c rendered at arbitrary viewpoints, we propose to learn an epipolar controller to guide a pre-trained diffusion model to generate multiview-consistent images with high quality. Our epipolar controller takes the epipolar feature map f_c and a category text prompt c_t as input, subsequently outputting latent features that are fused with the latent features of Stable Diffusion. Rather than training a new diffusion model, we hope to retain the rich 2D priors from Stable Diffusion. Consequently, we jointly train our epipolar controller and feature renderer, while keeping the parameters of Stable Diffusion fixed. On the one hand, by utilizing the feature map, which contains implicit information about the appearance of the specific object and the perception of the observation viewpoint, we can control a pre-trained diffusion model to generate images consistent with the content of input images from different viewpoints. On the other hand, our diffusion model inherits the high-quality image generation capabilities of Stable Diffusion, and the additional category prior in the text domain can also enhance multiview consistency. Furthermore, these priors also enable our model to generalize to open-world unseen categories.

Training. Finally, we jointly train the feature renderer and the epipolar controller with the following objective function:

$\mathcal{L} = \mathcal{L}_{feat} + \mathcal{L}_{diff}$ (1)

where $\mathcal{L}_{feat}$ is the loss for the feature renderer and $\mathcal{L}_{diff}$ is the loss for the epipolar controller. While the feature map primarily serves as input for the controller in our pipeline, we also supervise it with color images and depth images to enhance its perception of appearance, observation viewpoints, and geometry awareness.
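A hedged sketch of one joint optimization step for Eq. 1, assuming a renderer that returns (I_f, I_agg, D_agg, f_c) and a noise predictor with the signature used in the previous sketch; the batch keys, the latent target z0, and the ray handling are hypothetical simplifications of the paper's pipeline.

```python
import torch
import torch.nn.functional as F_nn

def joint_training_step(renderer, noise_pred, batch, alphas_cumprod):
    """One step of L = L_feat + L_diff (sketch; Stable Diffusion stays frozen
    inside `noise_pred`, so only renderer + controller receive gradients)."""
    # --- L_feat: supervise the feature renderer with colour and depth (Eq. 2) ---
    I_f, I_agg, D_agg, f_c = renderer(batch["views"], batch["poses"], batch["target_pose"])
    l_feat = (F_nn.mse_loss(I_f, batch["gt_rgb"])
              + F_nn.mse_loss(I_agg, batch["gt_rgb"])
              + F_nn.mse_loss(D_agg, batch["gt_depth"]))
    # --- L_diff: epsilon-prediction loss at a random timestep (Eq. 3) ---
    z0 = batch["gt_latent"]                       # encoded ground-truth view
    t = torch.randint(0, len(alphas_cumprod), (z0.shape[0],), device=z0.device)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z0)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps   # forward diffusion
    l_diff = F_nn.mse_loss(noise_pred(z_t, t, f_c), eps)
    return l_feat + l_diff
```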
For a query ray r from a novel view, given the input images, we decode the color I_f from the feature map and supervise it using ground-truth color values. Additionally, to improve generalizability and geometry awareness, we employ an MSE loss on the aggregated color I_agg and depth D_agg. We formulate the objective function as follows:

$\mathcal{L}_{feat} = \sum_{r} \|I_f(r) - I(r)\|^2 + \|I_{agg}(r) - I(r)\|^2 + \|D_{agg}(r) - D(r)\|^2$ (2)

where I(r) and D(r) are the ground-truth color and depth images, respectively. The diffusion model learns a conditional noise predictor to estimate the denoising score by adding Gaussian noise $\epsilon$ to clean data over T timesteps. We minimize the noise prediction error at a randomly sampled timestep t. The objective of the diffusion model conditioned on the text prompt c_t (we use the category name as the conditioning text prompt, e.g., "hydrant") and the feature map f_c is given by:

$\mathcal{L}_{diff} = \mathbb{E}_{\epsilon \sim \mathcal{N}(0,1)} \|\epsilon - \epsilon_\beta(z_t, t, c_t, f_c)\|^2$ (3)

where $\epsilon_\beta$ is the conditional noise predictor of our diffusion model.

Figure 4: Qualitative comparison of novel-view synthesis when given 2 input views. Our approach achieves both higher quality and more details in novel-view images compared to the others (e.g., the face of the teddybear), both for unseen instances and for unseen categories.

NeRF Reconstruction with C-SDS
Building on our multiview-consistent diffusion model, we aim to optimize a neural radiance field (NeRF) parameterized by $\theta$, from which more 3D-consistent novel-view synthesis and the underlying explicit geometry can be derived. To overcome the problem of blurry and non-detailed results in SDS, we propose a category-score distillation sampling (C-SDS) strategy.

Category-Score Distillation Sampling. We draw inspiration from VSD (Wang et al. 2023b) and propose C-SDS for more detailed outcomes as follows:

$\nabla_\theta \mathcal{L}_{C\text{-}SDS}(\theta) \approx \mathbb{E}_{t,\epsilon}\left[ \omega(t)\, (\epsilon_{mc} - \epsilon_{cat})\, \frac{\partial z_t}{\partial x} \frac{\partial x}{\partial \theta} \right]$ (4)

where $\epsilon_{mc} = \epsilon_\beta(z_t, t, c_t, f_c)$ is the noise predicted by our multiview-consistent diffusion model, $\epsilon_{cat} = \epsilon_{sd}(z_t, t, c_t)$ is the noise predicted by Stable Diffusion conditioned on the category text prompt c_t, and $\omega(t)$ is a weighting function that depends on the timestep t. Instead of employing Gaussian noise as SDS does, we replace it with an estimation $\epsilon_{cat}$ incorporating the category prior from Stable Diffusion. By providing an approximation of the score function of the distribution over rendered images with the category prior, our C-SDS can deliver a better gradient with a tightened region of the search space, resulting in more detailed outputs. SDS relies on a high classifier-free guidance scale (CFG, i.e., 100) to achieve better convergence, but such a high CFG may lead to over-saturation and over-smoothing problems (Poole et al. 2023). In our experiments, with a more multiview-consistent diffusion model, SDS can work with a small CFG (i.e., 7.5). However, the results still suffer from blurry and non-detailed outputs, as the update gradient is not accurate enough. ProlificDreamer utilizes a low-rank adaptation (LoRA) of a pre-trained diffusion model to estimate the score function of the distribution over rendered images. We find that it is hard for LoRA to provide a good estimation during our instance-specific optimization. Therefore, our proposed C-SDS offers a simple yet effective way to estimate the score function of the distribution over rendered images for more detailed results.
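Once both noise predictors are available, the C-SDS direction itself is straightforward to compute. The sketch below is our paraphrase of Eq. 4, with hypothetical callables noise_pred_mc and noise_pred_sd standing in for $\epsilon_\beta$ and $\epsilon_{sd}$.

```python
import torch

@torch.no_grad()
def csds_grad(noise_pred_mc, noise_pred_sd, z_t, t, c_t, f_c, w_t):
    """C-SDS direction (Eq. 4): the Gaussian noise used by vanilla SDS is
    replaced by the category-conditioned prediction eps_cat of the frozen
    Stable Diffusion model."""
    eps_mc = noise_pred_mc(z_t, t, c_t, f_c)   # multiview-consistent prediction
    eps_cat = noise_pred_sd(z_t, t, c_t)       # category-prior prediction
    return w_t * (eps_mc - eps_cat)

# Usage inside the NeRF loop (sketch): render x, encode and noise it to z_t
# with requires_grad enabled, then inject the direction as upstream gradient:
#   grad = csds_grad(noise_pred_mc, noise_pred_sd, z_t, t, c_t, f_c, w_t)
#   z_t.backward(gradient=grad)   # propagates w(t)*(eps_mc - eps_cat) to theta
```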
One-step Estimation from Diffusion Model. The predicted noise from the diffusion model can be used not only in C-SDS but also to estimate a one-step denoised image without much extra computation:

$z_{1step} = \frac{1}{\sqrt{\bar{\alpha}_t}} \left( z_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\beta(z_t, t, c_t, f_c) \right), \quad x_{1step} = D(z_{1step})$ (5)

where D is the decoder of Stable Diffusion. We leverage this one-step estimation to provide an additional regularization term based on perceptual distance and find that this perception regularization improves the metrics of the results. Specifically, we employ two perceptual losses, the LPIPS loss (Zhang et al. 2018) and the contextual loss (Mechrez, Talmi, and Zelnik-Manor 2018), to formulate the perception regularization from the one-step estimated image:

$\mathcal{L}_{perp} = \lambda_p \mathcal{L}_{lpips}(I, x_{1step}) + \lambda_c \mathcal{L}_{contextual}(I, x_{1step})$ (6)

Reference Supervision. Additionally, we use the reference input images I with their masks M to encourage a consistent appearance with the input images:

$\mathcal{L}_{ref} = \lambda_r \|(\hat{I} - I) * \hat{M}\|_2^2 + \lambda_m \|\hat{M} - M\|_2^2$ (7)

where $\hat{I}$ and $\hat{M}$ are the rendered image and mask, respectively.

Overall Training. We combine all of the losses, including $\mathcal{L}_{C\text{-}SDS}$, $\mathcal{L}_{perp}$, and $\mathcal{L}_{ref}$, to formulate the objective function of NeRF reconstruction for a specific object. Once NeRF reconstruction is complete, we can perform volume rendering for novel-view synthesis, and the underlying mesh can be extracted using Marching Cubes (Lorensen and Cline 1987).
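Eq. 5 is the standard one-step inversion of the forward diffusion process, so a sketch is short. Here, decode and alphas_cumprod are assumed to come from the underlying diffusion pipeline, and the commented wiring of Eq. 6 (the lpips package and a contextual_loss helper) is a hypothetical illustration rather than the authors' exact code.

```python
import torch

def one_step_estimate(z_t, t, eps_pred, alphas_cumprod, decode):
    """One-step denoised image x_1step recovered from the predicted noise (Eq. 5).
    `decode` stands in for the Stable Diffusion latent decoder D; `eps_pred` is
    eps_beta(z_t, t, c_t, f_c) computed beforehand."""
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_1step = (z_t - (1.0 - a_bar).sqrt() * eps_pred) / a_bar.sqrt()
    return decode(z_1step)

# The perception regularization of Eq. 6 then compares x_1step against the
# rendered image I, e.g. (hypothetical wiring, weights from the paper):
#   perc = lpips.LPIPS(net="vgg")
#   l_perp = 100.0 * perc(I, x_1step).mean() + 10.0 * contextual_loss(I, x_1step)
```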
Experiment
In this section, we conduct a qualitative and quantitative evaluation of our approach on the 3D object dataset CO3Dv2 (Reizenstein et al. 2021) to demonstrate its effectiveness. CO3Dv2 is a real-world dataset that contains 51 common object categories. We first show the superior quality of novel-view synthesis and 3D reconstruction for unseen object instances in category-specific scenarios with varying numbers of inputs, and then the out-of-domain generalization ability for unseen categories.

Implementation details. For the feature renderer, we follow SparseFusion (Zhou and Tulsiani 2023) and use three groups of transformer encoders with four 256-dimensional layers to aggregate epipolar features. For the multiview-consistent model, we adopt the Stable Diffusion model v1.5 as our prior. For NeRF reconstruction, we adapt threestudio (Guo et al. 2023), which is a unified framework for 3D content creation from various inputs, to implement the NeRF reconstruction for specific objects. We set the weights of the losses to λp = 100, λc = 10, λr = 1000, and λm = 50. NeRF optimization runs for 10,000 steps, which takes about 45 minutes on a single 3090 GPU.

Table 1: Quantitative comparisons of novel-view synthesis. We evaluate methods on unseen instances with varying numbers of input images (2, 3, and 6) and on unseen categories with 2 input views. We report the average results across categories for each block.

Unseen Instances - 2 views:
Method | PSNR | SSIM | LPIPS | FID | CLIP | DISTS
PN | 15.33 | 0.29 | 0.59 | 371.23 | 0.83 | 0.44
EFT | 21.28 | 0.69 | 0.34 | 293.36 | 0.87 | 0.33
VF | 18.42 | 0.71 | 0.29 | 248.23 | 0.82 | 0.29
SF | 21.28 | 0.76 | 0.23 | 187.22 | 0.91 | 0.26
Ours | 20.95 | 0.77 | 0.22 | 147.65 | 0.93 | 0.23

Unseen Instances - 3 views:
Method | PSNR | SSIM | LPIPS | FID | CLIP | DISTS
PN | 15.50 | 0.31 | 0.58 | 363.68 | 0.83 | 0.43
EFT | 22.62 | 0.74 | 0.29 | 242.87 | 0.89 | 0.30
VF | 18.91 | 0.72 | 0.28 | 240.21 | 0.87 | 0.29
SF | 22.31 | 0.78 | 0.22 | 175.02 | 0.92 | 0.24
Ours | 22.06 | 0.79 | 0.20 | 134.22 | 0.94 | 0.21

Unseen Instances - 6 views:
Method | PSNR | SSIM | LPIPS | FID | CLIP | DISTS
PN | 15.65 | 0.33 | 0.55 | 344.58 | 0.85 | 0.42
EFT | 24.47 | 0.80 | 0.23 | 161.78 | 0.93 | 0.25
VF | 19.77 | 0.74 | 0.27 | 232.30 | 0.89 | 0.28
SF | 23.69 | 0.80 | 0.20 | 154.20 | 0.93 | 0.22
Ours | 23.92 | 0.82 | 0.18 | 116.10 | 0.95 | 0.19

Unseen Categories - 2 views:
Method | PSNR | SSIM | LPIPS | FID | CLIP | DISTS
PN | 14.82 | 0.31 | 0.50 | 314.45 | 0.81 | 0.44
EFT | 19.31 | 0.56 | 0.41 | 318.64 | 0.87 | 0.38
VF | 15.43 | 0.63 | 0.34 | 301.19 | 0.85 | 0.36
SF | 18.83 | 0.70 | 0.28 | 290.45 | 0.88 | 0.34
Ours | 18.83 | 0.72 | 0.23 | 164.30 | 0.93 | 0.26

Figure 5: Geometry reconstruction using SparseFusion and ours. The last column shows the ground-truth point cloud.

Table 2: Quantitative comparison of geometry reconstruction. Since the other baselines only produce images at novel views without a 3D representation, we only report the results of ours and SparseFusion.

Method | Unseen Instances CD ↓ | Unseen Instances F-score ↑ | Unseen Categories CD ↓ | Unseen Categories F-score ↑
SF | 0.27 | 0.23 | 0.37 | 0.18
Ours | 0.21 | 0.32 | 0.27 | 0.28

Experimental Settings
Dataset. We follow the fewview-train and fewview-dev splits provided by the CO3Dv2 dataset (Reizenstein et al. 2021) for training and evaluation, respectively. For the evaluation of unseen object instances within the same categories, we use the core subset with 10 categories to train a category-specific diffusion model for each category. To assess the out-of-domain generalization ability on unseen categories, we select 10 categories for evaluation and use the remaining 41 categories together for training. Due to the hour-long computation time required by our method, we evaluate only the first 10 object instances of each test split.

Baselines. We compare our approach with previous state-of-the-art baselines, including PixelNeRF (PN) (Yu et al. 2021), ViewFormer (VF) (Kulhánek et al. 2022), EFT, and SparseFusion (SF) (Zhou and Tulsiani 2023). PixelNeRF and EFT are regression-based methods that deduce images at novel views from projected features, where EFT is adapted from GPNR for sparse-view settings by (Zhou and Tulsiani 2023). ViewFormer is a generative model that employs a VQ-VAE codebook and a transformer module for image generation. SparseFusion is the most relevant baseline to our approach, as it distills a diffusion model prior into NeRF reconstruction.

Metrics. We adopt several popular image quality assessment (IQA) metrics to evaluate the quality of novel-view synthesis, including PSNR, SSIM, LPIPS (Zhang et al. 2018), FID (Heusel et al. 2017), and DISTS (Ding et al. 2022). Additionally, since our method can generate plausible results for unobserved regions, the evaluation between them and GT images may not be fair. Thus, we also adopt the CLIP embedding similarity (Radford et al.
2021) of generated images with input images. Additionally, we evaluate the most commonly used 3D reconstruction quality metrics, Chamfer Distance and F-score.

Qualitative and Quantitative Evaluation
Unseen Instances: 2 Views. We first evaluate our approach with extremely sparse views (i.e., 2 views) for unseen object instances within the same categories. Table 1 shows the quantitative comparison of ours and the other baselines, with metrics averaged across 10 categories. Although ours has a slightly lower PSNR compared to the others, due to PSNR's formulation as a pixel-wise MSE, which favors mean-color rendering results (e.g., blurry images), our approach outperforms all of the others in perception metrics (e.g., LPIPS and FID). As the qualitative results in Figure 4 show, benefiting from the two proposed key components, our approach achieves both high-quality and more detailed results with 3D consistency. In addition to novel-view synthesis, we evaluate the quality of geometry reconstruction by extracting the underlying mesh from the NeRF. We only compare ours with SparseFusion, as the others lack a 3D representation. Table 2 shows that our approach outperforms SparseFusion by a wide margin. Figure 5 also illustrates the meshes extracted from the NeRF, where our results achieve sharper geometry with more details.

Figure 6: Effect of Stable Diffusion priors. (a) the diffusion model from SparseFusion; (b) our diffusion model with Stable Diffusion priors.

Unseen Instances: Varying Views. As the number of input views increases, the results of novel-view synthesis and geometry reconstruction naturally improve. Table 1 shows the comparison of novel-view synthesis with 3 and 6 input views, which demonstrates that our approach consistently outperforms the others across varying numbers of input views. More detailed evaluation results for each category and more qualitative results of novel-view synthesis and explicit geometry can be found in the supplementary materials.

Unseen Categories. We run experiments to evaluate the generalization ability to unseen categories. Table 1 and Table 2 show the quantitative results of novel-view synthesis and geometry reconstruction. When confronted with unseen categories outside the training domain, the performance of the other methods drops significantly, while ours still maintains good performance, achieving the best results among them. The priors from Stable Diffusion enable our diffusion model to faithfully generate images of unseen categories. The last two columns of Figure 4 show the novel-view synthesis of these methods. Our approach can still achieve high-quality images with more details, while the others are blurry and somewhat meaningless. More evaluation on unseen categories can be found in the supplementary materials.

Ablation Studies
Stable Diffusion Priors. To evaluate the effect of the Stable Diffusion priors, we compare ours and SparseFusion in directly generating novel-view images without performing NeRF reconstruction, as shown in Figure 6. In the unseen instances scenario, the diffusion model of SparseFusion can generate images at novel viewpoints consistent with the appearance of the input images to a certain degree (e.g., the blue hydrant with a white head) but fails to achieve high-quality image generation.
When the feature map is not reliable in some views, SparseFusion fails to generate a multiview-consistent image (e.g., the bench). However, our diffusion model can achieve higher-quality image generation. In the unseen categories scenario, the diffusion model of SparseFusion fails to generate meaningful images, while our method can generalize to these objects (the last two columns in Figure 6).

Figure 7: Effect of C-SDS on the quality of NVS from NeRF reconstruction. The results of SDS are blurry and non-detailed in unobserved regions, while ours can generate more details with the same diffusion model.

C-SDS. We also investigate the effect of our distillation strategy on the quality of NeRF reconstruction by implementing a version that uses SDS. When using our multiview-consistent diffusion model with SDS, which can provide a more accurate gradient update direction, there is no need for a large CFG, but it is still not enough for detailed results. In our experiment with the CFG value set to 7.5, SDS can achieve plausible results with successful convergence, but the blur problem remains unsolved, as shown in the first row of Figure 7. When applying our proposed C-SDS with the same CFG, it is evident that the results show more details, which demonstrates the effectiveness of the method.

Limitations
The primary failure cases include (1) extremely partial observation of an object in the input views; (2) the Janus problem; and (3) occasionally thin structures or self-occluded parts. Furthermore, our approach relies on accurate camera poses, which can be challenging to estimate directly from extremely sparse views, resulting in noisy estimates.

Conclusion
In this paper, we introduce Sparse3D, a new approach to reconstructing high-quality 3D objects from sparse input views with camera poses. We utilize an epipolar controller to guide a pre-trained diffusion model to generate high-quality images that are 3D consistent with the content of input images, leading to a multiview-consistent diffusion model. Then, we distill the diffusion priors into NeRF optimization in a better way by using a category-score distillation sampling (C-SDS) strategy, resulting in more detailed results. Experiments demonstrate that our approach can achieve state-of-the-art results with higher quality and more details, even when confronted with open-world, unseen objects.

Acknowledgments
We sincerely thank the reviewers for their valuable comments. This work was supported by the National Key Research and Development Program of China (No. 2023YFF0905104), the Natural Science Foundation of China (No. 62132012), Beijing Municipal Science and Technology Project (No. Z221100007722001) and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology. Shi-Sheng Huang was supported by the Natural Science Foundation of China (Project Number 62202057) and the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (No. VRLAB2022B03).

References
Chan, E. R.; Nagano, K.; Chan, M. A.; Bergman, A. W.; Park, J. J.; Levy, A.; Aittala, M.; Mello, S. D.; Karras, T.; and Wetzstein, G. 2023. Generative Novel View Synthesis with 3D-Aware Diffusion Models. CoRR, abs/2304.02602.
Chibane, J.; Bansal, A.; Lazova, V.; and Pons-Moll, G. 2021. Stereo Radiance Fields (SRF): Learning View Synthesis from Sparse Views of Novel Scenes. In IEEE CVPR.
Deng, K.; Liu, A.; Zhu, J.; and Ramanan, D. 2022.
Depth-supervised NeRF: Fewer Views and Faster Training for Free. In IEEE CVPR, 12872–12881.
Ding, K.; Ma, K.; Wang, S.; and Simoncelli, E. P. 2022. Image Quality Assessment: Unifying Structure and Texture Similarity. IEEE TPAMI, 44(5): 2567–2581.
Gal, R.; Alaluf, Y.; Atzmon, Y.; Patashnik, O.; Bermano, A. H.; Chechik, G.; and Cohen-Or, D. 2022. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618.
Gu, J.; Trevithick, A.; Lin, K.; Susskind, J. M.; Theobalt, C.; Liu, L.; and Ramamoorthi, R. 2023. NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion. CoRR, abs/2302.10109.
Guo, Y.-C.; Liu, Y.-T.; Shao, R.; Laforte, C.; Voleti, V.; Luo, G.; Chen, C.-H.; Zou, Z.-X.; Wang, C.; Cao, Y.-P.; and Zhang, S.-H. 2023. threestudio: A unified framework for 3D content generation. https://github.com/threestudio-project/threestudio. Accessed: 2023-05-01.
Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In NeurIPS, 6626–6637.
Jain, A.; Mildenhall, B.; Barron, J. T.; Abbeel, P.; and Poole, B. 2022. Zero-Shot Text-Guided Object Generation with Dream Fields. In IEEE CVPR, 857–866.
Jain, A.; Tancik, M.; and Abbeel, P. 2021. Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis. In ICCV, 5865–5874.
Kulhánek, J.; Derner, E.; Sattler, T.; and Babuska, R. 2022. ViewFormer: NeRF-Free Neural Rendering from Few Images Using Transformers. In ECCV, volume 13675, 198–216.
Kumari, N.; Zhang, B.; Zhang, R.; Shechtman, E.; and Zhu, J.-Y. 2023. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1931–1941.
Lin, C.-H.; Gao, J.; Tang, L.; Takikawa, T.; Zeng, X.; Huang, X.; Kreis, K.; Fidler, S.; Liu, M.-Y.; and Lin, T.-Y. 2023. Magic3D: High-Resolution Text-to-3D Content Creation. In IEEE CVPR.
Liu, R.; Wu, R.; Hoorick, B. V.; Tokmakov, P.; Zakharov, S.; and Vondrick, C. 2023. Zero-1-to-3: Zero-shot One Image to 3D Object. arXiv:2303.11328.
Lorensen, W. E.; and Cline, H. E. 1987. Marching cubes: A high resolution 3D surface construction algorithm. In Stone, M. C., ed., SIGGRAPH, 163–169.
Mechrez, R.; Talmi, I.; and Zelnik-Manor, L. 2018. The Contextual Loss for Image Transformation with Non-aligned Data. In ECCV, volume 11218, 800–815.
Melas-Kyriazi, L.; Rupprecht, C.; Laina, I.; and Vedaldi, A. 2023. RealFusion: 360 Reconstruction of Any Object from a Single Image. In IEEE CVPR.
Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV, 405–421.
Müller, T.; Evans, A.; Schied, C.; and Keller, A. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM TOG, 41(4): 102:1–102:15.
Nichol, A.; Jun, H.; Dhariwal, P.; Mishkin, P.; and Chen, M. 2022. Point-E: A System for Generating 3D Point Clouds from Complex Prompts. abs/2212.08751.
Niemeyer, M.; Barron, J. T.; Mildenhall, B.; Sajjadi, M. S. M.; Geiger, A.; and Radwan, N. 2022. RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs. In IEEE CVPR, 5470–5480.
Pan, X.; Dai, B.; Liu, Z.; Loy, C. C.; and Luo, P. 2021. Do 2D GANs Know 3D Shape? Unsupervised 3D Shape Reconstruction from 2D Image GANs. In ICLR.
Poole, B.; Jain, A.; Barron, J. T.; and Mildenhall, B. 2023.
DreamFusion: Text-to-3D using 2D Diffusion. In ICLR.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In ICML, volume 139, 8748–8763.
Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-Shot Text-to-Image Generation. In ICML, volume 139, 8821–8831.
Reizenstein, J.; Shapovalov, R.; Henzler, P.; Sbordone, L.; Labatut, P.; and Novotny, D. 2021. Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction. In ICCV.
Roessle, B.; Barron, J. T.; Mildenhall, B.; Srinivasan, P. P.; and Nießner, M. 2022. Dense Depth Priors for Neural Radiance Fields from Sparse Input Views. In IEEE CVPR, 12882–12891.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-Resolution Image Synthesis with Latent Diffusion Models. In IEEE CVPR, 10674–10685.
Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22500–22510.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, S. K. S.; Lopes, R. G.; Ayan, B. K.; Salimans, T.; Ho, J.; Fleet, D. J.; and Norouzi, M. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. In NeurIPS.
Sajjadi, M. S. M.; Meyer, H.; Pot, E.; Bergmann, U.; Greff, K.; Radwan, N.; Vora, S.; Lucic, M.; Duckworth, D.; Dosovitskiy, A.; Uszkoreit, J.; Funkhouser, T. A.; and Tagliasacchi, A. 2022. Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations. In IEEE CVPR, 6219–6228.
Schönberger, J. L.; and Frahm, J. 2016. Structure-from-Motion Revisited. In IEEE CVPR, 4104–4113.
Schönberger, J. L.; Zheng, E.; Frahm, J.; and Pollefeys, M. 2016. Pixelwise View Selection for Unstructured Multi-View Stereo. In ECCV, volume 9907, 501–518.
Schuhmann, C.; Beaumont, R.; Vencu, R.; Gordon, C.; Wightman, R.; Cherti, M.; Coombes, T.; Katta, A.; Mullis, C.; Wortsman, M.; Schramowski, P.; Kundurthy, S.; Crowson, K.; Schmidt, L.; Kaczmarczyk, R.; and Jitsev, J. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. In NeurIPS.
Seo, J.; Jang, W.; Kwak, M.; Ko, J.; Kim, H.; Kim, J.; Kim, J.; Lee, J.; and Kim, S. 2023. Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation. abs/2303.07937.
Suhail, M.; Esteves, C.; Sigal, L.; and Makadia, A. 2022a. Generalizable Patch-Based Neural Rendering. In ECCV, volume 13692, 156–174.
Suhail, M.; Esteves, C.; Sigal, L.; and Makadia, A. 2022b. Light Field Neural Rendering. In IEEE CVPR, 8259–8269.
Tang, J.; Wang, T.; Zhang, B.; Zhang, T.; Yi, R.; Ma, L.; and Chen, D. 2023. Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior. arXiv preprint arXiv:2303.14184.
van den Oord, A.; Vinyals, O.; and Kavukcuoglu, K. 2017. Neural Discrete Representation Learning. In NeurIPS, 6306–6315.
Wang, C.; Chai, M.; He, M.; Chen, D.; and Liao, J. 2022. CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields. In IEEE CVPR, 3825–3834.
Wang, H.; Du, X.; Li, J.; Yeh, R. A.; and Shakhnarovich, G. 2023a.
Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation. In IEEE CVPR.
Wang, P.; Liu, L.; Liu, Y.; Theobalt, C.; Komura, T.; and Wang, W. 2021a. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. In NeurIPS, 27171–27183.
Wang, Q.; Wang, Z.; Genova, K.; Srinivasan, P. P.; Zhou, H.; Barron, J. T.; Martin-Brualla, R.; Snavely, N.; and Funkhouser, T. A. 2021b. IBRNet: Learning Multi-View Image-Based Rendering. In IEEE CVPR, 4690–4699.
Wang, Z.; Lu, C.; Wang, Y.; Bao, F.; Li, C.; Su, H.; and Zhu, J. 2023b. ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation. CoRR, abs/2305.16213.
Yang, J.; Pavone, M.; and Wang, Y. 2023. FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization. In IEEE CVPR.
Yao, Y.; Luo, Z.; Li, S.; Fang, T.; and Quan, L. 2018. MVSNet: Depth Inference for Unstructured Multi-view Stereo. In ECCV, 785–801.
Yariv, L.; Gu, J.; Kasten, Y.; and Lipman, Y. 2021. Volume Rendering of Neural Implicit Surfaces. In NeurIPS.
Yoo, P.; Guo, J.; Matsuo, Y.; and Gu, S. S. 2023. DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views. CoRR, abs/2306.03414.
Yu, A.; Ye, V.; Tancik, M.; and Kanazawa, A. 2021. pixelNeRF: Neural Radiance Fields From One or Few Images. In IEEE CVPR, 4578–4587.
Yu, C.; Zhou, Q.; Li, J.; Zhang, Z.; Wang, Z.; and Wang, F. 2023. Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation. abs/2307.13908.
Yu, Z.; and Gao, S. 2020. Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement. In IEEE CVPR, 1946–1955.
Yu, Z.; Peng, S.; Niemeyer, M.; Sattler, T.; and Geiger, A. 2022. MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction. In NeurIPS.
Zhang, L.; and Agrawala, M. 2023. Adding Conditional Control to Text-to-Image Diffusion Models. arXiv:2302.05543.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In IEEE CVPR, 586–595.
Zhou, Z.; and Tulsiani, S. 2023. SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction. In CVPR.
CEDFlow: Latent Contour Enhancement for Dark Optical Flow Estimation
Fengyuan Zuo1, Zhaolin Xiao1,2*, Haiyan Jin1,2, Haonan Su1,2
1Xi'an University of Technology, China, 710048
2Shaanxi Key Laboratory for Network Computing and Security Technology, China, 710048
[email protected], [email protected]
*Corresponding authors: Zhaolin Xiao, Haiyan Jin. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Accurately computing optical flow in low-contrast and noisy dark images is challenging, especially when contour information is degraded or difficult to extract. This paper proposes CEDFlow, a latent space contour enhancement for estimating optical flow in dark environments. By leveraging spatial frequency feature decomposition, CEDFlow effectively encodes local and global motion features. Importantly, we introduce the 2nd-order Gaussian difference operation to precisely select salient contour features in the latent space. It is specifically designed for the large-scale contour components essential in dark optical flow estimation. Experimental results on the FCDN and VBOF datasets demonstrate that CEDFlow outperforms state-of-the-art methods in terms of the EPE index and produces more accurate and robust flow estimation. Our code is available at: https://github.com/xautstuzfy.

Introduction
Optical flow estimation is a crucial technique for numerous computer vision applications, such as autonomous driving (Takumi et al. 2017), object tracking (Peng et al. 2020), video enhancement (She and Xu 2022), etc. Under a global scene smoothness assumption, researchers propose to estimate the optical flow by solving a global energy minimization problem (Horn and Schunck 1981). If the key points are assumed to have brightness constancy, the flow estimation can also be formulated as a local energy minimization (Lucas, Kanade et al. 1981). However, in dark illumination scenarios, these assumptions can be violated due to low contrast, strong noise, and deterioration of brightness constancy. As shown in Fig. 1, the contour information in low-contrast dark images is degraded by intense noise, leading to ambiguous contour matching. This presents a significant challenge for precise flow estimation in such conditions.

Figure 1: Flow estimation in a challenging low-light condition. The proposed CEDFlow outperforms state-of-the-art methods RAFT (Teed and Deng 2020), GMA (Jiang et al. 2021a) and AGFlow (Luo et al. 2022b).

Pre-stage image feature enhancement has emerged as a promising approach to address the challenge of Dark Optical Flow Estimation (DOFE). While deep learning-based solutions have made remarkable progress in enhancing low-light or dark images (Li et al. 2021), existing methods primarily focus on improving visual perceptual quality by adjusting brightness and contrast. Nevertheless, these enhancements often introduce inconsistencies and blurry boundaries, providing limited benefits to specific vision tasks. In contrast, task-specific feature-level enhancement has shown effectiveness in applications like face detection (Wang et al. 2022), image deblurring (Zhou, Li, and Change Loy 2022), and image or video super-resolution (Chan et al. 2021). This paper proposes a novel feature enhancement framework explicitly designed for DOFE, distinguishing it from existing low-light image enhancements. Large-scale background motion poses challenges for DOFE, where both local and global features play crucial roles.
Global feature extraction usually requires a larger receptive field size, which is computationally expensive. Meanwhile, local feature extraction is scale-sensitive, especially in the presence of low-light noise. Therefore, choosing an appropriate receptive field size that enables the simultaneous extraction of local and global motion features is difficult. Furthermore, large-scale salient contour semantics are very important for precise DOFE, but accurately picking the salient contour semantics from a low-light image is also challenging. To address these issues, we propose CEDFlow, an efficient latent contour encoding and enhancement for DOFE. Our contributions can be summarized as follows.

• A spatial frequency decomposition for local and global motion encoding. We propose encoding frequency-based features through local and global motion encoders, which can be integrated after feature attention has been computed by an MLP.
• A latent space contour enhancement. We suggest computing the 2nd-order Gaussian difference of the feature map to select large-scale contour semantics. This process enables the direct enhancement of contour features in the latent space while accentuating local discrimination and smoothness.
• State-of-the-art performance on widely used benchmarks. The proposed CEDFlow outperforms state-of-the-art approaches regarding the End-Point Error index on the public FCDN and VBOF benchmarks.

Previous Works on the DOFE
To address optical flow estimation with small moving objects and occlusions, convolutional neural networks (CNNs) have been successfully applied in methods like FlowNet (Dosovitskiy et al. 2015), Pyramid-networks (Sun et al. 2018), RAFT (Teed and Deng 2020), GMA (Jiang et al. 2021a), and GMFlow (Xu et al. 2022a). However, these state-of-the-art methods heavily rely on high-contrast image textures, which can be significantly degraded in DOFE. A straightforward solution for DOFE is to enhance low-light input images using computational enhancements, which are dominated by learning-based solutions (Guo, Li, and Ling 2016; Cai et al. 2017; Wei et al. 2018; Wang et al. 2020). Recent solutions effectively improve the visual perceptual quality of low-light images by using frequency-adaptive operations (Xu et al. 2020; Xu et al. 2022b). However, none of these works is specifically designed for the DOFE problem. Aiming at the DOFE problem, Zheng et al. propose a synthetic optical flow benchmark built by adding dark image noise to the FlyingChairs dataset, called the FlyingChairs Dark&Noise (FCDN) dataset (Zheng, Zhang, and Lu 2020). They also introduce the Various Brightness Optical Flow (VBOF) dataset, which includes multiple exposure levels and optical flow pseudo labels (Zhang, Zheng, and Lu 2021). Few works currently focus on designing specifically DOFE-oriented learning networks, which is more helpful than applying general-purpose low-light enhancements. Therefore, our proposed CEDFlow framework explores enhancing salient contour semantics, which is essential for large-scale motion understanding, specifically addressing the DOFE.

The Proposed CEDFlow Algorithm
Fig. 2 illustrates the decomposition of consecutive frames into high- and low-frequency parts, enabling the extraction of fine-grained and large-scale motion information through local and global encoders.
Furthermore, we suggest computing the 2nd-order Gaussian difference of the latent feature map to select and enhance the salient contour semantics.

Motion Feature Encoding
Different from mainstream motion encoders (Teed and Deng 2020; Luo et al. 2022c; Xu et al. 2022a), shown in Fig. 3(a), we suggest encoding high- and low-frequency components with local and global encoders after spatial frequency decomposition, i.e., Fig. 3(b). The challenge of long-range pixel connections in DOFE arises from noise hindering pixel matching under low-light conditions. While increasing the receptive field with larger convolution kernels can be a solution, it may introduce pixel-similarity uncertainty and feature-matching ambiguity. Instead, we propose a context-adaptive motion reasoning approach to construct long-term and short-term pixel correlations. The motion encoder in our method consists of a spatial frequency decomposition, a dual-branch motion encoder (DBME), and a Multilayer Perceptron (MLP)-based feature aggregation.

The Frequency-based Decomposition. To begin with, we introduce a spatial frequency decomposition module that first utilizes three downsampling blocks to extract a feature map volume $f \in \mathbb{R}^{H/8 \times W/8 \times N}$ from the input frames, where H and W represent the height and width of the frames, respectively, and N denotes the number of channels. This downsampling step helps reduce the computational cost and compress the motion representation. To decompose the motion feature information, we use two groups of dilated convolutions (with kernel size/dilation rate of 1/1 and 3/2), denoted as $d_1$ and $d_2$. By computing the convolutional difference between $d_1$ and $d_2$, a contrast-aware attention map $\omega_s$ can be defined as

$\omega_s = \mathrm{sigmoid}(d_1(f) - d_2(f))$ (1)

With the weight map $\omega_s$, the extracted feature volume f can be roughly divided into the high-frequency part $f^H$ and the low-frequency part $f^L$:

$\langle f^L, f^H \rangle = \langle (1 - \omega_s) \cdot f, \; \omega_s \cdot f \rangle$ (2)

where "·" denotes an element-wise product. An example feature map visualization can be found in Fig. 4. After the feature decomposition, $f^L$ represents the spatial low-frequency properties, and $f^H$ captures the high-frequency features of the given dark scene.
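A minimal sketch of Eqs. 1-2, assuming the feature volume has already been downsampled; the channel width is a free parameter, and the use of plain nn.Conv2d layers for d1 and d2 simply follows the kernel-size/dilation settings quoted above.

```python
import torch
import torch.nn as nn

class FrequencyDecomposition(nn.Module):
    """Contrast-aware split of a feature volume into low/high-frequency parts."""
    def __init__(self, n_channels: int):
        super().__init__()
        # d1: 1x1 conv with dilation 1; d2: 3x3 conv with dilation 2
        # (padding chosen so spatial size is preserved)
        self.d1 = nn.Conv2d(n_channels, n_channels, kernel_size=1)
        self.d2 = nn.Conv2d(n_channels, n_channels, kernel_size=3, dilation=2, padding=2)

    def forward(self, f):
        w_s = torch.sigmoid(self.d1(f) - self.d2(f))   # contrast-aware attention (Eq. 1)
        f_low, f_high = (1.0 - w_s) * f, w_s * f        # element-wise split (Eq. 2)
        return f_low, f_high

# Example: f_low, f_high = FrequencyDecomposition(256)(torch.randn(1, 256, 32, 32))
```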
local and global motion representation, which can be formulated as

F = \mathrm{MLP}\big(G(f^L), L(f^H)\big), \qquad (3)

where MLP(·) denotes the aggregation module, F is the aggregated feature map, and G(·) and L(·) indicate the global encoder and the local encoder, respectively. In general, G(f^L) and L(f^H) are used to extract the long-range and short-range connections from the low- and high-frequency parts, respectively. Then, MLP(·) is applied for the feature aggregation.

Figure 2: The proposed latent contour enhancement architecture. It mainly consists of a feature decomposition-based motion encoder and a latent contour enhancement module. The core of our contour enhancement adopts the D2oG operation that directly selects the large-scale contours in the latent space. "D" is a subtraction operation; "T" denotes the sigmoid function; "·" represents the dot product; "F" means a motion-encoded feature map; "F′" is a contour-enhanced feature map.

Figure 3: Highlight of the feature extraction in the proposed CEDFlow. Most methods extract motion features using a single feature encoder. In contrast, CEDFlow utilizes distinctively structured encoders to separately encode local and global motion, which are aggregated using an MLP.

Figure 4: A feature map visualization of frequency-based components. Histograms show the different feature distributions of f^L and f^H. The high-intensity pixels in f^H concentrate on areas with structural significance, i.e., shapes and regions with motion.

Figure 5: Two sets of input frame pairs and their corresponding feature maps, depicted in (a) and (b), respectively. The enhanced feature map F′ exhibits improved preservation of object contours compared to the corresponding layered feature map of RAFT (Teed and Deng 2020). Moreover, the saliency of large-scale contours is enhanced in F′, obtained from the feature map F using the D2oG contour selection method.

Latent Contour Enhanced Flow Estimation
To improve the motion reasoning of the DOFE, we propose a novel approach for latent contour selection and enhancement. Unlike traditional methods operating in the spatial or frequency domain, our approach focuses on contour selection and strengthening directly in the latent space. Rather than trivial contours, we specifically target large-scale contours, which are more critical to estimation reliability.

Large-scale Latent Contour Selection. In the DOFE computation, large-scale contours play an important role in constraining the motion correlation to consecutive image areas with the same motion. We propose using pre-defined Gaussian kernels to compute the difference between the extracted feature vector and its neighborhood, and thereby select these salient contour semantics. While Gaussian-like difference computation is commonly considered practical in the image spatial domain, we explore applying it directly on the feature embedding maps, where the network can learn the latent features. Specifically, we apply Gaussian blurring to the feature map F with different standard deviations σ_n and radius r to obtain the Gaussian-blurred feature maps F_{σ_n} as follows:

F_{\sigma_n} = F * G_n(r, \sigma_n), \quad n = 1, 2, 3. \qquad (4)

Instead of using the 1st-order Gaussian difference, we propose to use the 2nd-order Gaussian difference of the feature map as a weighting function,

\omega_\sigma = F_{\sigma_1} - 2F_{\sigma_2} + F_{\sigma_3}, \qquad (5)

where ω_σ represents the saliency of the feature vectors in F: a large ω_σ value indicates that the corresponding feature refers to more salient large-scale contour semantics.
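For concreteness, Eqs. 4–5 can be prototyped in a few lines of PyTorch. The depthwise-blur realization and helper names below are our own illustration, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F_t  # aliased to avoid clashing with the feature map F


def gaussian_kernel(radius: int, sigma: float) -> torch.Tensor:
    # 2D Gaussian kernel of size (2*radius+1)^2, normalized to sum to 1.
    coords = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()


def d2og_weights(feat: torch.Tensor, radius: int = 3,
                 sigmas=(3.0, 9.0, 27.0)) -> torch.Tensor:
    """2nd-order Gaussian difference (Eq. 5): w = F_s1 - 2*F_s2 + F_s3.

    feat: aggregated motion feature map F of shape (B, C, H, W).
    Returns a saliency weighting of the same shape.
    """
    _, C, _, _ = feat.shape
    blurred = []
    for s in sigmas:
        k = gaussian_kernel(radius, s).to(feat)            # (7, 7) for radius=3
        k = k.expand(C, 1, *k.shape).contiguous()           # depthwise kernel
        blurred.append(F_t.conv2d(feat, k, padding=radius, groups=C))
    f1, f2, f3 = blurred
    return f1 - 2 * f2 + f3                                 # Eq. 5
```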
By setting different σ_n and r, we can determine the scale and saliency of the contour selection. Empirically, a large r and a significant difference between the σ_n values filter out more trivial contours, leaving only large-scale, high-contrast contours. The suggested parameter settings for the FCDN and VBOF datasets are r = 3.0, resulting in a Gaussian kernel size of 7 × 7, and σ_1 = 3, σ_2 = 9, σ_3 = 27, respectively. A detailed comparison of different parameter settings can be found in the experiments section. In summary, by using Gaussian blurring with different standard deviations and radii, we compute the 2nd-order Gaussian difference of the feature map to enhance the saliency of large-scale contours. We call this D2oG contour selection, and it offers a robust and accurate optional process for incorporation into flow estimation.

Figure 6: The proposed double-pass filtering contour enhancement in the latent space.

Fig. 5 displays the encoded and aggregated motion feature map F. Observing this figure, it can be seen that the motion features have almost the same intensity. This implies that the weight or saliency of these features cannot be distinguished when computing the optical flow using the feature map F. However, after contour enhancement computed from F and ω_σ in the latent space, the large-scale contours become more visible than the other features in F′.

Latent Enhancement and Flow Estimation. After the large-scale latent contour selection, we present a latent space enhancement that enables a more precise DOFE. Double-pass filtering is applied on the salient contour weighting ω_σ to highlight the large-scale contours and alleviate the trivial high-contrast features. Instead of directly multiplying ω_σ with F, the double-pass filtering is performed in the latent space with sigmoid normalization. This process is shown in Fig. 6, where the enhanced large-scale contours are more visible while the trivial high-contrast contours are reduced. Eq. 6 shows how this filtering is applied to the latent feature map F:

F' = T\big(T(\omega_\sigma) \cdot F\big) \cdot F, \qquad (6)

where T(·) stands for the sigmoid normalization. Fig. 5 visually illustrates the proposed large-scale contour enhancement. We use the double-pass filtering to enhance large-scale contours (red arrows in the figure) and suppress trivial contours (black arrows). Since the proposed T(·)-based contour enhancement directly operates on the aggregated feature map F in the latent space, its computational cost is significantly lower than that of other spatial or frequency image processing operations. The proposed latent space operations enable end-to-end learning-based DOFE training and prediction.

To establish feature correspondences, we adopt the approach of previous successful work (Teed and Deng 2020) and compute the 4D correlation volume (4D-CV), representing the pixel-wise correspondence between the two feature maps of the input frame pair. The following equation constructs the visual similarity between all pairs of feature vectors in the two contour-enhanced features F′_1 and F′_2:

CV = \mathrm{correlation}(F'_1, F'_2). \qquad (7)

By pooling the last two dimensions of the original correlation volume, both large and small displacements of pixel correlation can be better encoded and searched using a pyramidal, multi-layered 4D-CV. The flow estimation module in CEDFlow is based on the flow update module of RAFT (Teed and Deng 2020).
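A minimal sketch of the enhancement in Eq. 6, together with a RAFT-style all-pairs correlation for Eq. 7, is given below. The √C normalization follows RAFT and is an assumption here, not something the text above specifies:

```python
import torch


def double_pass_enhance(feat: torch.Tensor, w_sigma: torch.Tensor) -> torch.Tensor:
    """Latent contour enhancement (Eq. 6): F' = T(T(w) * F) * F with T = sigmoid.
    Re-weighting F twice keeps values bounded while suppressing trivial contours."""
    first_pass = torch.sigmoid(w_sigma) * feat
    return torch.sigmoid(first_pass) * feat


def correlation_volume(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """4D all-pairs correlation (Eq. 7) between two (B, C, H, W) feature maps,
    in the style of RAFT; returns a (B, H, W, H, W) volume."""
    B, C, H, W = f1.shape
    v1 = f1.flatten(2)                                  # (B, C, H*W)
    v2 = f2.flatten(2)
    corr = torch.einsum('bci,bcj->bij', v1, v2) / C ** 0.5
    return corr.view(B, H, W, H, W)
```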
A GRU-based update operator iteratively refines the flow estimate by looking up values from the 4D-CV. We initialize the flow field p^0 to zero; the k-th iteration produces a flow update p^k_Δ, which is added to the current estimate: p^k = p^k_Δ + p^{k−1}. To compute the update p^k_Δ, we use the current flow estimate to retrieve correlation features from the correlation pyramid 4D-CV:

p^k_\Delta = \mathrm{GRU}(p^{k-1}, F_c) = (1 - \rho) \cdot p^{k-1} + \rho \cdot \phi(p^{k-1}, F_c). \qquad (8)

In Eq. (8), φ(·) is a Tanh-based activation of the current flow update increment, which jointly considers the context feature F_c (shown in Fig. 2) and the last flow estimate p^{k−1}. The parameter ρ is an automatically computed weighting factor that balances the update state and the reset state of the GRU flow estimation. Here, ρ is calculated by sequentially applying concatenation, convolution, and sigmoid activation on p^{k−1} and F_c:

\rho = \mathrm{Sigmoid} \circ \mathrm{Conv} \circ \mathrm{Concat}(p^{k-1}, F_c). \qquad (9)

A more detailed explanation of the GRU computation can be found in (Teed and Deng 2020) or in our code. To train the proposed model in a supervised manner, we employ a simple ℓ1 loss to constrain the differences between the predicted optical flow p^k and the corresponding ground truth p^{gt}:

\mathcal{L} = \sum_{k=1}^{K} \gamma^{K-k} \left\| p^k - p^{gt} \right\|_1. \qquad (10)

In our experiments, we set γ = 0.9 together with a large number of flow prediction iterations (K = 12), enabling better coarse-to-fine flow updating.

Experiments
Analysis of Different Parameter Settings
Since the latent D2oG operation is sensitive to the settings of the Gaussian kernel parameters, we first studied different combinations of the parameters σ_n and the radius r. As shown in Tab. 1, on the FCDN and VBOF (Fuji2) datasets, we first analyzed the performance of CEDFlow by switching r with a fixed σ_n = {3, 9, 27}. The best choice of the radius is r = 3.0, i.e., using a 7 × 7 kernel size, with which CEDFlow achieves mean EPE values of 1.08 and 13.94 on the FCDN and VBOF datasets, respectively. Further, in the σ_n analysis, we found that larger settings of σ_n, accompanied by a larger receptive field, lead to better performance in constructing long-range pixel correlation.

Table 1: EPE comparison of different parameter settings (trained on FCDN).

Parameter | Setting    | FCDN | VBOF (Fuji2)
r         | 1          | 1.27 | 14.22
r         | 2          | 1.17 | 13.99
r         | 3          | 1.08 | 13.94
r         | 4          | 1.21 | 14.15
σ_n       | (1, 3, 9)  | 1.17 | 14.14
σ_n       | (2, 6, 18) | 1.18 | 14.33
σ_n       | (3, 9, 27) | 1.08 | 13.94

However, increasing the receptive field is computationally expensive. For example, with a Gaussian kernel of r = 3.0, the network in CEDFlow consists of approximately 7.7 million parameters. If we increase the kernel size to r = 4.0, corresponding to a 9 × 9 kernel, the parameter count rises to 8.7 million, i.e., nearly 1 million more parameters than with r = 3.0. Additionally, with r = 4.0, each training iteration takes approximately 7.4% more time than with r = 3.0.

Comparison with the State of the Art
We evaluated CEDFlow against eight state-of-the-art methods that have achieved top-performing results on the Sintel (Butler et al. 2012) and KITTI (Menze, Heipke, and Geiger 2015) leaderboards: RAFT (Teed and Deng 2020), GMFlow (Xu et al. 2022a), GMFlowNet (Zhao et al. 2022), GMA (Jiang et al. 2021a), AGFlow (Luo et al. 2022c), KPAFlow (Luo et al. 2022a), SCV (Jiang et al. 2021b), and Flow1D (Xu et al. 2021). All models are fairly trained on the FCDN and Mixed (FCDN + VBOF) datasets.
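Both the supervision of Eq. 10 and the EPE metric reported throughout this section reduce to a few lines. The sketch below is our own illustration; following common RAFT-style implementations, the per-pixel ℓ1 error is averaged rather than summed:

```python
import torch


def sequence_loss(flow_preds, flow_gt, gamma=0.9):
    """L1 training loss of Eq. 10: later GRU iterations are weighted more
    heavily via gamma**(K - k). flow_preds is the list [p^1, ..., p^K]."""
    K = len(flow_preds)
    return sum(gamma ** (K - k) * (p - flow_gt).abs().mean()
               for k, p in enumerate(flow_preds, start=1))


def end_point_error(flow_pred, flow_gt):
    """Standard EPE metric: mean L2 distance between predicted and
    ground-truth 2D flow vectors, both shaped (B, 2, H, W)."""
    return torch.linalg.vector_norm(flow_pred - flow_gt, dim=1).mean()
```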
Please see details of the experimental implementation in the supplementary material.

Training on the FCDN Dataset. Tab. 2 presents the evaluation results of nine models on FCDN and VBOF (Fuji2 part only). Our CEDFlow achieved the best average end-point error (EPE) of 1.08 on FCDN (2nd column). CEDFlow outperforms the second-ranked AGFlow (Luo et al. 2022c) by about 7% in EPE (1.15 → 1.08), and outperforms GMFlowNet (Zhao et al. 2022) by nearly 30.7% in EPE (1.56 → 1.08). These results indicate that CEDFlow outperforms other models in solving the DOFE problem, thanks to the support of feature decomposition-based motion learning and latent space contour enhancement. In Tab. 2 (3rd and 4th columns), our proposed CEDFlow achieved the best performance on the VBOF (Fuji2) and VBOF (All) datasets, outperforming the eight state-of-the-art approaches. It improved by 3.2%, 4.3%, and 5.2% over GMA, GMFlow, and KPAFlow on Fuji2, respectively. AGFlow obtained the second-best scores, and RAFT and GMA ranked third. CEDFlow demonstrated excellent cross-data capabilities and superior performance.

Table 2: EPE comparison of different flow evaluations on the FCDN and VBOF datasets.

EPE                          | Trained on FCDN: FCDN | VBOF (Fuji2) | VBOF (All) | Trained on Mixed: FCDN | VBOF (Fuji2) | VBOF (All)
RAFT (Teed and Deng 2020)    | 1.23 | 14.20 | 21.84 | 1.38 | 7.34 | 8.89
GMFlowNet (Zhao et al. 2022) | 1.56 | 14.51 | 22.87 | 1.70 | 7.71 | 8.66
GMFlow (Xu et al. 2022a)     | 1.18 | 14.56 | 22.72 | 1.31 | 5.76 | 7.23
AGFlow (Luo et al. 2022c)    | 1.15 | 14.16 | 21.05 | 1.27 | 4.97 | 6.75
GMA (Jiang et al. 2021a)     | 1.18 | 14.40 | 21.77 | 1.26 | 4.90 | 6.81
KPAFlow (Luo et al. 2022a)   | 1.24 | 14.71 | 23.10 | 1.39 | 6.11 | 7.47
SCV (Jiang et al. 2021b)     | 1.29 | 14.96 | 24.13 | 1.27 | 6.48 | 7.76
Flow1D (Xu et al. 2021)      | 1.22 | 14.25 | 21.79 | 1.30 | 5.13 | 6.93
CEDFlow (Ours)               | 1.08 | 13.94 | 20.89 | 1.23 | 4.69 | 6.52

Training on the Mixed Datasets. In Tab. 2 (5th, 6th, and 7th columns), we trained all models on the Mixed (FCDN + VBOF) datasets; the flow estimation evaluations are then presented on the FCDN, Fuji2, and VBOF datasets, respectively. The proposed CEDFlow achieved EPE indices of 1.23 on FCDN, 4.69 on Fuji2, and 6.52 on VBOF, outperforming the other eight models. On the FCDN dataset, GMA (Jiang et al. 2021a) took second place with an EPE index of 1.26, 2.4% higher than CEDFlow's 1.23. CEDFlow outperforms GMFlowNet by nearly 28% in EPE, which indicates the effectiveness of CEDFlow in addressing the DOFE problem. On Fuji2, GMA also achieved the second-best performance with an EPE index of 4.90, close to CEDFlow. Due to the more complex composition of the Mixed dataset, models trained on it generally had higher EPE indices than models trained on FCDN only. In the 6th column of Tab. 2, we present a comparison on the entire VBOF dataset, which includes a wide range of scenarios with illumination changes. The CEDFlow model remains the best-performing one, with an EPE index of 6.52. The AGFlow approach follows closely with a score of 6.75. The RAFT and GMFlowNet models achieved the lowest scores, indicating that most state-of-the-art flow estimators are less effective under significant illumination variations.

Figure 7: Visual comparison of different flow estimations. In strong noise conditions, our CEDFlow outperforms state-of-the-art methods in terms of precision (1st row). Furthermore, the 2nd and 3rd rows demonstrate CEDFlow's distinct and accurate contour structures, closely resembling the ground truth (highlighted in boxes).
Visual Comparison. Fig. 7 visually compares our flow estimation method, CEDFlow, with the RAFT, GMA, and KPAFlow algorithms on the VBOF dataset (trained on FCDN). In low-light conditions, the results on three representative scenarios demonstrate that the proposed CEDFlow provides accurate and robust DOFE. The proposed CEDFlow outperforms the other algorithms in generating accurate contours: in the first row of Fig. 7, it can be observed that the results obtained from GMA and RAFT are not as accurate as those of the proposed CEDFlow method, especially in terms of the generated contours. For instance, the chair's cushion in the middle of the image appears jagged and blurry in the results obtained from GMA and RAFT, whereas CEDFlow generates a more precise contour closer to the ground truth. We also compared DOFE visual results on the FLIR ADAS dataset (available on GitHub), which includes many real-world driving scenes in the dark. Although FLIR ADAS has no optical flow ground truth, we can see that CEDFlow performs better in this challenging scenario with moving objects and dynamic lighting conditions, as shown in Fig. 8. In general, the proposed CEDFlow outperforms the state of the art in terms of precision under low-light conditions.

Figure 8: Visualization results on the FLIR ADAS dataset. CEDFlow demonstrates superior performance in handling complex motion.

Ablation Studies
Comparison when using Different Encoders. To validate the effectiveness of our proposed Dual-Branch Motion Encoder (DBME), we conducted an ablation experiment by switching encoders between the CEDFlow and RAFT frameworks. All tested models were trained on the FCDN dataset. As shown in Tab. 3, we replaced the original encoder of RAFT with the proposed DBME (2nd row). Furthermore, we added our Latent Contour Enhancement (LCE) module to the RAFT framework (3rd row). We selected the parameter r to achieve optimal performance. For the 4th row, we replaced the DBME in CEDFlow with the RAFT encoder. Tab. 3 demonstrates that the proposed DBME and LCE perform best in CEDFlow and are also effective in other flow estimators, e.g., RAFT.

Table 3: EPE comparisons when switching encoders between the RAFT and CEDFlow frameworks (trained on FCDN).

Framework              | FCDN | VBOF (Fuji2)
RAFT                   | 1.23 | 14.20
RAFT + DBME            | 1.14 | 14.13
RAFT + LCE             | 1.12 | 14.03
CEDFlow + RAFT encoder | 1.17 | 14.38
CEDFlow                | 1.08 | 13.94

Ablation with CEDFlow Components. A quantitative comparison is provided in Tab. 4, presenting the analysis of the DBME and LCE components. When removing the feature decomposition module (1st row of Tab. 4), we observe that the EPE index of CEDFlow increases by 9.6%, indicating a significant performance degradation; a larger increase in EPE signifies a greater impact of the removed module or component on performance. By removing the global or local encoder separately (2nd and 3rd rows), we demonstrate that the global encoder contributes more precision than the local encoder. The results in Tab. 4 (5th row) highlight the substantial contribution of our LCE module compared to the other modules. This further emphasizes the effectiveness of the proposed latent contour enhancement in improving flow estimation performance.

Computation Analysis
Parameters. In the 2nd column of Tab. 5, we compare the parameter capacity of the different state-of-the-art models. Our CEDFlow has 7.7 million parameters, the second largest model.
This is because CEDFlow employs the DBME, which encodes local and global motion features separately, and the additional parameters of the DBME have proved valuable for performance.

Table 4: Ablation analysis for different parts of the DBME and the LCE in the CEDFlow framework (EPE, trained on FCDN).

Module        | FCDN | VBOF (Fuji2) | VBOF (All)
w/o Decom.    | 1.19 | 14.70 | 23.13
w/o Glo. Enc. | 1.17 | 14.54 | 22.64
w/o Loc. Enc. | 1.15 | 14.23 | 21.88
w/o MLP       | 1.22 | 15.01 | 24.02
w/o LCE       | 1.26 | 14.79 | 23.44
Whole         | 1.08 | 13.94 | 20.89

Table 5: Comparison of computational costs with the state-of-the-art methods.

Models    | Param (M) | Time (ms) | Memory (GB)
RAFT      | 5.3 | 42  | 1.7
GMFlowNet | 9.3 | 112 | 3.4
GMFlow    | 4.7 | 67  | 1.8
AGFlow    | 5.6 | 46  | 1.9
GMA       | 5.9 | 63  | 1.8
KPAFlow   | 5.8 | 89  | 2.6
SCV       | 5.3 | 40  | 1.6
Flow1D    | 5.7 | 45  | 1.7
Ours      | 7.7 | 76  | 2.1

Runtime & Memory. We also show the runtime and memory requirements of the different models in Tab. 5. For input images at 736×480 resolution, our CEDFlow requires 76 ms of runtime and 2.1 GB of memory. Considering the significant improvement in precision, these computational costs are acceptable for dealing with the challenging DOFE problem.

Conclusion
This paper proposes a novel CEDFlow framework for dense optical flow estimation that addresses the challenges of low-light conditions. CEDFlow incorporates the Dual-Branch Motion Encoder (DBME) and Latent Contour Enhancement (LCE) modules to improve accuracy and robustness. The DBME captures finer details by utilizing its distinctively structured local and global motion feature encoders, while the LCE module enhances large-scale contours in the latent feature space. Experimental results on the FCDN and VBOF datasets demonstrate that CEDFlow outperforms state-of-the-art methods in terms of end-point error. Future research directions include exploring the application of CEDFlow to other vision tasks and investigating optimizations for further enhancing efficiency and accuracy.

Acknowledgments
This work was supported by the National Natural Science Foundation of China (62272383, 62371389, 62031023).

References
Butler, D. J.; Wulff, J.; Stanley, G. B.; and Black, M. J. 2012. A naturalistic open source movie for optical flow evaluation. In ECCV, 611–625. Springer.
Cai, B.; Xu, X.; Guo, K.; Jia, K.; Hu, B.; and Tao, D. 2017. A joint intrinsic-extrinsic prior model for retinex. In ICCV, 4000–4009.
Chan, K. C.; Wang, X.; Yu, K.; Dong, C.; and Loy, C. C. 2021. BasicVSR: The search for essential components in video super-resolution and beyond. In CVPR, 4947–4956.
Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; and Brox, T. 2015. FlowNet: Learning optical flow with convolutional networks. In ICCV, 2758–2766.
Guo, X.; Li, Y.; and Ling, H. 2016. LIME: Low-light image enhancement via illumination map estimation. TIP, 26(2): 982–993.
Horn, B. K.; and Schunck, B. G. 1981. Determining optical flow. Artificial Intelligence, 17(1-3): 185–203.
Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In CVPR, 7132–7141.
Jiang, S.; Campbell, D.; Lu, Y.; Li, H.; and Hartley, R. 2021a. Learning to estimate hidden motions with global motion aggregation. In ICCV, 9772–9781.
Jiang, S.; Lu, Y.; Li, H.; and Hartley, R. 2021b. Learning optical flow from a few matches. In CVPR, 16592–16600.
Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.-M.; Gu, J.; and Loy, C. C. 2021. Low-light image and video enhancement using deep learning: A survey. TPAMI, 44(12): 9396–9416.
Lucas, B. D.; Kanade, T.; et al. 1981. An iterative image registration technique with an application to stereo vision, volume 81. Vancouver.
Luo, A.; Yang, F.; Li, X.; and Liu, S. 2022a. Learning optical flow with kernel patch attention. In CVPR, 8906–8915.
Luo, A.; Yang, F.; Luo, K.; Li, X.; Fan, H.; and Liu, S. 2022b. Learning optical flow with adaptive graph reasoning. In AAAI, volume 36, 1890–1898.
Luo, A.; Yang, F.; Luo, K.; Li, X.; Fan, H.; and Liu, S. 2022c. Learning optical flow with adaptive graph reasoning. In AAAI, volume 36, 1890–1898.
Menze, M.; Heipke, C.; and Geiger, A. 2015. Joint 3D estimation of vehicles and scene flow. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2: 427.
Peng, J.; Gu, Y.; Wang, Y.; Wang, C.; Li, J.; and Huang, F. 2020. Dense scene multiple object tracking with box-plane matching. In ACM MM, 4615–4619.
She, D.; and Xu, K. 2022. An image-to-video model for real-time video enhancement. In ACM MM, 1837–1846.
Sun, D.; Yang, X.; Liu, M.-Y.; and Kautz, J. 2018. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR, 8934–8943.
Takumi, K.; Watanabe, K.; Ha, Q.; Tejero-De-Pablos, A.; Ushiku, Y.; and Harada, T. 2017. Multispectral object detection for autonomous vehicles. In ACM MM, 35–43.
Teed, Z.; and Deng, J. 2020. RAFT: Recurrent all-pairs field transforms for optical flow. In ECCV, 402–419. Springer.
Wang, L.-W.; Liu, Z.-S.; Siu, W.-C.; and Lun, D. P. 2020. Lightening network for low-light image enhancement. TIP, 29: 7984–7996.
Wang, W.; Wang, X.; Yang, W.; and Liu, J. 2022. Unsupervised face detection in the dark. TPAMI, 45(1): 1250–1266.
Wei, C.; Wang, W.; Yang, W.; and Liu, J. 2018. Deep Retinex decomposition for low-light enhancement. In BMVC.
Xu, H.; Yang, J.; Cai, J.; Zhang, J.; and Tong, X. 2021. High-resolution optical flow from 1D attention and correlation. In ICCV, 10498–10507.
Xu, H.; Zhang, J.; Cai, J.; Rezatofighi, H.; and Tao, D. 2022a. GMFlow: Learning optical flow via global matching. In CVPR, 8121–8130.
Xu, K.; Yang, X.; Yin, B.; and Lau, R. W. 2020. Learning to restore low-light images via decomposition-and-enhancement. In CVPR, 2281–2290.
Xu, X.; Wang, R.; Fu, C.-W.; and Jia, J. 2022b. SNR-aware low-light image enhancement. In CVPR, 17714–17724.
Zhang, M.; Zheng, Y.; and Lu, F. 2021. Optical flow in the dark. TPAMI.
Zhao, S.; Zhao, L.; Zhang, Z.; Zhou, E.; and Metaxas, D. 2022. Global matching with overlapping attention for optical flow estimation. In CVPR, 17592–17601.
Zheng, Y.; Zhang, M.; and Lu, F. 2020. Optical flow in the dark. In CVPR, 6749–6757.
Zhou, S.; Li, C.; and Change Loy, C. 2022. LEDNet: Joint low-light enhancement and deblurring in the dark. In ECCV, 573–589. Springer.
DanceAnyWay: Synthesizing Beat-Guided 3D Dances with Randomized Temporal Contrastive Learning Aneesh Bhattacharya1,2, Manas Paranjape1, Uttaran Bhattacharya3, Aniket Bera1 1Purdue University, USA 2IIIT Naya Raipur, India 3Adobe Research, USA {bhatta95, mparanja, aniketbera}@purdue.edu, [email protected] Abstract We present DanceAnyWay, a generative learning method to synthesize beat-guided dances of 3D human characters synchronized with music. Our method learns to disentangle the dance movements at the beat frames from the dance movements at all the remaining frames by operating at two hierarchical levels. At the coarser “beat” level, it encodes the rhythm, pitch, and melody information of the input music via dedicated feature representations only at the beat frames. It leverages them to synthesize the beat poses of the target dances using a sequence-to-sequence learning framework. At the finer “repletion” level, our method encodes similar rhythm, pitch, and melody information from all the frames of the input music via dedicated feature representations. It generates the full dance sequences by combining the synthesized beat and repletion poses and enforcing plausibility through an adversarial learning framework. Our training paradigm also enforces fine-grained diversity in the synthesized dances through a randomized temporal contrastive loss, which ensures different segments of the dance sequences have different movements and avoids motion freezing or collapsing to repetitive movements. We evaluate the performance of our approach through extensive experiments on the benchmark AIST++ dataset and observe improvements of about 7% −12% in motion quality metrics and 1.5% −4% in motion diversity metrics over the current baselines, respectively. We also conducted a user study to evaluate the visual quality of our synthesized dances. We note that, on average, the samples generated by our method were about 9−48% more preferred by the participants and had a 4−27% better five-point Likert-scale score over the best available current baseline in terms of motion quality and synchronization. Our source code and project page are available at https://github.com/aneeshbhattacharya/DanceAnyWay. Introduction Dancing is a central human behavior observed across societies and cultures (LaMothe 2019). Being simultaneously a form of expression and communication, the space of dance motions is dense, diverse, and, at the same time, temporally cohesive and structured (Tseng, Castellon, and Liu 2023). The complexity of dance motions and their pervasiveness in our socio-cultural fabric has led to extensive research on Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. generating dancing digital characters for applications such as character design (Mascarenhas et al. 2018), storyboard visualization for consumer media (Kucherenko et al. 2020; Watson et al. 2019), building metaverse tools (Omniverse 2021) and even advancing our understanding of the relationships between music and dance (Brown and Parsons 2008). Prior methods in dance generation can adapt to different dance genres but may encounter temporal inconsistencies (Fan, Xu, and Geng 2012) and motion freezing and instability (Li et al. 2021a). Using seed poses is a common approach to enforce plausibility (Li et al. 2021a; Zhuang et al. 2022; Li et al. 2020). However, they only provide the initial dance characteristics and become less relevant over time when generating long dance sequences. 
Diffusion-based approaches (Tseng, Castellon, and Liu 2023) offer better control in the generative process but come at the cost of slow inference speed and heavy parameter tuning for novel datasets. Other approaches tokenize the dance sequences into a finite, learnable set of quantized vectors (Siyao et al. 2022), which can generate long sequences with minimal tuning but trade off the fine-grained diversity of the generated dances. Different from these approaches, we make two key observations. First, dancers often exhibit bursts or drops of energy at the audio beats. Therefore, the dance steps at the audio beats provide a coarse structure of the dance. Second, we can forego tokenization and explicitly enforce temporal diversity in the generative process to get long dance sequences in the continuous space without freezing or collapsing to repetitive patterns. In this paper, we introduce our method DanceAnyWay (Fig. 1), built on these observations, to generate plausible 3D dances from audio. We learn the correlation between the dance motions and the audio at two temporal levels: a coarser beat level, which corresponds to the dance poses at the audio beat frames, and a finer repletion level, which corresponds to the dance poses at all the other frames. Further, we apply a randomized temporal contrastive loss between segments at the repletion level to enforce diversity of motion between segments that are arbitrarily far from each other. By explicitly learning the correlation between the audio and the dance poses at the beat frames, we can generate plausible beat pose sequences representing the underlying dance characteristics for the entirety of the audio. Given these beat poses, we can generate the remaining or repletion poses while ensuring they are sufficiently diverse.

Figure 1: DanceAnyWay: A two-stage hierarchical network that can generate beat-aligned and diverse, fine-grained 3D dances given audio. We render our results with Mixamo characters.

In summary, our main contributions are as follows:
• A temporally hierarchical learning method using sequence-to-sequence and generative adversarial learning to synthesize beat-aligned 3D dance sequences for digital characters synchronized with audio.
• A randomized temporal contrastive loss to generate fine-grained, diverse motions, particularly in the long term.
• Leveraging the spatial-temporal graph representation of the 3D human poses to efficiently learn both the localized (joint-level) and the macroscopic (body-level) movements for different dances.
• An end-to-end pipeline for audio-to-dance generation, which exhibits state-of-the-art performance on multiple quantitative and qualitative evaluations.

Related Work
We briefly review methods for human motion synthesis, particularly from audio inputs such as speech and music.

3D Human Motion-to-Motion Synthesis. 3D motion-to-motion synthesis is richly explored in computer vision and graphics. Classical approaches include kernel-based probability distributions (Galata, Johnson, and Hogg 2001; Pullen and Bregler 2000) to predict the most likely future poses given past poses, and motion graphs (Arikan and Forsyth 2002; Kovar, Gleicher, and Pighin 2008) to represent poses as nodes in a graph and transition between those poses according to various linking rules. These models often require significant manual tuning and do not allow for the incorporation of additional input modalities.
More recently, learning-based approaches have gained immense traction in this area through convolutional networks (Holden, Saito, and Komura 2016; Holden et al. 2015), recurrent networks (Fragkiadaki et al. 2015; Jain et al. 2015; Ghosh et al. 2017; Bütepage et al. 2017; kuang Chiu et al. 2018; Aksan, Kaufmann, and Hilliges 2019; Du, Vasudevan, and Johnson-Roberson 2019; Wang et al. 2019; Gopalakrishnan et al. 2019), generative adversarial networks (Ruiz, Gall, and Moreno-Noguer 2018), graph convolutional networks (Yan et al. 2019), and transformers (Aksan et al. 2020; Bhattacharya et al. 2021b). Current methods achieve high-quality performance on large-scale datasets and can generate diverse and realistic motions. However, these learning-based approaches are autoregressive and do not condition the motions on additional modalities such as audio.

3D Dance Motion Synthesis from Audio. Procedural methods for audio-to-dance synthesis use approaches such as motion graphs, where the audio rhythms are used to constrain the graph linking rules (Fan, Xu, and Geng 2012; Shiratori, Nakazawa, and Ikeuchi 2006; Pan et al. 2021; Yang et al. 2023; Aristidou et al. 2021; Chen et al. 2021). However, these approaches may suffer from temporal conflicts due to the differences in dance tempos. More recent approaches generate 3D dance motions using deep neural networks. LSTM-based methods (Tang, Jia, and Mao 2018; Yalta et al. 2018) can synthesize long, complex dance sequences by modeling the temporal dependencies in the motion data. GAN-based methods (Lee et al. 2019; Shiratori, Nakazawa, and Ikeuchi 2006) train a generator network to produce realistic dance sequences that match the distribution of a given dataset and a discriminator network to distinguish between the generated and real dance sequences. Transformer-based methods leverage self- and cross-attention mechanisms to capture long-range dependencies between the audio and the dances (Li et al. 2020, 2021a; Tseng, Castellon, and Liu 2023). To overcome the transformers' limitations in generating continuous-space sequences, some transformer variants condense the latent space of dances into a finite set of quantized vectors (Siyao et al. 2022). Other methods segregate the learning into two steps: first generating key poses and then interpolating between them (Li et al. 2022). They have also been paired with diffusion models (Tseng, Castellon, and Liu 2023) to enhance joint-level editing and motion in-betweening capabilities. Large-scale 3D MoCap dance datasets (Alemi, Françoise, and Pasquier 2017; Tang, Jia, and Mao 2018; Zhuang et al. 2022) have played a crucial role in the success of these methods. These datasets have been used to train and test the generative models. Additionally, 3D human models have been mapped to the motion data (Li et al. 2021b), leading to the generation of realistic and expressive dance motions. However, these methods can sometimes result in non-standard poses, regress to mean configurations without exhibiting animated movements due to the high dimensionality of long pose sequences, or fail to adapt to fine-grained dance motions due to quantization. For interpolation-based methods, any error in key pose generation gets propagated to the interpolation network during inference. To overcome these limitations, our method explicitly learns the beat poses to ensure long-term beat alignment and applies a randomized temporal contrastive loss between segments to ensure fine-grained diversity of motions.
We also use our generated beat poses only as control signals for interpolation, as a result of which the interpolation process can generate spatially and temporally plausible movements regardless of errors in the beat pose generation.

Figure 2: DanceAnyWay network architecture. DanceAnyWay consists of two stages, Beat Pose Synthesis (BPS) and Repletion Pose Synthesis (RPS), trained one after the other. BPS (top row, left) has a predictor architecture to generate the coarse beat poses, and RPS (bottom row) has a generative adversarial architecture to generate all the remaining poses with fine-grained detail, followed by a seq-to-seq trajectory predictor for the global root translations. To train our RPS, we propose an additional randomized temporal contrastive loss (top row, right) to enforce motion diversity. For completeness, we also expand our MFCC and Chroma encoders (top row, middle), which have the same architecture but different layer sizes.

3D Human Motion Synthesis from Other Modalities. Besides music, 3D motions are commonly generated using other modalities, such as speech and text. Co-speech gesture synthesis methods generate accompanying gestures for speech based on learned individual gesticulation patterns. These approaches aim to personalize the synthesis process by capturing the individual characteristics of the speaker (Ginosar et al. 2019), enhance generative capabilities using GAN-based approaches (Ferstl, Neff, and McDonnell 2019), enhance robustness by combining speech, text, and speaker identities in the inputs (Yoon et al. 2020), incorporate emotional cues from the audio and gesticulation patterns (Bhattacharya et al. 2021a), explicitly add rhythm-aware information (Ao et al. 2022), and use diffusion models to enable editability (Ao, Zhang, and Liu 2023). In our work, we leverage the audio beat information and the physiological dance movements, and use an adversarial framework to improve the plausibility of the generated dances.

Temporally Hierarchical Dance Synthesis
We aim to generate 3D pose sequences for dances given input audio. Our approach is to separately learn the structure of the dance described by the beat poses, or the poses at the beat frames of the audio, and the finer details of the dance described by the repletion poses, or the poses at the remaining frames. To this end, we develop a two-stage learning method consisting of Beat Pose Synthesis (BPS) followed by Repletion Pose Synthesis (RPS). In BPS, given a short sequence of seed beat poses and the audio, we generate the beat poses. In RPS, given all the seed poses, the beat poses following the seed pose duration, and the audio, we generate the remaining poses to complete the dance. Mathematically, we represent the pose at frame t as U_t = [u_t^{(1)}, …, u_t^{(J−1)}] ∈ ℝ^{(J−1)×3}, consisting of the unit line vectors denoting the J − 1 bones corresponding to the J body joints. We take in the audio as a raw waveform and process it into a feature sequence A = [a_1, …, a_T] ∈ ℝ^{D_A×T} for some feature dimension D_A and total temporal length T. We extract the beat frames from the audio using available beat detection methods and represent them as a set B = {beat frames in A}. These beat frames may or may not be equidistant in time.
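As one concrete possibility for this last step, the audio features and the beat set B can be obtained with librosa, which the implementation details later in the paper confirm is used for the MFCC, Chroma CENS, and beat computations. Tying the hop length to the pose frame rate is our own illustrative choice:

```python
import librosa


def audio_features_and_beats(path: str, fps: int = 10):
    """Extract MFCC and Chroma CENS feature sequences plus the beat-frame
    set B for an audio file, with one feature frame per pose frame."""
    y, sr = librosa.load(path)
    hop = sr // fps                                   # samples per pose frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, hop_length=hop)            # (n_mfcc, T)
    chroma = librosa.feature.chroma_cens(y=y, sr=sr, hop_length=hop)   # (12, T)
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop)
    return mfcc, chroma, set(beat_frames.tolist())    # the set B
```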
Our BPS takes in the audio features A and the initial seed beat pose sequence U_{B_S} = {U_f}_{f∈B_S}, where B_S ⊂ B consists of all the beat frames in B contained within the seed sequence length T_S ≪ T, and generates the beat poses corresponding to U_{−B_S} = {U_f}_{f∈B−B_S}. Our RPS takes in the audio features A, all the seed poses U_S = {U_f}_{f∈{1,…,T_S}}, and the generated beat poses corresponding to U_{−B_S}, and synthesizes the repletion poses corresponding to U_R = {U_f}_{f∈R}, where R = {1, …, T} − (B ∪ S). We first fully train our BPS and then use its generated outputs to fully train our RPS, followed by a trajectory predictor for the global root translations. We show the overview of our end-to-end pipeline in Fig. 2 and describe the individual components below.

Beat Pose Synthesis
Our Beat Pose Synthesis (BPS) network takes in the raw audio waveform ψ and the seed beat pose sequence U_{B_S}, and generates the beat poses Û_{−B_S}. It uses multiple feature encoders to extract rhythmic and semantic information from the audio and physiological information from the seed beat poses. It combines these features through a transformer-encoder-based generator to synthesize the beat poses.

Feature Encoders. We encode the audio and the seed pose sequences using separate encoder blocks. The audio encoder block consists of an MFCC encoder and a Chroma encoder. MFCCs naturally capture the human auditory response and are commonly used in tasks such as emotion recognition (Neiberg, Elenius, and Laskowski 2006) and speaker identification (Murty and Yegnanarayana 2006). In our work, we leverage the audio prosody and the vocal intonations (when present) captured by the MFCCs. An MFCC encoder M_B takes in the MFCCs and their first- and second-order derivatives and uses convolutional layers to learn D_M-dimensional latent feature sequences A_{M_B} ∈ ℝ^{D_M×|B|} from their localized inter-dependencies as

A_{M_B} = M_B\big(\mathrm{MFCC}(\psi); W_{M_B}\big), \qquad (1)

where W_{M_B} are the trainable parameters. Chroma CENS features capture the melody and pitch in audio, and we use a Chroma encoder C_B with convolutional layers to transform these Chroma CENS features into D_{C_B}-dimensional latent feature sequences A_{C_B} ∈ ℝ^{D_{C_B}×|B|} based on their localized inter-dependencies as

A_{C_B} = C_B\big(\mathrm{Chroma}(\psi); W_{C_B}\big), \qquad (2)

where W_{C_B} are the trainable parameters. We concatenate these two features into the audio features A_B as

A_B = [A_{M_B}; A_{C_B}] \in \mathbb{R}^{D_{A_B} \times |B|}, \qquad (3)

where D_{A_B} = D_M + D_{C_B}. For the pose encoder block, we adopt the pose encoder architecture of (Bhattacharya et al. 2021a) to learn the physiological variations in the dance motions represented by U_{B_S}. The pose encoder block P_B outputs latent pose features Q_B ∈ ℝ^{D_P×|B|} as

Q_B = P_B\big(U_{B_S}; W_{P_B}\big), \qquad (4)

where W_{P_B} are the trainable parameters.

Transformer-Encoder-Based Generator. We concatenate the latent features A_B and Q_B and pass them through a transformer encoder (TE) Θ_B with cross-attention between the two features to generate Û_{−B_S}, as

\hat{U}_{-B_S} = \Theta_B\big(A_B \oplus Q_B; W_{\Theta_B}\big), \qquad (5)

where ⊕ denotes concatenation and W_{Θ_B} are the trainable parameters. We note the use of a TE here, which generates the entire sequence at once. This is because the beat frames can be irregularly separated in time, and a traditional autoregressive decoder fails to learn these separations.
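As a sketch of how Eq. 5 can be realized — reading A_B ⊕ Q_B as concatenation along the feature dimension, with all layer sizes illustrative rather than the paper's settings — one could write:

```python
import torch
import torch.nn as nn


class BeatPoseGenerator(nn.Module):
    """Transformer-encoder generator for BPS (Eq. 5). All |B| beat poses are
    produced in a single forward pass; layer sizes here are assumptions."""
    def __init__(self, d_audio=36, d_pose=16, n_joints=24, n_heads=4, n_layers=6):
        super().__init__()
        d_model = d_audio + d_pose                       # concatenated feature width
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, (n_joints - 1) * 3)  # unit bone vectors

    def forward(self, audio_feats, pose_feats):
        # audio_feats: (batch, |B|, d_audio); pose_feats: (batch, |B|, d_pose)
        x = torch.cat([audio_feats, pose_feats], dim=-1)
        return self.head(self.encoder(x))                # (batch, |B|, (J-1)*3)
```

Emitting the whole sequence at once, rather than decoding autoregressively, is exactly what sidesteps the irregular beat spacing noted above.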
Repletion Pose Synthesis
In contrast to BPS, our Repletion Pose Synthesis (RPS) network generates a dense sequence of repletion poses, capturing the finer details. Traditional seq-to-seq approaches fail to capture these details and lead to mean regression. To overcome this, we opt for a generative adversarial approach using a generator and a discriminator. The generator takes in the raw audio waveform ψ, the full seed pose sequence U_S, all the generated beat poses Û_{−B_S}, and Gaussian noise ε_{R_E} for the encoder, and synthesizes the repletion poses Û_R. The discriminator learns to distinguish between the ground truth and the synthesized pose sequences based on their physiological features; the generator eventually produces dance motions that the discriminator cannot distinguish from the ground truth, leading to plausible synthesized dances.

Figure 3: t-SNE plot of samples from the RPS latent decoder space. Distribution of the features of the m-length segments in Z_R for 100 random samples (each shown in a different color) in AIST++ (Li et al. 2021b), after training with (right) and without (left) our RTC loss. Clustering all the sample segments using the RTC loss is necessary to generate diverse motions.

Generator. The RPS generator consists of encoder blocks (similar to the BPS network), followed by a transformer encoder-decoder (TE-TD) architecture with cross-attention. Specifically, we use an MFCC encoder M_R, a Chroma encoder C_R, and a pose encoder P_R, similar in architecture to their BPS counterparts but trained separately with parameters W_{M_R}, W_{C_R}, and W_{P_R}, to obtain the counterpart latent features A_R and Q_R. Different from BPS, we include ε_{R_E} as an additional input feature and then use the TE Θ_R to obtain encoded features E_R, as

E_R = \Theta_R\big(A_R \oplus Q_R \oplus \epsilon_{R_E}; W_{\Theta_R}\big), \qquad (6)

where W_{Θ_R} are the trainable parameters. We decode E_R first into a latent space and then into the motion space, both employing transformer decoders (TDs), as

Z_R = \Phi^Z_R\big(E_R; W_{\Phi^Z_R}\big), \qquad (7)

where Z_R = {Z_f ∈ ℝ^{D_Z}}_{f∈R} are the D_Z-dimensional latent features and W_{Φ^Z_R} are the trainable parameters. In the subsequent motion decoding phase, we use a TD Φ^U_R, as

\hat{U}_R = \Phi^U_R\big(Z_R; W_{\Phi^U_R}\big), \qquad (8)

where W_{Φ^U_R} are the trainable parameters. We perform latent decoding into Z_R, of the same sequence length as Û_R, to efficiently apply temporal diversity constraints on the smooth, equivalent space of Z_R rather than on the non-smooth space of the unit line vector sequences Û_R.
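The two-stage decoding of Eqs. 7–8 could look as follows in PyTorch. The query construction and layer sizes are illustrative assumptions; the key point is that the latent sequence Z_R is returned alongside the poses so the diversity constraint can act on it:

```python
import torch.nn as nn


class RepletionDecoder(nn.Module):
    """Sketch of the RPS decoding chain (Eqs. 7-8): encoded features E_R are
    first decoded into a smooth latent sequence Z_R, and only then into
    unit-bone-vector poses. Dimensions are illustrative, not the paper's."""
    def __init__(self, d_model=64, n_joints=24):
        super().__init__()
        self.to_latent = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=8)
        self.to_motion = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=4)
        self.out = nn.Linear(d_model, (n_joints - 1) * 3)

    def forward(self, enc, queries):
        # enc: encoded features E_R; queries: per-frame decoder queries (both
        # (batch, |R|, d_model)).
        z = self.to_latent(queries, enc)                 # latent sequence Z_R (Eq. 7)
        poses = self.out(self.to_motion(z, enc))         # repletion poses (Eq. 8)
        return poses, z                                  # z is reused by the RTC loss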
Discriminator. Our discriminator takes in 3D dance pose sequences Ũ_R, which can be either the ground truth U_R or the generated Û_R, and uses a pose encoder P_D (same architecture as P_R) to learn latent pose features Q̃_D ∈ ℝ^{D_P×|R|} based on the physiological variations in the dances, as

\tilde{Q}_D = P_D\big(\tilde{U}_R; W_{P_D}\big), \qquad (9)

where W_{P_D} are the trainable parameters. It then uses a bidirectional GRU (BiGRU) of latent dimension H_D to learn the temporal inter-dependencies in the pose features, followed by a set of FC layers to compress the features into scalar variables and a sigmoid function to compute binary class probabilities p_D ∈ [0, 1], as

p_D = \sigma\Big(\mathrm{FC}_D\big(\mathrm{BiGRU}_D(\tilde{Q}_D; W_{L_D}); W_{FC_D}\big)\Big), \qquad (10)

where we only consider the output of the BiGRU and not its hidden states, W_{L_D} and W_{FC_D} denote the trainable parameters, and σ(·) denotes the sigmoid function.

Trajectory Predictor. We complete the dances by learning the root trajectory from the generated poses. We use a TE-TD architecture that takes in Û_R, learns latent pose features Q_T ∈ ℝ^{D_P×|R|} through a TE Θ_T with trainable parameters W_{Θ_T}, and decodes them autoregressively through a TD Φ_T with trainable parameters W_{Φ_T} to predict the 3D world coordinates of the root, r̂ ∈ ℝ^{3×|R|}.

Training and Testing
We detail the loss functions we use for training our network, the implementation details, and the testing procedure.

Figure 4: Beat alignment. Kinetic velocities over time for one ground truth (GT) motion and the corresponding generated results. Our method has more peaks and valleys at the beat frames, indicating better alignment with the audio.

Training Loss Functions
We use pose and leg motion losses to first train our BPS. We then use our proposed randomized temporal contrastive (RTC) loss, pose and leg motion losses, and adversarial losses to train our RPS. We describe our RTC loss below and provide details of the other losses in our appendix.

Randomized Temporal Contrastive (RTC) Loss. While the transformer is currently the state of the art for sequence generation (Vaswani et al. 2017), it can lead to freezing and mode collapse for motion sequences such as dances (Li et al. 2021b), where the sequence variables lie in an infinite, continuous space rather than in a finite set of quantized tokens. To address these issues, we consider overlapping segments of length m within each sequence with a sliding length d, and enforce diversity across these segments. We choose a segment n at random and obtain the non-overlapping segment n̄ with the minimum cosine similarity to it, as

\bar{n} = \arg\min_{x \in N} \left| \mathrm{cossim}(U_n, U_x) \right|, \qquad (11)

where N is the set of \lceil (|R| - m)/d \rceil segments. We then compute our RTC loss on the RPS latent decoder space (Eqn. 7) as

\mathcal{L}_{RTC} = \left| \mathrm{cossim}(Z_n, Z_{\bar{n}}) \right|. \qquad (12)

This ensures that the segments of our generated sequences are as temporally well-separated as the corresponding training data, enforcing diversity and avoiding freezing or collapse to repetitive motions. The random choice of n is necessary as it prevents the network from memorizing segment positions and makes it focus on all the segments across the training epochs in an expected sense. Using the smooth space of the latent decoder sequence Z instead of the non-smooth motion space of Û is also necessary, as it enables stable backpropagation. Our RTC loss thus enables the transformer architecture to operate reliably on continuous sequences. This differs from the commonly used alternative of vector quantization (VQ) followed by tokenized sequence generation (van den Oord, Vinyals, and Kavukcuoglu 2017), which limits the generative power to the finite set of quantized vectors. While some methods improve on the conventional VQ approach by learning separate upper- and lower-body representations (Siyao et al. 2022), their combined representations cannot encompass all possible motions in the full-body motion space.
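Concretely, Eqs. 11–12 reduce to a few lines. In this sketch (our own illustration, not the released code), the partner segment is selected on the pose sequence as in Eq. 11, while the loss itself is taken on the latent sequence:

```python
import torch
import torch.nn.functional as F


def rtc_loss(u: torch.Tensor, z: torch.Tensor, m: int = 25, d: int = 5):
    """Randomized temporal contrastive loss (Eqs. 11-12).
    u: pose sequence (T, D_u), used only to pick the partner segment n_bar;
    z: latent decoder sequence Z_R (T, D_z), on which the loss is computed."""
    T = u.shape[0]
    starts = list(range(0, T - m + 1, d))                      # sliding segments
    u_segs = torch.stack([u[s:s + m].flatten() for s in starts])
    z_segs = torch.stack([z[s:s + m].flatten() for s in starts])
    n = torch.randint(len(starts), (1,)).item()                # random anchor (Eq. 11)
    sims = F.cosine_similarity(u_segs[n:n + 1], u_segs).abs()
    for i, s in enumerate(starts):                             # exclude overlaps with n
        if abs(s - starts[n]) < m:
            sims[i] = float('inf')
    n_bar = sims.argmin().item()
    # Eq. 12: |cossim(Z_n, Z_n_bar)|, minimized during training.
    return F.cosine_similarity(z_segs[n:n + 1], z_segs[n_bar:n_bar + 1]).abs().squeeze()
```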
Implementation Details
We train our network using 7-second dance clips sampled at 10 fps, i.e., with T = 70, and use a seed pose length T_S = 20. For our RTC loss, we use segments of length m = 25 with a sliding window length d = 5. We use Librosa (Brian McFee et al. 2015) to extract the MFCC and the Chroma CENS features and to compute the beat frames. We use a maximum of |B| = 20 beat frames and |B_S| = 3 seed beat frames. We use D_M = 32, D_{C_B} = 4, D_{C_R} = 6, and D_P = 16 for Θ_B, Θ_R, and Θ_T. Θ_B and Θ_T have 4 heads and 6 blocks, while Θ_R has 8 heads and 6 blocks, Φ^Z_R has 3 heads and 8 blocks, Φ^U_R has 3 heads and 4 blocks, and Φ_T has 1 head and 8 blocks. For BPS, we use the Adam optimizer (Kingma and Ba 2014) with β_1 = 0.5, β_2 = 0.99, a mini-batch size of 8, a learning rate (LR) of 1e−4, and train for 500 epochs. For RPS, we use the Adam optimizer with β_1 = 0.5, β_2 = 0.99, a mini-batch size of 8, an LR of 1e−4 for both the generator and the discriminator, and train for 250 epochs. For our trajectory predictor, we use the Adam optimizer with β_1 = 0.8, β_2 = 0.99, a mini-batch size of 8, a learning rate of 1e−5, and train for a total of 700 epochs. Training our BPS, RPS, and trajectory predictor takes 6, 16, and 3 hours, respectively, on an NVIDIA A100 GPU.

Figure 5: Visualizations on AIST++ (Li et al. 2021b). Sampled frames in a left-to-right sequence for one test sample. Our generated samples are better aligned with the beats, more diverse, and have more plausible fine-grained details.

Inference
During inference, we provide the input audio and the seed poses to our network. BPS generates the beat poses in one prediction step. RPS generates the entire dance and the root trajectories. To render our generated dances on human meshes, we follow the approach of (Li et al. 2021a) to apply them to Mixamo characters (https://www.mixamo.com).

Experiments and Results
We describe the benchmark dataset we evaluate on and our quantitative and qualitative performance.

Dataset
We use the benchmark AIST++ dataset (Li et al. 2021b), a large-scale 3D dance dataset of paired music and pose sequences spanning ten dance genres. We use the official dataset splits for training and testing our model.

Evaluation Metrics
We use the following common evaluation metrics (Li et al. 2021a; Siyao et al. 2022; Tseng, Castellon, and Liu 2023).

Figure 6: DanceAnyWay ablations on AIST++ (Li et al. 2021b). Sampled frames in a left-to-right sequence for one test sample. We highlight issues such as misalignment with beats (row 3), lack of motion diversity (row 4), and motion jitter (row 5) with red boxes.

Fréchet Inception Distance (FID). Following (Li et al. 2021a; Siyao et al. 2022), we compute the FID on both the kinetic (k) and the geometric (g) features to measure the generated motion quality relative to the ground truth.

Motion Diversity (MD). Following (Li et al. 2021a; Siyao et al. 2022; Tseng, Castellon, and Liu 2023), we also compute the MD on both the kinetic (k) and the geometric (g) features, to measure the diversity of the generated dances relative to the ground truth.

Beat Alignment Score (BAS). The BAS measures the temporal alignment of the dances with the audio beats and is essential for understanding their rhythmic quality. We use the BAS implementation of prior work (Li et al. 2021a; Siyao et al. 2022; Tseng, Castellon, and Liu 2023); a sketch of the underlying computation follows.
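Our reading of that implementation is the Gaussian-kernel form below; treat it as a hedged sketch rather than the exact released code, and note that the kinematic beats (local minima of joint velocity) and the value of sigma are assumptions here:

```python
import numpy as np


def beat_alignment_score(music_beats, kinetic_beats, sigma=3.0):
    """Beat Alignment Score in the style of AIST++ (Li et al. 2021a): for
    every music beat, the distance to the nearest kinematic beat is mapped
    through a Gaussian kernel and the results are averaged."""
    music_beats = np.asarray(music_beats, dtype=float)
    kinetic_beats = np.asarray(kinetic_beats, dtype=float)
    dists = np.abs(music_beats[:, None] - kinetic_beats[None, :]).min(axis=1)
    return float(np.mean(np.exp(-dists ** 2 / (2 * sigma ** 2))))
```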
Physical Foot Contact Score (PFC). We also report the PFC, introduced by (Tseng, Castellon, and Liu 2023) to measure the physical plausibility of the foot movements w.r.t. the ground plane by measuring foot sliding.

Quantitative Evaluations
We compare our proposed method, DanceAnyWay, with the baseline methods of Dance Transformers (Li et al. 2020), DanceNet (Zhuang et al. 2022), DanceRevolution (Huang et al. 2021), FACT (Li et al. 2021a), Bailando (Siyao et al. 2022), and EDGE (Tseng, Castellon, and Liu 2023). We also evaluate three ablated versions of our method: without the BPS network, without the RTC loss, and with a fixed "reference" segment instead of randomization in the RTC loss. We report all the results in Table 1.

Table 1: Quantitative evaluation on AIST++ (Li et al. 2021b). × = metric not calculated; arrows give the direction of better values: ↑ = higher, ↓ = lower, → = closer to ground truth.

Method   | FIDk ↓ | FIDg ↓ | MDk → | MDg → | BAS ↑ | PFC ↓
GT       | 17.10  | 10.60  | 10.61 | 7.48  | 0.24  | 0.32
Dnc. Tf. | 86.43  | 43.46  | 6.85  | 3.32  | 0.16  | ×
DncNet   | 69.18  | 25.49  | 2.86  | 2.85  | 0.14  | ×
DncRev.  | 73.42  | 25.92  | 3.52  | 4.87  | 0.19  | ×
FACT     | 35.35  | 22.11  | 10.85 | 6.14  | 0.22  | 2.25
B'lando  | 28.16  | 9.62   | 7.92  | 7.72  | 0.23  | 1.75
EDGE     | 20.55  | 9.49   | 10.58 | 7.62  | 0.27  | 1.65
DAW      | 17.98  | 9.42   | 10.62 | 7.51  | 0.33  | 0.83
−BPS     | 18.64  | 9.73   | 7.26  | 4.98  | 0.22  | 1.79
−RTC     | 18.54  | 9.95   | 6.91  | 4.81  | 0.26  | 1.05
−rand.   | 18.82  | 11.08  | 7.72  | 5.28  | 0.27  | 2.80

Comparison With Baselines. Compared to the best baseline, EDGE (Tseng, Castellon, and Liu 2023), our FIDk and FIDg scores are about 12% and 7% better, respectively; our MDk and MDg scores are about 4% and 1.5% better, respectively; and our BAS and PFC improve by about 22% and 50%, respectively. We further demonstrate the better beat alignment of our generated dances for a test sample in Fig. 4 and visualize snapshots from the generated sequences in Fig. 5.

Comparison With the Ablated Version Without BPS. In this ablation, we train only the RPS network with audio and seed poses to synthesize the dance sequences. Without BPS, the RPS network loses alignment with the audio beats as time progresses, leading to a lower BAS (Table 1, row 9). These results show the importance of BPS supplying the necessary beat information for long-term motion synthesis.

Comparison With the Ablated Version Without the RTC Loss. In this ablation, we remove the RTC loss during training. We observe that the motion diversity drops rapidly as time progresses, and the network becomes susceptible to motion freezing and looping over limited motions after a few time steps, leading to lower MD (Table 1, row 10). This also corroborates how the network learns the RPS latent decoder space, consisting of the feature sequences Z_R (Eqn. 7), with and without the RTC loss (Fig. 3). Using the RTC loss enables the network to avoid mode collapse and enforce diversity by clustering the features at different time steps within each sequence. Without the RTC loss, the network can still generate novel motions in sporadic bursts if it uses BPS. However, the overall motion diversity is limited, as we visualize on a random test sample in Fig. 6, row 4.

Comparison With the Ablated Version Without Randomization of the RTC Loss. In this ablation, we assign a fixed "reference" segment at the beginning of the sequence and compute the RTC loss w.r.t. this segment. The resulting synthesized dances are unstable as time progresses, changing the joint positions abruptly in an attempt to diversify from the reference segment. Going to the other extreme, minimizing the RTC loss across all segment pairs in each sequence leads to even higher temporal instability. To summarize, the lack of randomization leads to a higher FID of the motions (Table 1, row 11) and significant jitter (Fig. 6, row 5).
Perceptual Study
We evaluate the perceived performance of our generated dances through a perceptual study with human participants. For each participant, we select eight random audios from the AIST++ (Li et al. 2021b) test set and generate the corresponding dances using our method and the best-performing baseline methods of FACT (Li et al. 2021a), Bailando (Siyao et al. 2022), and EDGE (Tseng, Castellon, and Liu 2023). We show the participants these generated dances and the corresponding ground truth dances – five dances in total for each audio – in a random order unknown to them. We ask them to rate each dance for each audio on a five-point Likert scale on two aspects: motion quality and synchronization with the audio. To reduce inter-annotator variance, we also provide them with guidelines on which aspects of the dances to focus on when assigning the Likert-scale scores. We detail these guidelines in our appendix.

We report results on 31 responses to our perceptual study, discounting responses that failed our validation checks. Nine participants identified as female, 20 as male, one as non-binary, and one did not disclose their gender. Sixteen participants were between the ages of 18 and 24, 13 between 25 and 35, and 2 above 35. We report the mean Likert-scale scores for the methods and the ground truth in Table 2, and show the distribution of responses in Fig. 7.

Table 2: Perceptual study scores. Mean preferences on samples generated from AIST++ (Li et al. 2021b) based on the quality of the dances and their synchronization with the audio.

        | GT   | DAW  | FACT | Bailando | EDGE
Quality | 3.66 | 3.58 | 2.86 | 2.23     | 3.36
Sync    | 3.61 | 3.61 | 2.93 | 2.37     | 3.60

Figure 7: Perceptual study response distributions. Distributions of the Likert-scale scores for all the methods and the ground truth on motion quality (top) and synchronization with the audio (bottom). EDGE and our method receive the most responses of 3 or above among the methods.

For motion quality, on average, participants preferred our generated dance motions 14%, 27%, and 4% more compared to FACT, Bailando, and EDGE, respectively. They also marked our generated dances 3 or more in motion quality for 82% of the samples, compared to 59% for FACT, 34% for Bailando, and 73% for EDGE. For synchronization, on average, participants preferred our generated dance motions 14%, 25%, and 0.2% more compared to FACT, Bailando, and EDGE, respectively. They also marked our generated dances 3 or more in synchronization for 84% of the samples, compared to 61% for FACT, 43% for Bailando, and 85% for EDGE.

Conclusion and Future Work
We have presented a novel learning method to synthesize beat-aligned, long-term 3D dances from audio. Through extensive quantitative and qualitative evaluations, we have demonstrated the state-of-the-art performance of our method on a benchmark dance dataset. In the future, we plan to extend our method to explicitly understand different dance styles and make the generation more controllable. We also plan to incorporate dancer-specific capabilities and human-human and human-object interactions.

References
Aksan, E.; Kaufmann, M.; Cao, P.; and Hilliges, O. 2020. A Spatio-temporal Transformer for 3D Human Motion Prediction. arXiv.
Aksan, E.; Kaufmann, M.; and Hilliges, O. 2019. Structured Prediction Helps 3D Human Motion Modelling. In ICCV, 7143–7152.
Alemi, O.; Françoise, J.; and Pasquier, P. 2017. GrooveNet: Real-time music-driven dance movement generation using artificial neural networks. Networks, 8(17): 26.
Ao, T.; Gao, Q.; Lou, Y.; Chen, B.; and Liu, L. 2022. Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings. ACM Trans. Graph., 41(6).
Ao, T.; Zhang, Z.; and Liu, L. 2023. GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents. ACM Trans. Graph. Arikan, O.; and Forsyth, D. 2002. Interactive motion generation from examples. Proceedings of the 29th annual conference on Computer graphics and interactive techniques. Aristidou, A.; Yiannakidis, A.; Aberman, K.; Cohen-Or, D.; Shamir, A.; and Chrysanthou, Y. 2021. Rhythm is a dancer: Music-driven motion synthesis with global structure. arXiv preprint arXiv:2111.12159. Bhattacharya, U.; Childs, E.; Rewkowski, N.; and Manocha, D. 2021a. Speech2AffectiveGestures: Synthesizing CoSpeech Gestures with Generative Adversarial Affective Expression Learning. In Proceedings of the 29th ACM International Conference on Multimedia, MM ’21. New York, NY, USA: Association for Computing Machinery. Bhattacharya, U.; Rewkowski, N.; Banerjee, A.; Guhan, P.; Bera, A.; and Manocha, D. 2021b. Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR). IEEE. Brian McFee; Colin Raffel; Dawen Liang; Daniel P.W. Ellis; Matt McVicar; Eric Battenberg; and Oriol Nieto. 2015. librosa: Audio and Music Signal Analysis in Python. In Kathryn Huff; and James Bergstra, eds., Proceedings of the 14th Python in Science Conference, 18 – 24. Brown, S.; and Parsons, L. M. 2008. The Neuroscience of Dance. Scientific American, 299(1): 78–83. B¨utepage, J.; Black, M. J.; Kragic, D.; and Kjellstr¨om, H. 2017. Deep Representation Learning for Human Motion Prediction and Classification. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1591– 1599. Chen, K.; Tan, Z.; Lei, J.; Zhang, S.-H.; Guo, Y.-C.; Zhang, W.; and Hu, S.-M. 2021. Choreomaster: choreographyoriented music-driven dance synthesis. ACM Transactions on Graphics (TOG), 40(4): 1–13. Du, X.; Vasudevan, R.; and Johnson-Roberson, M. 2019. Bio-LSTM: A Biomechanically Inspired Recurrent Neural Network for 3-D Pedestrian Pose and Gait Prediction. IEEE Robotics and Automation Letters, 4(2): 1501–1508. Fan, R.; Xu, S.; and Geng, W. 2012. Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis. IEEE Transactions on Visualization and Computer Graphics, 18(3): 501–515. Ferstl, Y.; Neff, M.; and McDonnell, R. 2019. MultiObjective Adversarial Gesture Generation. In Motion, Interaction and Games, MIG ’19. New York, NY, USA: Association for Computing Machinery. ISBN 9781450369947. Fragkiadaki, K.; Levine, S.; Felsen, P.; and Malik, J. 2015. Recurrent Network Models for Human Dynamics. 2015 IEEE International Conference on Computer Vision (ICCV), 4346–4354. Galata, A.; Johnson, N.; and Hogg, D. 2001. Learning variable-length Markov models of behavior. Computer Vision and Image Understanding, 81(3): 398–413. Ghosh, P.; Song, J.; Aksan, E.; and Hilliges, O. 2017. Learning Human Motion Models for Long-Term Predictions. In 2017 International Conference on 3D Vision (3DV), 458– 466. Ginosar, S.; Bar, A.; Kohavi, G.; Chan, C.; Owens, A.; and Malik, J. 2019. Learning Individual Styles of Conversational Gesture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Gopalakrishnan, A.; Mali, A.; Kifer, D.; Giles, L.; and Ororbia, A. G. 2019. A Neural Temporal Model for Human Motion Prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Holden, D.; Saito, J.; and Komura, T. 2016. A Deep Learning Framework for Character Motion Synthesis and Editing. ACM Trans. 
Graph., 35(4). Holden, D.; Saito, J.; Komura, T.; and Joyce, T. 2015. Learning Motion Manifolds with Convolutional Autoencoders. In SIGGRAPH Asia 2015 Technical Briefs, SA ’15. New York, NY, USA: Association for Computing Machinery. ISBN 9781450339308. Huang, R.; Hu, H.; Wu, W.; Sawada, K.; Zhang, M.; and Jiang, D. 2021. Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Jain, A.; Zamir, A. R.; Savarese, S.; and Saxena, A. 2015. Structural-RNN: Deep Learning on Spatio-Temporal Graphs. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5308–5317. Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. Kovar, L.; Gleicher, M.; and Pighin, F. 2008. Motion Graphs. In ACM SIGGRAPH 2008 Classes, SIGGRAPH ’08. New York, NY, USA: Association for Computing Machinery. ISBN 9781450378451. kuang Chiu, H.; Adeli, E.; Wang, B.; Huang, D.-A.; and Niebles, J. C. 2018. Action-Agnostic Human Pose Forecasting. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 1423–1432. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 790 Kucherenko, T.; Jonell, P.; van Waveren, S.; Henter, G. E.; Alexandersson, S.; Leite, I.; and Kjellstr¨om, H. 2020. Gesticulator: A Framework for Semantically-Aware SpeechDriven Gesture Generation. In Proceedings of the 2020 International Conference on Multimodal Interaction, ICMI ’20, 242–250. New York, NY, USA: Association for Computing Machinery. ISBN 9781450375818. LaMothe, K. 2019. The dancing species: how moving together in time helps make us human. Aeon, June, 1. Lee, H.-Y.; Yang, X.; Liu, M.-Y.; Wang, T.-C.; Lu, Y.-D.; Yang, M.-H.; and Kautz, J. 2019. Dancing to Music. In Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alch´e-Buc, F.; Fox, E.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Li, B.; Zhao, Y.; Zhelun, S.; and Sheng, L. 2022. DanceFormer: Music Conditioned 3D Dance Generation with Parametric Motion Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2): 1272–1279. Li, J.; Yin, Y.; Chu, H.; Zhou, Y.; Wang, T.; Fidler, S.; and Li, H. 2020. Learning to Generate Diverse Dance Motions with Transformer. Li, R.; Yang, S.; Ross, D. A.; and Kanazawa, A. 2021a. AI Choreographer: Music Conditioned 3D Dance Generation with AIST++. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 13381–13392. Los Alamitos, CA, USA: IEEE Computer Society. Li, R.; Yang, S.; Ross, D. A.; and Kanazawa, A. 2021b. Learn to Dance with AIST++: Music Conditioned 3D Dance Generation. arXiv:2101.08779. Mascarenhas, S.; Guimar˜aes, M.; Prada, R.; Dias, J.; Santos, P. A.; Star, K.; Hirsh, B.; Spice, E.; and Kommeren, R. 2018. A Virtual Agent Toolkit for Serious Games Developers. In 2018 IEEE Conference on Computational Intelligence and Games (CIG), 1–7. Murty, K. S. R.; and Yegnanarayana, B. 2006. Combining evidence from residual phase and MFCC features for speaker recognition. IEEE Signal Processing Letters, 13(1): 52–55. Neiberg, D.; Elenius, K.; and Laskowski, K. 2006. Emotion recognition in spontaneous speech using GMMs. In Ninth international conference on spoken language processing. Omniverse, N. 2021. NVIDIA Omniverse, https://www.nvidia.com/en-us/omniverse/. Pan, J.; Wang, S.; Bai, J.; and Dai, J. 2021. 
Diverse Dance Synthesis via Keyframes with Transformer Controllers. Computer Graphics Forum, 40(7): 71–83. Pullen, K.; and Bregler, C. 2000. Animating by multi-level sampling. In Proceedings Computer Animation 2000, 36– 42. Ruiz, A. H.; Gall, J.; and Moreno-Noguer, F. 2018. Human Motion Prediction via Spatio-Temporal Inpainting. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 7133–7142. Shiratori, T.; Nakazawa, A.; and Ikeuchi, K. 2006. Dancingto-Music Character Animation. Computer Graphics Forum, 25(3): 449–458. Siyao, L.; Yu, W.; Gu, T.; Lin, C.; Wang, Q.; Qian, C.; Loy, C. C.; and Liu, Z. 2022. Bailando: 3D Dance Generation by Actor-Critic GPT With Choreographic Memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11050–11059. Tang, T.; Jia, J.; and Mao, H. 2018. Dance with Melody: An LSTM-Autoencoder Approach to Music-Oriented Dance Synthesis. In Proceedings of the 26th ACM International Conference on Multimedia, MM ’18, 1598–1606. New York, NY, USA: Association for Computing Machinery. ISBN 9781450356657. Tseng, J.; Castellon, R.; and Liu, K. 2023. EDGE: Editable Dance Generation From Music. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 448–458. van den Oord, A.; Vinyals, O.; and Kavukcuoglu, K. 2017. Neural Discrete Representation Learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, 6309–6318. Red Hook, NY, USA: Curran Associates Inc. ISBN 9781510860964. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L. u.; and Polosukhin, I. 2017. Attention is All you Need. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Wang, B.; Adeli, E.; kuang Chiu, H.; Huang, D.-A.; and Niebles, J. C. 2019. Imitation Learning for Human Pose Prediction. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 7123–7132. Watson, K.; Sohn, S. S.; Schriber, S.; Gross, M.; Muniz, C. M.; and Kapadia, M. 2019. StoryPrint: An Interactive Visualization of Stories. In Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI ’19, 303–311. New York, NY, USA: Association for Computing Machinery. ISBN 9781450362726. Yalta, N.; Watanabe, S.; Nakadai, K.; and Ogata, T. 2018. Weakly Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation. Yan, S.; Li, Z.; Xiong, Y.; Yan, H.; and Lin, D. 2019. Convolutional Sequence Generation for Skeleton-Based Action Synthesis. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 4393–4401. Yang, Z.; Wen, Y.-H.; Chen, S.-Y.; Liu, X.; Gao, Y.; Liu, Y.J.; Gao, L.; and Fu, H. 2023. Keyframe Control of Musicdriven 3D Dance Generation. IEEE Transactions on Visualization and Computer Graphics. Yoon, Y.; Cha, B.; Lee, J.-H.; Jang, M.; Lee, J.; Kim, J.; and Lee, G. 2020. Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity. ACM Transactions on Graphics, 39(6). Zhuang, W.; Wang, C.; Chai, J.; Wang, Y.; Shao, M.; and Xia, S. 2022. Music2Dance: DanceNet for Music-Driven Dance Generation. ACM Transactions on Multimedia Computing, Communications, and Applications, 18(2). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 791
Parameterization of (Partial) Maximum Satisfiability above Matching in a Variable-Clause Graph

Vasily Alferov¹, Ivan Bliznets², Kirill Brilliantov³
¹Independent Researcher, ²University of Groningen, ³St. Petersburg Department of Steklov Mathematical Institute of the RAS
[email protected], [email protected]/[email protected], [email protected]

Abstract
In the paper, we study the Maximum Satisfiability and the Partial Maximum Satisfiability problems. Using Gallai–Edmonds decomposition, we significantly improve the upper bound for the Maximum Satisfiability problem parameterized above maximum matching in the variable-clause graph. Our algorithm operates with a runtime of $O^*(2.83^k)$, a substantial improvement compared to the previous approach requiring $O^*(4^k)$, where k denotes the relevant parameter. Moreover, this result immediately implies $O^*(1.14977^m)$ and $O^*(1.27895^m)$ time algorithms for (n, 3)-MaxSAT and (n, 4)-MaxSAT, where m is the overall number of clauses. These upper bounds improve the prior-known upper bounds of $O^*(1.1554^m)$ and $O^*(1.2872^m)$. We also adapt the algorithm so that it can handle instances of Partial Maximum Satisfiability without losing performance in some cases. Note that this is somewhat surprising, as the existence of even one hard clause can significantly increase the hardness of a problem.

Introduction
The Maximum Satisfiability (MAXSAT) problem holds significant importance in various fields of artificial intelligence, computer science, mathematics, and engineering due to its theoretical relevance, algorithmic challenges, and practical applications. MAXSAT is a natural extension of the well-studied Boolean Satisfiability (SAT) problem. Investigating MAXSAT contributes to a deeper understanding of the complexity landscape of optimization and decision problems. The study of the Satisfiability and Maximum Satisfiability problems provides insights into the boundaries of what algorithms can achieve. The Satisfiability problem was one of the first problems for which NP-completeness was shown. For a detailed survey about SAT and MAXSAT problems, we refer to (Biere, Heule, and van Maaren 2021).

A diverse array of strategies has been employed to address the inherent complexity of the MAXSAT problem. These encompass a spectrum of approaches, including randomized, approximation, exact, and parameterized algorithms. After a prolonged period of silence, new results from an exact exponential point of view started to appear for MAXSAT again. At AAAI-21, Alferov and Bliznets (2021) presented a new algorithm, introducing a novel approach with a runtime of $O^*(1.0927^L)$ (here and below, $O^*(\cdot)$ suppresses polynomial factors, just as $O(\cdot)$ suppresses constant factors), where L is the total number of literals within the input formula. Building upon this progress, at AAAI-2023, Brilliantov et al. published a subsequent enhancement that achieved an even more impressive upper bound of $O^*(1.0911^L)$ (Brilliantov, Alferov, and Bliznets 2023). It is worth highlighting that prior to these results, the previous best-known upper bound dated back over two decades (Bansal and Raman 1999), yielding an upper bound of $O^*(1.1057^L)$. This trend of improvements also extends to another crucial measure of the input formulas: the total number of clauses. At AAAI-2019, Xu et al. introduced an algorithm with a time complexity of $O^*(1.2989^m)$ (Xu et al. 2019), presenting a substantial leap in performance.
Following this result, at IJCAI-2022, Xiao further refined the landscape by unveiling an algorithm with an improved time complexity of $O^*(1.2886^m)$ (Xiao 2022). These advancements superseded the previously established best upper bound, which had remained unchallenged since 2004 (Chen and Kanj 2004). The number of clauses that one is asked to satisfy in a given CNF formula is also a natural and well-studied measure. For this measure, the best-known algorithm was developed in (Chen, Xu, and Wang 2015), and the running time of the algorithm is $O^*(1.325^{k'})$, where k′ is the number of clauses that we need to satisfy (we note that generally this measure is denoted by k; in order to avoid ambiguity, we denote it by k′).

In this paper, we study the MAXSAT problem from a parameterized point of view. Specifically, we study MAXSAT parameterized above lower bounds. MAXSAT has several natural lower bounds computable in polynomial time. It is well known that there is a truth assignment that satisfies at least half of all clauses: if we consider an assignment π₁ that sets all variables to True and an assignment π₂ that sets all variables to False, then at least one of them satisfies at least half of all clauses. Let us denote by $\ell_0$ the number of clauses satisfied by the assignment π₁. It is obvious that an optimal truth assignment satisfies at least $\ell_0$ clauses. In order to compute the third lower bound, we do the following. Let us construct a graph in which each vertex corresponds either to a variable or to a clause of the input formula. We connect two vertices with an edge if and only if one of the vertices corresponds to a variable x, the other vertex corresponds to a clause C, and the variable x occurs in C. The constructed graph we call a variable-clause graph and denote it by $G_F$ (in what follows, if F is clear from the context, we simply write G). Let us compute a maximum matching M in $G_F$ and denote its size by $\nu(G_F)$. It is easy to see that we can set values of the variables so that we satisfy at least $\nu(G_F)$ clauses. Precisely, for each edge e in M, we pick a value of the variable corresponding to one endpoint of e so that the clause corresponding to the other endpoint is satisfied.

In parameterization above guarantee, one is given an optimization problem, an integer k (parameter), and a lower bound $\ell$ for the optimum value, and the goal is to check if the optimum value is at least $\ell + k$. An interested reader can find more details on parameterization above guarantee in a recent survey (Gutin and Mnich 2022). Parameterization above the lower bound $\frac{m}{2}$ was introduced in (Mahajan and Raman 1999). It was shown there that MAXSAT admits an FPT-algorithm under such parameterization. Moreover, they showed that the problem can be reduced to MAXSAT where the number of clauses that need to be satisfied is used as a parameter, without significant blow-up of the original parameter. In the case of the lower bound $\ell_0$, it was shown (Belova and Bliznets 2020) that the problem is NP-complete even for k = 1. For parameterization above the lower bound $\nu(G_F)$, Crowston et al. (2014) presented a deterministic algorithm with running time $O^*\big((2e)^{2k+O(\log^2 k)}\big)$ and a randomized algorithm with running time $O^*\big(8^{k+O(\sqrt{k})}\big)$. Later, advances for the (n−k)-SET COVER problem led to an $O^*(4^k)$ algorithm for this parameterization (Basavaraju et al. 2016).
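Before moving on, a small illustration of the $\nu(G_F)$ lower bound just described. This is a minimal sketch under our own conventions (clauses as lists of signed integers, networkx for Hopcroft–Karp matching); it is not code from the paper.

```python
import networkx as nx
from networkx.algorithms import bipartite

def matching_lower_bound(clauses):
    """Build the variable-clause graph G_F and return nu(G_F) together with a
    partial assignment witnessing that many satisfied clauses.
    Each clause is a list of nonzero ints: +v / -v are the positive / negative
    literals of variable v."""
    G = nx.Graph()
    cnodes = [("c", i) for i in range(len(clauses))]
    G.add_nodes_from(cnodes)
    for i, clause in enumerate(clauses):
        for lit in clause:
            G.add_edge(("x", abs(lit)), ("c", i))
    match = bipartite.hopcroft_karp_matching(G, top_nodes=set(cnodes))
    assignment = {}
    for i, clause in enumerate(clauses):
        partner = match.get(("c", i))
        if partner is not None:  # clause i is matched to variable partner[1]
            lit = next(l for l in clause if abs(l) == partner[1])
            assignment[partner[1]] = lit > 0  # value that satisfies clause i
    return len(assignment), assignment
```

Each matched clause is satisfied through its own matched variable, so the returned assignment (extended arbitrarily on unmatched variables) satisfies at least $\nu(G_F)$ clauses.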
In this paper, we further improve the algorithm for this parameterization. Note that the measure k′ (the number of clauses that need to be satisfied) can also be considered as parameterization above guarantee where the lower bound is 0.

Our Results
We employ Gallai–Edmonds decomposition and significantly improve the upper bound for (n−k)-SET COVER. The new algorithm works in $O^*(2^{3k/2})$ time, improving on the previous $O^*(4^k)$-time algorithm. This result implies an $O^*(2^{3k/2})$-time algorithm for MAXSAT parameterized above maximum matching in a variable-clause graph. Besides, we adapt the reduction from MAXSAT to (n−k)-SET COVER so that it can handle hard clauses. Formally, we prove that PARTIAL MAXIMUM SATISFIABILITY above maximum matching can be solved in $O^*(2^{3k/2})$ time if all hard clauses have length at most 2, and in time $O^*(2^{3k/2} + 2^k \cdot 1.12226^h)$ if the number of hard clauses is h. Moreover, our results imply $O^*(1.14977^{k'})$ and $O^*(1.14977^m)$ time algorithms (here m is the overall number of clauses in the input formula and k′ is the number of clauses that we need to satisfy) for MAXSAT restricted to instances where each variable appears at most three times, and $O^*(1.27895^{k'})$ and $O^*(1.27895^m)$ time algorithms for MAXSAT instances where each variable appears at most four times. Note that the previous best-known algorithms for these special cases were $O^*(1.1554^{k'})$, $O^*(1.1554^m)$ (Brilliantov, Alferov, and Bliznets 2023), $O^*(1.2872^m)$ (Xiao 2022), and $O^*(1.2989^{k'})$ (Brilliantov, Alferov, and Bliznets 2023).

We complement our theoretical findings with computational experiments. Note that the goal of the algorithm is to be efficient when the parameter k is small (such instances can arise during the execution of a branching algorithm). If this is not the case, then even a simple brute-force algorithm outperforms our algorithm. We compare the performance of our solver with state-of-the-art open-source MaxSAT solvers on the type of instances for which our algorithm was designed. It is obvious that on general instances our algorithm will be less efficient, as it does not contain sophisticated heuristics that are very useful in practice but lack proven guarantees on the running time. Through this testing, our objective was to determine whether there are scenarios in which our algorithm demonstrates superior performance compared to the currently most competitive solvers. Note that in practice, algorithms with the best proven guarantees can be significantly slower than algorithms based on heuristics without proven efficiency. It turns out that in this setting our algorithm outperforms other solvers.

Preliminaries
In the paper, we assume that a reader is familiar with concepts such as Boolean variables, literals, and clauses. A variable x is an (i, j)-variable in the formula F if the literal x and the literal $\bar{x}$ appear i times and j times in F, respectively. For a variable x, we call the literal x positive and the literal $\bar{x}$ negative. Throughout the whole paper, we assume that for each variable x, the literal x appears at least as many times as the literal $\bar{x}$. Note that if some variable x in a formula does not satisfy this property, we can simply replace all occurrences of the literal x with $\bar{x}'$ and all occurrences of the literal $\bar{x}$ with $x'$. We call a clause positive if it contains only positive literals. For convenience, instead of the clause $(x_1 \vee x_2 \vee \cdots \vee x_k)$ we often write, slightly abusing notation, simply $x_1 x_2 \ldots x_k$. Interested readers may find details about CNF formulas, satisfiability, the MAXSAT problem, and further relevant information in (Marek 2009).
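The polarity-normalization convention just described is easy to make concrete. A minimal sketch under our own representation (clauses as lists of signed integers; flipping a variable stands for introducing the fresh variable $x' = \bar{x}$):

```python
from collections import Counter

def normalize_polarity(clauses):
    """Flip variables so that every variable's positive literal occurs at
    least as often as its negative literal. Returns the rewritten clauses and
    the set of flipped variables."""
    counts = Counter(lit for clause in clauses for lit in clause)
    variables = {abs(l) for c in clauses for l in c}
    flipped = {v for v in variables if counts[v] < counts[-v]}
    new_clauses = [[-lit if abs(lit) in flipped else lit for lit in clause]
                   for clause in clauses]
    return new_clauses, flipped
```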
We also assume that the reader is familiar with basic notions of parameterized algorithms and branching algorithms. Relevant details for these two subjects can be found in (Cygan et al. 2015) and (Fomin and Kratsch 2010), respectively.

In the paper, we study the MAXIMUM SATISFIABILITY problem, for short MAXSAT. In the problem, one is given a formula F in CNF and an integer k′, and the goal is to check if at least k′ clauses can be satisfied in F. If we restrict our inputs to formulas where each variable appears at most s times, we call such a problem (n, s)-MAXSAT. We also consider the PARTIAL MAXIMUM SATISFIABILITY problem; in this problem, compared with MAXSAT, we additionally have hard clauses that must be satisfied, i.e., we need to check if there is an assignment that satisfies all hard clauses and at least k′ clauses. The main focus of our paper is the MAXIMUM SATISFIABILITY problem parameterized above matching. In this problem, we are given a formula F and an integer k, and our goal is to check if we can satisfy at least $\nu(G_F) + k$ clauses in F, where $\nu(G_F)$ is the size of a maximum matching in the variable-clause graph $G_F$ defined in the introduction. In what follows, we sometimes use the shorthand $\nu(F)$ instead of $\nu(G_F)$. For the problem just presented, we use the shorthand (ν(F)+k)-SAT. Similarly, if we additionally have hard clauses, we use the shorthand Partial (ν(F)+k)-SAT. Throughout the whole paper, we denote the overall number of clauses in the input formula by m and the number of variables by n. If we apply a reduction rule to an instance of Partial (ν(F)+k)-SAT, we say that the initial instance has a formula F and the new instance has a formula F′. Moreover, the parameter of the initial instance is k and the parameter of the new instance is k′.

The key to our advances for MAXSAT is an improvement for the (n−k)-SET COVER problem, which we define here. In the (n−k)-SET COVER problem, we are given a universe $U = \{1, 2, \ldots, n\}$, a family $\mathcal{F} = \{S_1, S_2, \ldots, S_m\}$ such that $S_i \subseteq U$ for any $1 \le i \le m$, and an integer k, and the goal is to check if there are indices $i_1, i_2, \ldots, i_{n-k}$ such that $S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_{n-k}} = U$.

We also employ the theorems about graphs and the Satisfiability problem listed below.

Theorem 1 (Szeider (2004)). If a formula F contains n variables and n+1 clauses and $\nu(F) = n$, then we can check in time $O(n^3)$ whether F is satisfiable or not.

Theorem 2 (generalization of Hall's lemma). Let $d \ge 0$ and let G be a bipartite graph with parts A, B such that for any $X \subseteq A$ we have $|N(X)| \ge |X| - d$. If M is a maximum matching in G, then $|M| \ge |A| - d$.

Theorem 3 (Hopcroft and Karp (1973)). Let G be a bipartite graph with bipartition $V_1$ and $V_2$, on n vertices and m edges. Then we can find a maximum matching of G in time $O(m\sqrt{n})$. Furthermore, in time $O(m\sqrt{n})$ we can find either a matching saturating $V_1$ or an inclusion-wise minimal set $X \subset V_1$ such that $|N(X)| < |X|$.
Theorem 4 (Gallai–Edmonds decomposition). For any graph G, we can split in polynomial time its vertices into three sets C, B, D such that for any maximum matching M in G, we have: (i) there is no edge going from the set C to the set D; (ii) all connected components of G[C] have an even number of vertices and contain a perfect matching; (iii) all connected components in G[D] consist of an odd number of vertices and are factor-critical; (iv) the matching M induces a perfect matching in C and an almost perfect matching in each connected component of G[D] (an almost perfect matching covers all vertices except one); (v) the matching M assigns to each vertex of the set B a unique connected component of G[D] to which the vertex is connected by an edge of M.

Theorem 5 (Fomin and Kratsch (2010), Corollary 3.25). $\sum_{j=0}^{n} \sum_{k=j}^{\min(2j,\,n)} \binom{n-j}{k-j} = O(\mathrm{poly}(n) \cdot \varphi^n)$, where here and later $\varphi$ denotes $\frac{1}{2}(\sqrt{5}+1)$.

Due to space constraints, all proofs of Section "Partial Maximum Satisfiability" are deferred to the full version.

Partial Maximum Satisfiability
Crowston et al. (2014) essentially proved the following theorem.

Theorem 6 (Crowston et al. (2014)). If (n−k)-SET COVER admits an algorithm with running time $f(k) = \Omega(2^k)$, then (ν(F)+k)-SAT admits an $O^*(f(k))$-time algorithm.

Inspired by the Reduction and Branching rules from (Crowston et al. 2014), we construct adapted Reduction and Branching rules for Partial (ν(F)+k)-SAT such that eventually, for some special cases of Partial (ν(F)+k)-SAT, we obtain the same upper bound as for (ν(F)+k)-SAT. Note that the existence of such a modification is somewhat surprising, as the presence of hard clauses generally makes a problem significantly more difficult from a theoretical point of view. In the following example, we show that the presence of only one short hard clause significantly changes the computational complexity of a problem. Consider a family of formulas of the following type: $xC_1 \wedge xC_2 \wedge \cdots \wedge xC_m \wedge x \wedge \bar{x}$. It is obvious that we can satisfy at most m+1 clauses in such formulas, and it is enough to set x = 1 to do this. Hence, MAXSAT is solvable in polynomial time on this type of instances: it is enough to check that an input instance matches the format and count the overall number of clauses. However, if we consider PARTIAL MAXSAT on this type of instances and mark $\bar{x}$ as a hard clause, then essentially we have to solve MAXSAT on the formula $C_1 \wedge \cdots \wedge C_m$, where there is no restriction on the clauses $C_1, C_2, \ldots, C_m$ at all. Therefore, PARTIAL MAXSAT restricted to these instances cannot be solved in $2^{o(m)}$ time assuming the Exponential Time Hypothesis. Hence, this example provides an exponential gap between the running time of algorithms for MAXSAT and the running time for PARTIAL MAXSAT assuming ETH.

We prove the following theorem in this section.

Theorem 7. If (n−k)-SET COVER is solvable in $O^*(f(k))$ time where $f(k) \ge 2^k$, then Partial (ν(F)+k)-SAT admits an $O^*(f(k))$-time algorithm on instances where all hard clauses have length at most 2. Moreover, under the same assumption, Partial (ν(F)+k)-SAT admits an $O^*(f(k) + 2^k \cdot 1.12226^h)$-time algorithm on formulas with h hard clauses.

Before we present the proof of the above theorem, we provide several Reduction rules and prove their correctness.
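Before turning to the rules, a quick numeric sanity check of the Theorem 5 bound, which is used later in the running-time analysis. This is only an illustration (our own script, not from the paper): it prints the ratio of the double binomial sum to $\varphi^n$, which should stay polynomially bounded.

```python
from math import comb, sqrt

PHI = (1 + sqrt(5)) / 2  # the golden ratio, phi from Theorem 5

def theorem5_sum(n):
    # sum_{j=0}^{n} sum_{k=j}^{min(2j, n)} C(n - j, k - j)
    return sum(comb(n - j, k - j)
               for j in range(n + 1)
               for k in range(j, min(2 * j, n) + 1))

for n in (10, 20, 40, 80):
    print(n, theorem5_sum(n) / PHI ** n)  # ratio grows only polynomially
```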
If Reduction rule R is applied to an instance (F, k) and produces an instance (F′, k′), then we say that R is correct if: (i) (F, k) is a YES-instance if and only if (F′, k′) is a YES-instance; (ii) $k' \le k$; (iii) the number of hard clauses in F′ is not larger than the number of hard clauses in F; (iv) if all hard clauses in F have length at most 2, then all hard clauses in F′ also have length at most 2. Similarly, for correctness of a Branching rule that produces two instances $(F_1, k_1)$, $(F_2, k_2)$ we require: (i) (F, k) is a YES-instance if and only if $(F_1, k_1)$ or $(F_2, k_2)$ is a YES-instance; (ii) $k_1, k_2 < k$; (iii) the number of hard clauses in $F_1$, $F_2$ is not larger than the number of hard clauses in F; (iv) if all hard clauses in F have length at most 2, then all hard clauses in $F_1$, $F_2$ have length at most 2.

Lemma 1. Let M be a maximum matching in the variable-clause graph of a formula F, and let $\pi_M$ be a satisfying assignment that corresponds to this matching (clauses from M are satisfied by variables from M). If $\pi_H$ is an assignment that satisfies all hard clauses in F, then in polynomial time we can construct a truth assignment π such that π satisfies all hard clauses and the overall number of satisfied clauses is at least $\nu(F)$.

The previous lemma implies our first Reduction rule.

Reduction rule 1. If $k \le 0$ and all hard clauses have length at most 2, then solve the instance in polynomial time. Otherwise, solve the instance in time $O^*(1.2226^h)$, where h is the total number of hard clauses in F.

Reduction rule 2. If the formula contains a clause $C = l\bar{l}C'$ for some literal l, then delete the clause. Decrease the parameter by one if the matching size in the variable-clause graph did not change, and leave the parameter unchanged if the matching size decreased by one.

Reduction rule 3. If there is a literal l such that the literal $\bar{l}$ does not appear in F, then set l = 1 and recompute the value of the parameter.

Reduction rule 4. If there is a variable x such that F contains hard clauses x and $\bar{x}$, then return NO.

Reduction rule 5. If the formula contains a hard clause l, where l is some literal, then set l = 1 and recompute the value of the parameter.

Reduction rule 6 (Resolution). If F contains a (1, 1)-variable x with clauses xC, $\bar{x}$D, then we replace these two clauses with the clause CD. Moreover, we mark the new clause as hard if and only if both clauses xC, $\bar{x}$D were marked as hard. After that, we recompute the value of the parameter depending on the change of the maximum matching.
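The resolution rule is mechanical; here is a minimal sketch under our own clause representation (signed integers). The hard-clause bookkeeping and the parameter update from Reduction rule 6 are omitted for brevity.

```python
def resolve_11_variable(clauses, v):
    """Reduction rule 6 (resolution) for a (1,1)-variable v: replace the
    clauses (v OR C) and (NOT v OR D) with (C OR D). Assumes v occurs exactly
    once positively and once negatively across `clauses`."""
    pos = next(c for c in clauses if v in c)     # the clause xC
    neg = next(c for c in clauses if -v in c)    # the clause (not x)D
    resolvent = [l for l in pos if l != v] + [l for l in neg if l != -v]
    rest = [c for c in clauses if c is not pos and c is not neg]
    return rest + [resolvent]
```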
We call a bipartite graph G with parts A, B t-expanding if for each $X \subseteq A$, $X \ne \emptyset$, we have $|N_G(X)| \ge |X| + t$. Similarly, we call a formula F t-expanding if the corresponding variable-clause graph $G_F$ is t-expanding.

Reduction rule 7. If the variable-clause graph G of the formula F is not 1-expanding, i.e., there is a set of variables C such that $|N_G(C)| \le |C|$, then find such an inclusion-minimal set C and assign values to the variables from C so that all clauses from $N_G(C)$ are satisfied.

From now on, we can assume that the variable-clause graph of the input formula F is 1-expanding. If the variable-clause graph G of the formula F is not 2-expanding, then we find a set of variables S such that $|N_G(S)| = |S| + 1$. We can find such a set in polynomial time by Lemma 8 in (Crowston et al. 2014). Denote by $F_S$ the formula that we get if we delete all variables from F except for the variables from the set S. We apply Theorem 1 to the formula $F_S$ and check if $F_S$ is satisfiable.

Reduction rule 8. If $F_S$ is satisfiable, then we assign values to the variables from S such that all clauses in N(S) are satisfied.

Reduction rule 9. If $F_S$ is not satisfiable, we delete from the formula F the set of clauses corresponding to $N_G(S)$ and add a new clause which contains all literals from all clauses of $N_G(S)$ except literals of variables from the set S. Let us call the new clause C. We mark C as hard if and only if all clauses in $N_G(S)$ are hard.

After application of all the Reduction rules listed above, we apply branching rules. Before we present the branching rules, we prove the following lemma.

Lemma 2. If the formula F has a 2-expanding variable-clause graph and x is an (a, 1+)-variable with a > 1, then $\nu(F_{x=1}) \ge \nu(F) - a + 1$.

Branching rule 1. If k > 0 and there is a (2+, 2+)-variable x in F, then we consider the two branches x = 0 and x = 1. In both branches, we recompute the value of the parameter, and it drops by at least one.

If Branching rule 1 is not applicable, we try to apply the following Branching rule.

Branching rule 2. If the formula F contains a clause $\bar{x}\bar{y}C$, then consider the two branches F[x = 1] and F[y = 1]. In both branches, we recompute the value of the parameter, and it drops by at least one.

Note that if none of the above rules is applicable, then negative literals appear exactly once (as for each variable x, the literal x appears at least as many times as the literal $\bar{x}$), and each clause contains at most one negative literal. If F has this type, then we assign to it the following directed graph $D_F$: for each variable x we introduce a vertex $v_x$, and for each clause $\bar{x}y_1 \ldots y_k$ we introduce the edges $v_x v_{y_1}, \ldots, v_x v_{y_k}$.

Reduction rule 10. If $D_F$ contains a cycle $v_{z_0} v_{z_1} \ldots v_{z_\ell}$, then set $z_0 = z_1 = \cdots = z_\ell = 1$.

Lemma 3. If none of the above rules is applicable to the formula F, then Partial (ν(F)+k)-SAT and (ν(F)+k)-SAT are equivalent on the formula F. Formally, if π satisfies k clauses in F, then there is an assignment π′ that satisfies k clauses in F and all hard clauses in F. Moreover, given π, we can find π′ in polynomial time.

The above lemma helps us reduce Partial (ν(F)+k)-SAT to (ν(F)+k)-SAT. Now, we can apply all reductions from (Crowston et al. 2014) without any adaptation. The following Reduction rule was presented in (Crowston et al. 2014) as Lemma 11.

Reduction rule 11 (Crowston et al.). If there is a variable x such that F contains clauses $xC_1, \ldots, xC_i, \bar{x}D$ (this is an exhaustive list of all clauses containing the variable x) and D is non-empty, then replace these clauses with $xC_1D, \ldots, xC_iD, \bar{x}$, correspondingly.

After exhaustive application of all the rules above, all variables are (i, 1)-variables, and for each variable x there is a clause $\bar{x}$, which is the only occurrence of the literal $\bar{x}$. Now, we show how to transform our current instance of (ν(F)+k)-SAT into (n−k)-SET COVER. First of all, note that for the current instance there is an optimal assignment that satisfies all clauses with positive literals. Indeed, all clauses with negative literals contain only one literal, i.e., can be expressed as $\bar{x}$. Moreover, all such clauses are not hard, as otherwise Reduction rule 5 is applicable. So, if some clause with a positive literal x is not satisfied, then we can flip the value of x: in this case, we satisfy at least one clause, and at most one clause becomes unsatisfied, and it is not a hard clause. Therefore, from now on we are looking for an assignment that satisfies all positive clauses.
Note that matching in the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7921 variable-clause graph has size n as we simply can match each variable xi with the clause xi. Our reduction works in the following way: (i) for each positive clause Cj we create an element j; (ii) for each variable xi we create a set Si, the set Si contains an element j if and only if xi ∈Cj. So, our instance of SET COVER has universe U of size n′ = m −n, and a family of covering sets F of size m′ = n. Note that we can satisfy ν(F)+k = n+k clauses in F if and only if there is a covering containing at most n′−k = m−n−k sets in the constructed SET COVER instance. Indeed, if π is an assignment satisfying n+k clauses and all positive clauses, then at least (n+k)−(m−n) = 2n+k −m variables are set to 0. Hence, there is at most n−(2n+k−m) = m−n−k = n′−k variables that are set 1, and family {Si|π(xi) = 1} covers the whole U. Moreover, if sets Si1, Si2, . . . , Sip cover U, then an assignment with xi1 = xi2 = · · · = xip = 1 and all other variables set to 0 satisfy C1, . . . , Cm−n and n −p clauses from x1, x2, . . . , xn. Hence, covering with p = n′ −(n′ −p) = n′ −k′ sets was converted into a assignment satisfying m−n+(n−p) = ν(F)+n′−p = ν(F)+k′. Similar reduction was described in (Crowston et al. 2014)), so more details can he found there. Hence, in order to solve Partial (ν(F) + k)-SAT it is enough to solve (n −k)-SET COVER. In order to transform Partial (ν(F) + k)-SAT to (n −k)SET COVER we use Reduction rules and Branching rules. Note that Reduction rules can by applied in polynomial time. All of the Branching rules have two cases and parameter decreases by 1 in both cases. Hence, all of the branching factors are 2. After exhaustive application of the reduction and branching rules either we obtain an instance with k = 0 or we reduce the problem to (n −k)-SET COVER. Hence, we proved Theorem 7. Algorithm for (n −k)-SET COVER In this section we present O∗(2 3 2 k) time algorithm for (n −k)-SET COVER. By Theorem 7, this result implies that (ν(F) + k)-SAT admits an O∗(2 3 2 k) time algorithm. Theorem 8. (n −k)-SET COVER admits an O∗(2 3 2 k) time algorithm. Proof. First of all our algorithm checks if each element is contained at least in one set. If it is not the case then we can immediately output NO. After that, our algorithm execute a simple greedy procedure. The procedure goes through all sets in arbitrary order and picks a set S′ into cover if the set S′ cover at least 3 new elements. The greedy procedure stops when each of the remaining sets covers at most two new elements. We denote by A the subset of all covered elements at the greedy step, and by R we denote the set of all sets that were picked during the procedure. Note that if A ≥3 2k then there is a set cover of size at most n −k. Indeed, there are n −|A| uncovered elements, it is obvious that they can be covered by n −|A| sets (roughly speaking for each element we have a special set covering it). Hence, our covering contains at most |R| + n −|A| sets. Recall that each set in R covers at least 3 new elements, hence, we have that R ≤ 1 3|A|. Hence, our covering is of size at most |R| + n −|A| ≤n −2 3|A| ≤n −k. It means that from now on we can assume that |A| ≤3 2k, and |R| ≤1 2k. Now we consider elements from the set V = U \ A. By the definition of the set A we have that each set from the family F covers at most two elements in V . We construct graph G based on V and F. Each vertex in G corresponds to an element from the set V . 
Algorithm for (n−k)-SET COVER
In this section, we present an $O^*(2^{3k/2})$-time algorithm for (n−k)-SET COVER. By Theorem 7, this result implies that (ν(F)+k)-SAT admits an $O^*(2^{3k/2})$-time algorithm.

Theorem 8. (n−k)-SET COVER admits an $O^*(2^{3k/2})$-time algorithm.

Proof. First of all, our algorithm checks if each element is contained in at least one set. If this is not the case, then we can immediately output NO. After that, our algorithm executes a simple greedy procedure. The procedure goes through all sets in arbitrary order and picks a set S′ into the cover if the set S′ covers at least 3 new elements. The greedy procedure stops when each of the remaining sets covers at most two new elements. We denote by A the subset of all elements covered by the greedy step, and by R we denote the set of all sets that were picked during the procedure. Note that if $|A| \ge \frac{3}{2}k$, then there is a set cover of size at most n−k. Indeed, there are $n - |A|$ uncovered elements; it is obvious that they can be covered by $n - |A|$ sets (roughly speaking, for each element we have a dedicated set covering it). Hence, our covering contains at most $|R| + n - |A|$ sets. Recall that each set in R covers at least 3 new elements; hence, $|R| \le \frac{1}{3}|A|$. Therefore, our covering is of size at most $|R| + n - |A| \le n - \frac{2}{3}|A| \le n - k$. It means that from now on we can assume that $|A| \le \frac{3}{2}k$ and $|R| \le \frac{1}{2}k$.
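A minimal sketch of this greedy phase (our own illustration; `sets` maps set names to Python sets of elements, matching the transformation sketch above):

```python
def greedy_phase(sets):
    """First phase of the algorithm: repeatedly pick any set that covers at
    least 3 new elements. Returns R (names of chosen sets) and A (the
    elements they cover)."""
    A, R = set(), []
    progress = True
    while progress:
        progress = False
        for name, s in sets.items():
            if len(s - A) >= 3:  # covers at least 3 new elements
                R.append(name)
                A |= s
                progress = True
    return R, A
```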
We order sets in A in the following way: (i) let S′ 1 ∈A be an arbitrary set containing the smallest element in V ′; (ii) let S′ 2 ∈A be an arbitrary set containing the smallest element in V ′ \ S′ 1; (iii) let S′ 2 ∈A be arbitrary set containing the smallest element in V ′ \ (S′ 1 ∪S′ 2); (iv) and so on until we cover all elements in V ′; (v) after that we list the rest of sets from A in order σ. We assume that we obtain the following ordering S′ 1, S′ 2, . . . , S′ |A|. Denote by Ti = Si j=1 S′ j. We prove that dp(Ti ∩A′, Ti ∩V ′) ≤i by induction. It is obvious for i = 0, dp(∅, ∅) = 0 = i. If V ′ ̸⊆Ti−1, then by definition of S′ i staying in cell (Ti−1 ∩A′, Ti−1 ∩V ′), we update dp((Ti−1 ∪Si) ∩A′, (Ti−1 ∪Si) ∩V ′) = (Ti ∩ A′, Ti∩V ′) with the value dp(Ti−1∩A′, Ti−1∩V ′)+1 ≤i. If V ′ ⊆Ti−1, then being in cell dp(Ti−1∩A′, V ′) we update cell dp((Ti−1 ∪S′ i) ∩A′, V ′) = (Ti, V ′) with the value dp(Ti ∩A′, V ′) + 1. Moreover, at this point we know that dp(Ti−1 ∩A′, V ′) ≤i −1. Therefore, dp(A′, V ′) ≤|A|. Note, that we always can construct a covering with at most dp(A′, V ′) sets simply analyzing which sets we used in dynamic programming step. Hence, dp(A′, V ′) = |A|. Essentially above claim proves the correctness of the algorithm. It is left to prove an upper bound on the running time. First of all, note that running time of the last phase of the dynamic programming when the second coordinate Y equals to V ′ is O∗(2|A′|) (essentially this is standard dynamic programming for the SET COVER problem). Recall that π order all vertices of V ′ such that connected components of G[V ′] induce a consecutive blocks in the π. So, essentially π also induce an ordering on connected components of G[V ′]. Let us denote connected components of G[V ′] by H1, H2, . . . , Hr. We can assume that all vertices in Hi are smaller than all vertices in Hj if i < j. Note that the running time of our algorithm up to polynomial time is equal to the number of updated cells in the table dp(·, ·). Let (X, Y ) be one of such cells. If the smallest vertex outside of Y belongs to Hi then Y contains only vertices from connected components H1, H2, . . . Hi. Moreover, Y contains all vertices from H1, H2, . . . Hi−1. Let us denote by f(Hi) a number of all Y ′ ⊆Hi such that there is X and the cell (X, Y ′ ∪Si−1 j=1 Hj) was updated. It is easy to see that the overall number of updated cells is at most 2|A′|(f(H1) + f(H2) + · · · + f(Hr)) ≤ r2|A′| · maxi f(Hi) = O∗(2|A′| maxi f(Hi)). Now we provide an upper bound on the number f(Hi). Let π defines the following ordering vi 1, vi 2, . . . vi |Hi| on the vertices of the connected component Hi. Note that if Y ′ ⊆Hi with described properties, and {vi 1, vi 2, . . . , vi j} ⊆Y ′, vi j+1 ̸∈Y ′ then |Y ′| ≤2j. Indeed we always increase the second coordinate by some set with at most two elements and one of this elements is currently the smallest uncovered element. So, each Yi with cell (X, Y ′ ∪Si−1 j=1 Hj) being updated can be constructed in the following way: fix some j, take first j elements from the Hi, and take at most j elements from the set Hi \ {vi 1, vi 2, . . . , vi j}. Therefore the total number of all such Y ′ is at most: P|Hi| j=0 Pmin(2j,|Hi|) k=j |Hi|−j k−j  ≤ P |Hi| 2 j=0 Pmin(2j,|Hi|) k=j |Hi|−j k−j  + P|Hi| j= |Hi| 2 P|Hi| k=j |Hi|−j k−j  . Note that P|Hi| j= |Hi| 2 P|Hi| k=j |Hi|−j k−j  ≤ |Hi|2 |Hi| 2 . By Theorem 5 we have that P |Hi| 2 j=0 Pmin(2j,|Hi|) k=j |Hi|−j k−j  ≤ O∗(φ|Hi|). Since, √ 2 < φ we have that f(Hi) ≤ O∗(φ|Hi|). 
It is left to prove an upper bound on the running time. First of all, note that the running time of the last phase of the dynamic programming, when the second coordinate Y equals V′, is $O^*(2^{|A'|})$ (essentially, this is the standard dynamic programming for the SET COVER problem). Recall that π orders all vertices of V′ such that the connected components of G[V′] induce consecutive blocks in π. So, essentially, π also induces an ordering on the connected components of G[V′]. Let us denote the connected components of G[V′] by $H_1, H_2, \ldots, H_r$; we can assume that all vertices in $H_i$ are smaller than all vertices in $H_j$ if i < j. Note that the running time of our algorithm, up to a polynomial factor, is equal to the number of updated cells in the table dp(·, ·). Let (X, Y) be one of these cells. If the smallest vertex outside of Y belongs to $H_i$, then Y contains only vertices from the connected components $H_1, H_2, \ldots, H_i$; moreover, Y contains all vertices from $H_1, H_2, \ldots, H_{i-1}$. Let us denote by $f(H_i)$ the number of all $Y' \subseteq H_i$ such that there is an X for which the cell $(X, Y' \cup \bigcup_{j=1}^{i-1} H_j)$ was updated. It is easy to see that the overall number of updated cells is at most $2^{|A'|}(f(H_1) + f(H_2) + \cdots + f(H_r)) \le r \cdot 2^{|A'|} \cdot \max_i f(H_i) = O^*(2^{|A'|} \max_i f(H_i))$.

Now we provide an upper bound on $f(H_i)$. Let π define the ordering $v^i_1, v^i_2, \ldots, v^i_{|H_i|}$ on the vertices of the connected component $H_i$. Note that if $Y' \subseteq H_i$ has the described properties and $\{v^i_1, v^i_2, \ldots, v^i_j\} \subseteq Y'$, $v^i_{j+1} \notin Y'$, then $|Y'| \le 2j$. Indeed, we always increase the second coordinate by a set with at most two elements, one of which is the currently smallest uncovered element. So, each Y′ whose cell $(X, Y' \cup \bigcup_{j=1}^{i-1} H_j)$ is updated can be constructed in the following way: fix some j, take the first j elements of $H_i$, and take at most j elements from the set $H_i \setminus \{v^i_1, v^i_2, \ldots, v^i_j\}$. Therefore, the total number of all such Y′ is at most
$\sum_{j=0}^{|H_i|} \sum_{k=j}^{\min(2j, |H_i|)} \binom{|H_i|-j}{k-j} \le \sum_{j=0}^{|H_i|/2} \sum_{k=j}^{\min(2j, |H_i|)} \binom{|H_i|-j}{k-j} + \sum_{j=|H_i|/2}^{|H_i|} \sum_{k=j}^{|H_i|} \binom{|H_i|-j}{k-j}$.
Note that $\sum_{j=|H_i|/2}^{|H_i|} \sum_{k=j}^{|H_i|} \binom{|H_i|-j}{k-j} \le |H_i| \cdot 2^{|H_i|/2}$. By Theorem 5, we have $\sum_{j=0}^{|H_i|/2} \sum_{k=j}^{\min(2j, |H_i|)} \binom{|H_i|-j}{k-j} \le O^*(\varphi^{|H_i|})$. Since $\sqrt{2} < \varphi$, we have $f(H_i) \le O^*(\varphi^{|H_i|})$.

Therefore, the overall running time of the algorithm is $O^*(\varphi^{\max_i |H_i|} \cdot 2^{|A'|})$. It is left to bound this expression in terms of k. In order to do that, we compute the logarithm base 2 of this expression: $\log_2\big(\varphi^{\max_i |H_i|} \cdot 2^{|A'|}\big) = |A'| + (\max_i |H_i|) \log_2 \varphi$. Recall that $|A'| = |A| + |B|$ and $|H_i| \le 2(k - |A| + |R| - |B|) + 1$. Hence, the above expression is bounded by
$|A| + |B| + \big(2(k - |A| + |R| - |B|) + 1\big) \log_2 \varphi = |A| \cdot (1 - 2\log_2 \varphi) + |B| \cdot (1 - 2\log_2 \varphi) + |R| \cdot 2\log_2 \varphi + k \cdot 2\log_2 \varphi + \log_2 \varphi$.
Since $1 - 2\log_2 \varphi < 0$, the expression is not larger than $|A| \cdot (1 - 2\log_2 \varphi) + |R| \cdot 2\log_2 \varphi + k \cdot 2\log_2 \varphi + \log_2 \varphi$. Since $|A| \ge 3|R|$ and $1 - 2\log_2 \varphi < 0$, the above expression can be bounded by
$|R| \cdot 3(1 - 2\log_2 \varphi) + |R| \cdot 2\log_2 \varphi + k \cdot 2\log_2 \varphi + \log_2 \varphi = |R| \cdot (3 - 4\log_2 \varphi) + k \cdot 2\log_2 \varphi + \log_2 \varphi \le \frac{k}{2} \cdot (3 - 4\log_2 \varphi) + k \cdot 2\log_2 \varphi + \log_2 \varphi = \frac{3k}{2} + \log_2 \varphi$.
Therefore, the running time of our algorithm is at most $O^*(2^{3k/2})$, and we have proved the desired claim.

Natural Parameterization
In order to obtain an improvement for (n, 3)-MAXSAT and (n, 4)-MAXSAT, we employ the following theorem.

Theorem 9 (Belova and Bliznets 2020; Brilliantov, Alferov, and Bliznets 2023). Assume that MAXSAT parameterized above matching can be solved in $O^*(c_1^k)$ time, (n, s)-MAXSAT can be solved in $O^*(c_2^n)$ time, and $c = c_2^{\frac{\log c_1}{\log c_1 + \log c_2}}$. In this case, for CNF formulas where each variable appears at most s times, we can check if at least k′ clauses are satisfiable in $O^*(c^{k'})$ time.

Using Theorems 6, 8, and 9, we get the following result.

Theorem 10. (n, 3)-MAXSAT admits $O^*(1.14977^{k'})$ and $O^*(1.14977^m)$ time algorithms. (n, 4)-MAXSAT can be solved in $O^*(1.27895^{k'})$ running time. Here, k′ denotes the number of clauses that we need to satisfy.

This theorem improves the previously known upper bounds for (n, 3)-MAXSAT, which were $O^*(1.1554^{k'})$ and $O^*(1.1554^m)$ (Brilliantov, Alferov, and Bliznets 2023). It also improves the upper bounds $O^*(1.2872^m)$ (Xiao 2022) and $O^*(1.3248^{k'})$ (Chen, Xu, and Wang 2017), which follow for (n, 4)-MAXSAT from algorithms for general MAXSAT.

Experiments
In this section, we present the experiments conducted to evaluate the performance of our algorithm, implemented in C++. The primary objective of these experiments was to assess the algorithm's efficiency in solving the instances of the MaxSAT problem for which the solver was designed. To do this, we consider the performance of our algorithm and open-source solvers on specially generated instances. More information about implementation and environment details can be found in the full version.

Generated Tests
The generator used to test the capabilities of the algorithm accepts three parameters: a, b, and k. It creates a formula with $\nu(F) = ab$ and at most $ab + k$ satisfiable clauses. There are ab variables in the test: $x_{1,1}, \ldots, x_{1,b}, \ldots, x_{2,1}, \ldots, x_{2,b}, \ldots, x_{a,1}, \ldots, x_{a,b}$. For each variable $x_{i,j}$, the clause $\bar{x}_{i,j}$ is added. For each $1 \le i \le a$, the clause $x_{i,1} \ldots x_{i,b}$ is added. Furthermore, we create k more clauses, and for each variable, its positive literal is added to a random subset of these clauses. Note that we can match each variable $x_{i,j}$ with the clause $\bar{x}_{i,j}$, so $\nu(F) = ab$. Moreover, we claim that at most $ab + k$ clauses can be satisfied. Indeed, let the optimal satisfying assignment satisfy c clauses of the form $x_{i,1} \ldots x_{i,b}$. Then, out of the clauses $\bar{x}_{i,j}$, at least c must be unsatisfied. Therefore, the optimal satisfying assignment satisfies at most $k + c + (ab - c) = k + ab$ clauses.
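The generator is simple enough to sketch. This is our own reconstruction from the description above (the paper's implementation is in C++; the negated unit clauses follow from the satisfiability analysis in the previous paragraph):

```python
import random

def generate_instance(a, b, k, seed=0):
    """Generator described above: nu(F) = a*b and at most a*b + k satisfiable
    clauses. Variables are numbered 1..a*b; clauses are lists of signed ints
    (+v positive literal, -v negative literal)."""
    rng = random.Random(seed)
    var = lambda i, j: i * b + j + 1              # x_{i,j} with 0-based i, j
    clauses = [[-var(i, j)] for i in range(a) for j in range(b)]  # units ~x_{i,j}
    clauses += [[var(i, j) for j in range(b)] for i in range(a)]  # row clauses
    extra = [[] for _ in range(k)]
    for v in range(1, a * b + 1):                 # each positive literal joins
        for c in extra:                           # a random subset of k clauses
            if rng.random() < 0.5:
                c.append(v)
    return clauses + extra
```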
Baselines
We considered the following solvers (all participants of MaxSAT Evaluations 2022): cash-w-maxsat-coreplus (Lei et al. 2022), cgss (Ihalainen, Berg, and Järvisalo 2021), eval-max-sat (Avellaneda, Bilodeau-Savaria, and Normand 2022), exact (Devriendt 2022), max-cdcl, w-max-cdcl (Coll et al. 2022a), max-hs (Bacchus 2022), open-wbo (Martins, Manquinho, and Lynce 2014), uwr-max-sat-scip, uwr-max-sat (Piotrów 2020), w-max-cdcl-band-all (Coll et al. 2022b).

Results
We conducted a comparison on generated tests. Environment details can be found in the full version. As illustrated in Fig. 1, the results clearly demonstrate the superior performance of our algorithm over the competing solvers. To carry out the comparison, we set the parameters a and k to fixed values and linearly increased the parameter b. The running time of our algorithm in practice matches the theoretical upper bound provided for the algorithm.

Figure 1: Computational time of various MaxSAT solvers on generated instances with 100 ≤ b ≤ 50000, a = 20, k = 10. The absence of a point on the graph means that the corresponding algorithm spent more than 30s on that instance. Y-axis in milliseconds.

For the majority of instances from MaxSAT Evaluations 2022, applying our solver is problematic, as the value of the parameter is usually very large for these instances. Besides, we do not implement any heuristics that sometimes significantly speed up computations in practice even though they do not have any proven guarantee.

Conclusion
Our findings lead to improved algorithms for the following problems: (ν(F)+k)-SAT and (n−k)-SET COVER. The new running time is $O^*(2^{3k/2})$. We also show that the same upper bound holds for Partial (ν(F)+k)-SAT if all hard clauses have length at most 2. If this is not the case, then we can solve Partial (ν(F)+k)-SAT in time $O^*(2^{3k/2} + 2^k \cdot 1.12226^h)$, where h is the number of hard clauses in the input formula. Moreover, we establish record upper bounds for (n, 3)-MAXSAT and (n, 4)-MAXSAT if we measure the running times in terms of the overall number of clauses or in terms of the number of clauses that we need to satisfy. Note that recent algorithms for MAXSAT, in terms of the number of clauses that need to be satisfied, reduce the problem to the case where only (4, 1)- and (3, 1)-variables appear in the formula. Here, we showed that instead we can try to reduce the problem to an instance where each variable appears at most 4 times, as for such a subproblem we have an algorithm. Moreover, we implemented our algorithm and tested it against state-of-the-art MAXSAT solvers. On instances for which our algorithm was designed, it significantly outperforms the competitors. Hence, including ideas of our algorithm as subroutines or subprocedures in modern solvers might further extend the area of their applicability.

Acknowledgements
Research is partially supported by Huawei (grant TC20231108096). Research of Ivan Bliznets is supported by the project CRACKNP that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 853234).

References
Alferov, V.; and Bliznets, I. 2021.
New Length Dependent Algorithm for Maximum Satisfiability Problem. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3634–3641. Avellaneda, F.; Bilodeau-Savaria, C.-E.; and Normand, L. 2022. Weighted version of EvalMaxSAT 2022. In MaxSAT Evaluation, volume 16, 12. Bacchus, F. 2022. MaxHS in the 2022 MaxSat Evaluation. In MaxSAT Evaluation, volume 16, 17–18. Bansal, N.; and Raman, V. 1999. Upper bounds for MaxSat: Further improved. In International symposium on algorithms and computation, 247–258. Springer. Basavaraju, M.; Francis, M. C.; Ramanujan, M.; and Saurabh, S. 2016. Partially polynomial kernels for set cover and test cover. SIAM Journal on Discrete Mathematics, 30(3): 1401–1423. Belova, T.; and Bliznets, I. 2020. Algorithms for (n, 3)MAXSAT and parameterization above the all-true assignment. Theoretical Computer Science, 803: 222–233. Biere, A.; Heule, M.; and van Maaren, H. 2021. Handbook of satisfiability. IOS press. Brilliantov, K.; Alferov, V.; and Bliznets, I. 2023. Improved Algorithms for Maximum Satisfiability and Its Special Cases. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 3898–3905. Chen, J.; and Kanj, I. A. 2004. Improved exact algorithms for Max-Sat. Discrete Applied Mathematics, 142(1-3): 17– 27. Chen, J.; Xu, C.; and Wang, J. 2015. Dealing with 4variables by resolution: an improved MaxSAT algorithm. In Workshop on Algorithms and Data Structures, 178–188. Springer. Chen, J.; Xu, C.; and Wang, J. 2017. Dealing with 4variables by resolution: an improved MaxSAT algorithm. Theoretical Computer Science, 670: 33–44. Coll, J.; Li, S.; Li, C.-M.; Manya, F.; Habet, D.; and He, K. 2022a. MaxCDCL and WMaxCDCL in MaxSAT Evaluation 2022. In MaxSAT Evaluation, volume 16, 15–16. Coll, J.; Li, S.; Li, C.-M.; Manya, F.; Habet, D.; and He, K. 2022b. WMaxCDCL-BandAll in MaxSAT Evaluation 2022. In MaxSAT Evaluation, volume 16. Crowston, R.; Gutin, G.; Jones, M.; Raman, V.; Saurabh, S.; and Yeo, A. 2014. Fixed-parameter tractability of satisfying beyond the number of variables. Algorithmica, 68(3): 739– 757. Cygan, M.; Fomin, F. V.; Kowalik, Ł.; Lokshtanov, D.; Marx, D.; Pilipczuk, M.; Pilipczuk, M.; and Saurabh, S. 2015. Parameterized algorithms, volume 5. Springer. Devriendt, J. 2022. Exact: evaluating a pseudo-Boolean solver on MaxSAT problems. In MaxSAT Evaluation, volume 16, 13–14. Fomin, F. V.; and Kratsch, D. 2010. Exact Exponential Algorithms. Texts in Theoretical Computer Science. An EATCS Series. Springer. ISBN 978-3-642-16532-0. Gutin, G.; and Mnich, M. 2022. A survey on graph problems parameterized above and below guaranteed values. arXiv preprint arXiv:2207.12278. Hopcroft, J. E.; and Karp, R. M. 1973. An nˆ5/2 algorithm for maximum matchings in bipartite graphs. SIAM Journal on computing, 2(4): 225–231. Ihalainen, H.; Berg, J.; and J”arvisalo, M. 2021. Refined Core Relaxation for Core-Guided MaxSAT Solving. In CP, volume 210 of LIPIcs, 28:1–28:19. Schloss Dagstuhl - Leibniz-Zentrum f”ur Informatik. Lei, Z.; Wang, Y.; Pan, S.; Cai, S.; and Yin, M. 2022. CASHWMaxSAT-CorePlus: Solver Description. In MaxSAT Evaluation, volume 16, 8. Mahajan, M.; and Raman, V. 1999. Parameterizing above guaranteed values: MaxSat and MaxCut. J. Algorithms, 31(2): 335–354. Marek, V. W. 2009. Introduction to mathematics of satisfiability. CRC Press. Martins, R.; Manquinho, V.; and Lynce, I. 2014. OpenWBO: A Modular MaxSAT Solver,. In Sinz, C.; and Egly, U., eds., Theory and Applications of Satisfiability Testing – SAT 2014, 438–445. 
Cham: Springer International Publishing. ISBN 978-3-319-09284-3. Piotr´ow, M. 2020. UWrMaxSat: Efficient Solver for MaxSAT and Pseudo-Boolean Problems. In 32nd IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2020, Baltimore, MD, USA, November 9-11, 2020, 132–136. IEEE. Szeider, S. 2004. Minimal unsatisfiable formulas with bounded clause-variable difference are fixed-parameter tractable. Journal of Computer and System Sciences, 69(4): 656–674. Xiao, M. 2022. An Exact MaxSAT Algorithm: Further Observations and Further Improvements. In Raedt, L. D., ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, 1887–1893. International Joint Conferences on Artificial Intelligence Organization. Main Track. Xu, C.; Li, W.; Yang, Y.; Chen, J.; and Wang, J. 2019. Resolution and domination: an improved exact MaxSAT algorithm. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, 1191–1197. AAAI Press. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7925
Approximation Scheme for Weighted Metric Clustering via Sherali-Adams

Dmitrii Avdiukhin¹, Vaggos Chatziafratis², Konstantin Makarychev¹, Grigory Yaroslavtsev³
¹Northwestern University, Illinois, ²University of California at Santa Cruz, California, ³George Mason University, Virginia
[email protected], [email protected], [email protected], [email protected]

Abstract
Motivated by applications to classification problems on metric data, we study the Weighted Metric Clustering problem: given a metric d over n points, the goal is to find a k-partition of these points into clusters $C_1, \ldots, C_k$ while minimizing $\sum_{i=1}^{k} \sum_{j=1}^{k} \sum_{u \in C_i} \sum_{v \in C_j} A_{ij} d_{uv}$, where A is a k × k symmetric matrix with non-negative entries. Specific choices of A lead to Weighted Metric Clustering capturing well-studied graph partitioning problems in metric spaces, such as Min-Uncut, Min-k-Sum, Min-k-Cut, and more. Our main result is that Weighted Metric Clustering admits a polynomial-time approximation scheme (PTAS). Our algorithm handles all the above problems using the Sherali-Adams linear programming relaxation. This subsumes several prior works, unifies many of the techniques for various metric clustering objectives, and yields a PTAS for several new problems, including metric clustering on manifolds and a new family of hierarchical clustering objectives. Our experiments on the hierarchical clustering objective show that it better captures the ground-truth structural information compared to the popular Dasgupta's objective.

1 Introduction
We introduce and study the Weighted Metric Clustering problem: given n points from an arbitrary metric space (V, d), we want to find a k-partition of V, i.e., a partition into k clusters $C_1, \ldots, C_k$, where k is assumed to be a fixed constant. Because the quality of clustering may depend on the application at hand, we allow for a user-defined k × k symmetric matrix A with non-negative entries to be part of the input. The matrix A determines the "cost penalty" for how the k different clusters interact: if u is assigned to cluster $C_i$ and v is assigned to cluster $C_j$, then the pair (u, v) pays $A_{ij} d_{uv}$, where the distance between elements u, v is denoted as $d_{uv}$. Hence, our goal is to minimize the following objective:

$\mathrm{COST}(C_1, \ldots, C_k) = \sum_{i=1}^{k} \sum_{j=1}^{k} \sum_{u \in C_i} \sum_{v \in C_j} A_{ij} d_{uv}$.  (⋆)

In Weighted Metric Clustering, n is the number of input variables and k is assumed to be a fixed constant independent of n. Observe that $\sum_{u \in C_i} \sum_{v \in C_j} d_{uv}$ can be thought of as an overall measure of dissimilarity between clusters $C_i$ and $C_j$, which is weighted with $A_{ij}$ in the objective (⋆). Note that we can interpret our objective (⋆) as a minimization valued Constraint Satisfaction Problem (MIN-CSP) on variables in V and domain $D = \{1, \ldots, k\}$. In this CSP, we have a constraint for every pair of variables. The weight of the constraint between variables u and v equals the distance $d_{uv}$. The payoff function for each constraint is defined by the matrix A: namely, the cost of assigning labels i and j to variables u and v equals $A_{ij}$. The goal is to find an assignment, i.e., a mapping $\ell: V \to D$, minimizing the total payoff: $\sum_{i=1}^{k} \sum_{j=1}^{k} \sum_{u \in V} \sum_{v \in V} A_{ij} d_{uv} \cdot \mathbf{1}\{\ell(u) = i;\, \ell(v) = j\}$.

The strength of objective (⋆) lies in the flexibility of the choice of the matrix A, allowing it to cover many important problems.
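The objective (⋆) is straightforward to evaluate; the following is a minimal sketch (our own illustration, with hypothetical names) that is reused for the special cases listed next.

```python
import numpy as np

def clustering_cost(D, A, labels):
    """Evaluate the Weighted Metric Clustering objective (*).
    D: (n, n) matrix of pairwise distances d_uv; A: (k, k) symmetric
    non-negative cost matrix; labels[u] in {0, ..., k-1} is u's cluster."""
    labels = np.asarray(labels)
    # A[labels[u], labels[v]] * D[u, v], summed over all ordered pairs (u, v)
    return float(np.sum(A[labels[:, None], labels[None, :]] * D))
```

For example, with A equal to the identity matrix this computes the Min-k-Sum objective, and with A equal to the all-ones matrix minus the identity it computes the Min-k-Cut objective, matching the special cases discussed below.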
Metric Min-Uncut (Indyk 1999) This is the complement of Max-Cut, where we want to split into two clusters so as to minimize the sum of pairwise distances within clusters. If in (⋆) we set k = 2 and

A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},

then we pay d_{uv} only for elements u, v that end up in the same cluster.

Metric Min-k-Sum (Bartal, Charikar, and Raz 2001) Also termed Min-k-Uncut, this is the natural extension of the previous problem to k clusters, where we want to minimize the sum of distances between pairs of points assigned to the same cluster. Fixing A = I_{k×k}, the k × k identity matrix, yields the problem.

Metric Multiway Cut (Dahlhaus, Johnson, Papadimitriou, Seymour, and Yannakakis 1994) We can also model problems where the cost is based on the separated pairs (u, v). For example, taking A = J_{k×k} − I_{k×k}, where J is the all-ones matrix, yields the Min-k-Cut objective, with the goal of minimizing the sum of distances among all pairs of separated points. The Min-k-Cut problem additionally requires that all clusters are non-empty, and one possible approach is to fix one point per cluster; this variant of the problem, known as a multiway cut, is MAX SNP-hard (Dahlhaus et al. 1994) even for k = 3. Our algorithms are robust to such modifications of the objective and provide a PTAS for the metric case for fixed k. A related problem is the multicut problem (see, e.g., Costa, Létocart, and Roupin (2005)), where, given a set of k pairs {(s_i, t_i)}_{i=1}^{k}, we need to remove the edges with the smallest possible weight so that s_i and t_i are disconnected for all i. For fixed k, similarly to the multiway cut problem, we can guess clusters for all s_i and t_i.

Metric Clustering on Manifolds Our formulation can also capture problems where data points reside on a manifold. In this case, the clusters are related (they can form a chain, a ring, or a grid) and we would like to find a clustering by grouping adjacent data points. As an example, the chain topology on four clusters, i.e., C_1 − C_2 − C_3 − C_4, can be represented by the matrix

A = \begin{pmatrix} 2 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 1 & 2 \end{pmatrix},

indicating that pairs of points in the same cluster pay 2, pairs in neighboring clusters pay 1, and pairs in non-neighboring clusters pay 0. Understanding such problems on manifolds served as motivation for the original work by Song, Smola, Gretton, and Borgwardt (2007) that introduced the maximization variant of a special case of (⋆) called Kernel Clustering. For such problems, to the best of our knowledge, no approximation was known for the minimization versions, and our results provide the first PTAS.

New Application to Metric Hierarchical Clustering To highlight the versatility of objective (⋆), we present an application to hierarchical clustering motivated by graph compression and graph reordering problems in social networks (Dhulipala, Kabiljo, Karrer, Ottaviano, Pupyrev, and Shalita 2016). We introduce a novel family of minimization objectives over hierarchies which depend on the depth of the Lowest Common Ancestors (LCA) for pairs of leaves. In contrast, almost all prior works considered hierarchical clustering objectives based on the size of the LCA (Dasgupta 2016).

2 Previous Work and Our Results
While there were important works on obtaining PTAS's for minimization problems (Indyk 1999; de la Vega, Karpinski, and Kenyon 2004; de la Vega, Karpinski, Kenyon, and Rabani 2003), it was not a priori clear whether a PTAS for these problems could exist.
This is mainly due to pessimistic hardness results that hold for related minimization problems: for example, for every k > 2 and ε > 0, the Min-k-Sum problem cannot be approximated within a factor of n^{2−ε}, even for dense graphs (Kann, Khanna, Lagergren, and Panconesi 1996). For more background on maximization and MIN-CSPs, see Appendix A. Surprisingly, we show that every problem within our Weighted Metric Clustering (⋆) framework admits a PTAS. As a consequence, this gives alternative PTAS's for various problems; e.g., it subsumes known PTAS results for Metric Min-Uncut (Indyk 1999) and Metric Min-k-Sum (Bartal, Charikar, and Raz 2001). Furthermore, we give new PTAS's for various other problems, since any matrix A gives rise to a new clustering problem. In particular, our framework gives the first PTAS for the metric minimization versions of clustering on manifolds mentioned above (Song et al. 2007), the multiway cut (Dahlhaus et al. 1994), and the multicut (Costa, Létocart, and Roupin 2005) problems. Furthermore, we give a PTAS for a new family of hierarchical clustering objectives motivated by graph compression and graph relabeling. An interesting aspect of our result is that a single algorithmic technique based on the Sherali–Adams LP relaxation can accommodate all problems. Notice that Min-k-Sum alone required a variety of tools (and often ad hoc ideas) to get a PTAS: for example, the PTAS of Indyk (1999) for k = 2 relied on the already known PTAS for metric Max-Cut; the first non-trivial approximation of Min-k-Sum (for general k) relied on metric embeddings into hierarchically separated trees combined with dynamic programming; and finally, the PTAS of de la Vega, Karpinski, Kenyon, and Rabani (2003) used sampling and exhaustive search combined with careful reassignment of nodes to the k clusters. Our main result can be seen as a unified method that provides a PTAS not only for Min-k-Sum, but for all other metric problems in our framework.

Sherali–Adams. The Sherali–Adams lift-and-project method (Sherali and Adams 1990) is a powerful technique for strengthening linear programming relaxations. This, as well as other lift-and-project methods (e.g., by Lovász and Schrijver (1991)), has been extensively studied in Computer Science and Operations Research.1 Researchers asked whether Sherali–Adams can be used to improve approximation guarantees for constraint satisfaction and combinatorial optimization problems. It turns out that in many cases, the answer to this question is negative. Yannakakis (1988) proved that the Traveling Salesman Problem (TSP) cannot be solved exactly using a symmetric "extended formulation" of polynomial size and, in particular, by a Sherali–Adams relaxation of polynomial size. De la Vega and Kenyon-Mathieu (2007) and Charikar, Makarychev, and Makarychev (2009a) showed that the Sherali–Adams relaxation cannot be used to improve approximation guarantees for many constraint satisfaction problems if we do not make additional assumptions about the structure of the CSP instances (see also Alekhnovich, Arora, and Tourlakis (2011)). However, in some cases, Sherali–Adams can be used to obtain better approximations for MAX-CSPs. In particular, Yoshida and Zhou (2014) gave a PTAS for dense instances of MAX-CSPs (but not MIN-CSPs!). For additional examples of MAX-CSP approximations using Sherali–Adams, we refer the reader to recent papers by Thapper and Zivny (2017); Hopkins, Schramm, and Trevisan (2020); Romero, Wrochna, and Živný (2021); Cohen-Addad, Lee, and Newman (2022); Mezei, Wrochna, and Živný (2023).
Kernel Clustering Motivated by applications in machine learning and statistics, Kernel Clustering was proposed by Song, Smola, Gretton, and Borgwardt (2007) as a broad family of clustering methods based on the maximization of dependence between the input variables and their cluster labels. It is a unified framework for various clustering methods 1See the survey by Chlamtac and Tulsiani (2012) for an overview of results. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7927 arising from geometric, spectral or statistical considerations, and it has connections to k-means, clustering under topological constraints, and hierarchical clustering. Formally, their goal is to maximize objective (⋆) under the assumption that both the distance matrix d and the cost matrix A are positive semidefinite. On the other hand, while we require d to be a metric, we don’t require d and A to be positive semidefinite. Kernel Clustering is a generalization of the positive semidefinite Grothendieck problem (Nesterov 1998) that has found many algorithmic applications (Alon and Naor 2004; Charikar and Wirth 2004; Charikar, Makarychev, and Makarychev 2009b), and has further connections to semidefinite programming, non-convex optimization and the Unique Games Conjecture (Khot and Naor 2008, 2013). Khot and Naor (2008, 2013) studied Kernel Clustering, presenting constant factor approximations and hardness results. In our paper, we show a PTAS for the minimization version of the problem under metric assumption. 2.1 Main Result The main question we address here is the following: What is the best approximation for the Weighted Metric Clustering objective (⋆)? Our main result shows that we can get an arbitrary good approximation. Theorem 2.1. (Informal) There is a PTAS2 for the Weighted Metric Clustering objective (⋆). As a corollary, we get a PTAS not only for all the abovementioned problems, but also many more, since any choice of the matrix A generates a new, different clustering objective. In particular, with careful choice of A, we provide PTAS’s for problems where the PTAS’s were not previously known, such as clustering on manifolds and a family of hierarchical clustering objectives (Section 3), where each pair of elements is penalized depending on the depth of their least common ancestors. We describe the depth-based hierarchical clustering objectives in Section 3, with additional motivation based on the Minimum Logarithmic Arrangement presented in Appendix F, and we empirically demonstrate the advantage of these objectives in Section 6. Note that without metric assumption, we cannot have a PTAS even when k = 3 (Khot and Naor 2013) under the Unique Games Conjecture, hence it’s remarkable that a PTAS for the metric minimization version is possible. Moreover, we handle Weighted Metric Clustering using a single algorithmic technique via the Sherali–Adams linear programming (Sherali and Adams 1990). This subsumes several prior works, unifies many of the techniques on various clustering objectives, and yields PTAS’s for new problems, including a new family of hierarchical clustering objectives. Our Techniques While it is already known that the Sherali–Adams hierarchy can be used to get PTAS’s for 2For a minimization problem, a PTAS is an algorithm that, given ε > 0 as a parameter, returns a (1 + ε)-approximation to the optimal value and runs in polynomial time for any constant ε. For maximization, we seek a (1 −ε)-approximation. 
CSPs, the naïve approach would result in additive error terms, which can be acceptable for maximization objectives but are intolerable for minimization objectives, such as (⋆). Our algorithm makes Sherali–Adams relaxations applicable to a wide class of minimization objectives and has two stages: Stage I assigns most of the elements via independent rounding, and Stage II carefully handles the rest of the points, which we refer to as outliers. To handle the outliers, we rely on a second objective LP_II, which is optimized simultaneously with the Sherali–Adams relaxation LP_I; formally, we minimize max(LP_I, LP_II) and ensure that it is upper-bounded by OPT. On the other hand, a solution to LP_II simplifies the process of assigning the outliers to the clusters. See Section 4 for the details.

Practical Algorithm and Experiments In Section 6, we introduce a practical version of our algorithm based on LP_II, which provides a constant-factor approximation to objective (⋆). We run our experiments on 10^4 data points and show that our hierarchical clustering objective recovers a ground-truth clustering better than the popular Dasgupta's objective (Dasgupta 2016).

3 Application to Hierarchical Clustering
We showcase how our general Weighted Metric Clustering framework (⋆) can be applied to the problem of finding a hierarchy over clusters rather than a partition. In Hierarchical Clustering (HC), given a set of points V, the goal is to bijectively map the points on the leaves of a tree T. HC is a very popular method with a wide range of applications (Leskovec, Rajaraman, and Ullman 2020). Recent literature (Dasgupta 2016; Moseley and Wang 2017; Cohen-Addad, Kanade, Mallmann-Trenn, and Mathieu 2019) introduces a number of HC objectives where, for the hierarchical tree T, each pair of elements (u, v) is penalized based on the number of leaves under the Lowest Common Ancestor (LCA) of u and v in T, denoted as LCA_T(u, v) (for a literature review, see Appendix F). Instead of using the number of leaves under the LCA, here we propose an optimization objective for HC where the penalty term is defined based on the depth of the LCA. For a node v ∈ T, let h(v) denote the depth of v in the tree, defined as the number of edges on the shortest path from the root to v (e.g., h(r) = 0 if r is the root node). Our goal is to minimize the following over all possible binary trees T:

H(T) = \sum_{u,v \in V} d_{uv} \, h(LCA_T(u, v))    (Depth-HC)

Here d is a metric, and we shall note that HC has been extensively studied for metric spaces (Agarwala, Bafna, Farach, Paterson, and Thorup 1998; Ailon and Charikar 2005; Dasgupta and Long 2005). Objective Depth-HC captures the fact that it is better to separate the distant points early in the hierarchical structure, i.e., h(LCA_T(u, v)) should be small when d_{uv} is large. For HC, we show the following result (proof in Appendix F).

Theorem 3.1. For any metric d, there exists a PTAS for minimizing the objective Depth-HC.

In order to map the HC objective to Weighted Metric Clustering (⋆), we must appropriately choose matrix A. The main idea is to show that it suffices to recover the tree up to depth log(1/ε) and build random trees on deeper levels. Let T◦ be a full binary tree of depth log(1/ε), and associate the k = 1/ε leaves ℓ_1, ..., ℓ_k of T◦ with corresponding clusters C_1, ..., C_k. For different clusters C_i and C_j, we define A_{ij} as the depth of their LCA, i.e., h(LCA_{T◦}(ℓ_i, ℓ_j)).
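This construction is easy to sketch in code. The following is our reading of the reduction, not the paper's code: leaves are indexed by their root-to-leaf bit strings, so the LCA depth of two leaves is the length of their common bit prefix; the diagonal convention A_ii = depth (i.e., h(ℓ_i)) is our assumption, and the paper's exact treatment of within-cluster pairs is in Appendix F.

```python
import numpy as np

def depth_hc_cost_matrix(depth):
    """A_ij = depth of the LCA of leaves i and j in a full binary tree
    with the given depth (k = 2**depth leaves)."""
    k = 2 ** depth
    A = np.zeros((k, k), dtype=int)
    for i in range(k):
        for j in range(k):
            if i == j:
                A[i, j] = depth  # LCA of a leaf with itself is the leaf
            else:
                # Highest differing bit marks the level where i and j split.
                A[i, j] = depth - (i ^ j).bit_length()
    return A

print(depth_hc_cost_matrix(2))
# [[2 1 0 0]
#  [1 2 0 0]
#  [0 0 2 1]
#  [0 0 1 2]]
```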
Depth-based objectives are useful in Graph Compression and Vertex Reordering problems (Raghavan and GarciaMolina 2003; Boldi and Vigna 2004; Chierichetti et al. 2009; Dhulipala et al. 2016), where the goal is to find space-efficient labeling schemes for the nodes in the graph. Roughly speaking, the depth h(LCAT (i, j)) corresponds to the bits needed to represent a vertex in the graph, and, exploiting the fact that similar nodes tend to have similar sets of neighbors, one can significantly reduce the bit-complexity of the graph representation. A more in-depth discussion and the proof of Theorem 3.1 are deferred to Appendix F. Extensions. We note that our result for objective DepthHC in Theorem 3.1 also holds for more general cost functions than the hierarchical clustering objective DepthHC specified above. For example, instead of the depth of the lowest-common ancestor, h(LCAT (i, j)), we could also penalize according to the logarithm of the depth, i.e. log h(LCAT (i, j)), or the square of the depth, i.e. h2(LCAT (i, j)); our algorithms and proofs would still guarantee a PTAS in these cases. In fact, any function which depends on the depth subexponentially works. For the formal statement regarding the more general hierarchical clustering objectives, see Appendix F. 4 Sherali–Adams and Local Probability Distributions Our (1 + ε)-approximation algorithm for Weighted Metric Clustering uses a Sherali–Adams relaxation for the problem. Sherali–Adams (Sherali and Adams 1990) is a lift-and-project method for strengthening linear programming (LP) relaxations. In this paper, we will use a “local probability distribution” approach to Sherali–Adams (de la Vega and Kenyon-Mathieu 2007; Charikar, Makarychev, and Makarychev 2009a). We also use a method for removing dependencies between random variables in local distributions, which was developed by Raghavendra and Tan (2012) (see also Barak, Raghavendra, and Steurer (2011) and Yoshida and Zhou (2014)). We now describe the Sherali–Adams LP relaxation. For every tuple of points v ∈V r, where r ≥2 is a fixed integer parameter, we have a set of LP variables that defines a probability distribution of “labels” on v1, . . . , vr. For every ℓ∈{1, . . . , k}r, we introduce a variable Pv  v1 ∈ Cℓ1, . . . , vr ∈Cℓr  . Each of these kr variables (sometimes called pseudo-probabilities) lies in [0, 1] and represents the probability that point vi is assigned to cluster Cℓi for all i.3 For every v ∈V k, the linear programming relaxation has 3Formally, one should think about assigning point vi to Cℓi as of assigning label ℓi to point vi the constraint P ℓ∈{1,...,k}r Pv  v1 ∈Cℓ1, . . . , vr ∈Cℓr  = 1. This constraint ensures that in a feasible LP solution, every Pv indeed defines a local probability distribution on points v1, . . . , vr. We also add a constraint that guarantees that this probability does not depend on the order of points v1, . . . , vr. For example, for r = 2, we impose constraint P[a ∈C1, b ∈ C2] = P[b ∈C2, a ∈C1], where a and b are arbitrary points from V . Specifically, for every permutation σ of {1, . . . , k}: Pv  v1 ∈Cℓ1, . . . , vr ∈Cℓr  = Pv  vσ(1) ∈Cℓσ1 , . . . , vσk ∈Cℓσk  . LP variables Pv  v1 ∈Cℓ1, . . . , vr ∈Cℓr  prescribe probabilities to elementary events  v1 ∈Cℓ1, . . . , vr ∈Cℓr and thus define probabilities for all events: for E ⊆{1, . . . , k}r, we let Pv[v ∈E] = P ℓ∈E Pv  v1 ∈Cℓ1, . . . , vr ∈Cℓr  . In other words, Pv[v ∈E] is the probability that labels for v1, . . . , vr drawn from local distribution P are Cℓ1, . . . 
, C_{ℓ_r} (respectively) with ℓ ∈ E. To avoid ambiguity, we will use a different notation to denote probabilities associated with our algorithm. We shall write Pr[v_1 ∈ X_1, ..., v_r ∈ X_r] to denote the probability that points v_1, ..., v_r belong to random sets X_1, ..., X_r chosen by the algorithm.

An important constraint of the Sherali–Adams relaxation is that all local distributions are locally consistent, as we explain next. Consider two tuples u and v. Let z be the set of common points in u and v. Both u and v define marginal probability distributions on cluster labels for points in z. We require that these marginal distributions be the same. Specifically, we add a constraint to the linear program that enforces that label distributions on u and v agree on the intersection z = u ∩ v. We denote the marginal probability distribution on every set z of size at most r by P_z. If z consists of one point u or two points u, v, we write P_u and P_{uv}, respectively. We stress that even though all local distributions P are locally consistent, generally speaking, there is no global distribution of cluster labels that is consistent with all local distributions. We also note that the size of the Sherali–Adams relaxation is exponential in r, since the number of variables equals n^r · k^r. Thus, if we want to solve a Sherali–Adams relaxation in polynomial time, the parameter r must be a constant.

When each variable in a solution to the Sherali–Adams relaxation is equal to 0 or 1, we call the solution integral. An integral solution corresponds to an actual clustering in which u belongs to C_i if and only if P_u[u ∈ C_i] = 1. Moreover, P_v[v_1 ∈ C_{ℓ_1}, ..., v_r ∈ C_{ℓ_r}] = 1 if and only if v_1 ∈ C_{ℓ_1}, ..., v_r ∈ C_{ℓ_r}. That is, P_v[v_1 ∈ C_{ℓ_1}, ..., v_r ∈ C_{ℓ_r}] = 1{v_1 ∈ C_{ℓ_1}, ..., v_r ∈ C_{ℓ_r}}, where 1{E} is the indicator of the event E.

We now define the objective function for our Sherali–Adams relaxation and introduce some additional constraints. We assume that we know the sizes of the optimal clusters n_1 = |C*_1|, ..., n_k = |C*_k|. We additionally assume that we know their centers c_1 ∈ C*_1, ..., c_k ∈ C*_k, which guarantee a 3-approximation (see Lemma B.2). Note that there are at most O(n^{2k}) combinations of different c_i's and n_j's, and hence we can try all possibilities. We use Π to denote the particular choice of c_i's and n_j's and call it the clustering profile. The objective of our linear programming relaxation is the maximum of LP_I and LP_II under the constraints above:

minimize LP = max(LP_I, LP_II),    (1)

LP_I = \frac{1}{2} \sum_{i=1}^{k} \sum_{j=1}^{k} \sum_{u,v \in V} A_{ij} \, d_{uv} \, P_{uv}[u \in C_i, v \in C_j]

LP_II = \frac{1}{3} \sum_{i=1}^{k} \sum_{u \in V} F_\Pi(u, i) \, P_u[u \in C_i],

F_\Pi(u, i) = \sum_{j=1}^{k} n_j \, A_{ij} \, d_{u \, c_{i \wedge j}},    (2)

where i ∧ j is defined as min(i, j). The first objective LP_I is a direct relaxation of the objective function of Weighted Metric Clustering: in an integral LP solution (when each P_{uv}[u ∈ C_i, v ∈ C_j] is 0 or 1), the value of LP_I equals the cost of the corresponding combinatorial solution to Weighted Metric Clustering. Consequently, in the optimal integral solution to the problem, LP_I = OPT, where OPT = COST(C*_1, ..., C*_k) is the value of the optimal solution. The second objective LP_II is upper bounded by OPT in the optimal integral solution by Lemma B.2 when the c_i's and n_j's are guessed correctly. This is due to the fact that, for any u ∈ V, i ∈ [k], and the correct guess of n_i and c_i, n_i d_{u c_i} is a good approximation of \sum_{v \in C_i} d_{uv} (de la Vega, Karpinski, Kenyon, and Rabani 2003).
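Eq. (2) is inexpensive to evaluate for a candidate profile Π. A minimal sketch of ours (0-based indices; not the paper's code):

```python
import numpy as np

def F_pi(D, A, centers, sizes, u, i):
    """F_Pi(u, i) = sum_j n_j * A[i, j] * d(u, c_{min(i, j)}), as in Eq. (2).

    D       -- n x n distance matrix
    A       -- k x k cost matrix
    centers -- guessed center indices c_1..c_k
    sizes   -- guessed cluster sizes n_1..n_k
    """
    k = len(centers)
    return sum(sizes[j] * A[i, j] * D[u, centers[min(i, j)]]
               for j in range(k))

def lp2_value(D, A, centers, sizes, P):
    """LP_II = (1/3) * sum_i sum_u F_Pi(u, i) * P[u, i] for a fractional
    assignment matrix P whose rows sum to 1."""
    n, k = P.shape
    return sum(F_pi(D, A, centers, sizes, u, i) * P[u, i]
               for u in range(n) for i in range(k)) / 3.0
```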
Therefore, OPT_{LP_I} ≤ OPT and OPT_{LP_II} ≤ OPT, where OPT_{LP_I} and OPT_{LP_II} are the values of LP_I and LP_II in the optimal solution to our linear program. Intuitively, LP_II is used for bounding the error terms in the analysis and, compared to LP_I, has the following advantages. First, every term involves a single point u (note that the other variables in each term are either guessed or fixed), and hence it is easy to optimize. Second, LP_II refers to cluster centers instead of the clusters themselves, which is important for the case when there are multiple equivalent solutions to the original problem, e.g., in the case of Min-Uncut. In the analysis, we often use the triangle inequality to bound d_{uv} ≤ d_{uc} + d_{cv}, with the choice of c being crucial. LP_II fixes the center for each cluster, which makes the choice of c clear in each particular case. Finally, we add capacity constraints to our relaxation, which are satisfied in the integral solution to Weighted Metric Clustering. For all i ∈ {1, ..., k}: \sum_{u \in V} P_u[u \in C_i] \le n_i. These constraints are important since LP_II is a good approximation of (⋆) only if cardinalities are guessed and enforced correctly.

4.1 Making Point Distributions Nearly Independent
We now define nearly independent local distributions and then describe a procedure MAKEINDEPENDENT that transforms the local distributions P obtained by solving the Sherali–Adams LP relaxation into nearly independent local distributions P*. This procedure uses the conditional probability technique for Sherali–Adams (Raghavendra and Tan 2012). The main difference between our result and theirs is that we require the local distributions P* (see below) to be simultaneously nearly independent for k sets D_1, ..., D_k, while Raghavendra and Tan (2012) obtain a globally uncorrelated solution, which corresponds to the case when we have only one set A = V. For us, it is crucial to have sets D_1, ..., D_k in the definition because some sets D_i may have size o(n) (e.g., √n). In that case, the guarantees of the algorithm by Raghavendra and Tan (2012) are not sufficient for us. First, we introduce some notation. Denote by P_u ⊗ P_v the distribution of pairs u and v in which u and v are sampled independently with distributions P_u and P_v: (P_u ⊗ P_v)[u ∈ C_i, v ∈ C_j] = P_u[u ∈ C_i] · P_v[v ∈ C_j].

Definition 4.1. Let D_1, ..., D_k be subsets of V. We say that a family of local probability distributions {P} is (γ, δ)-nearly independent for sets D_1, ..., D_k if the following condition holds: for every u ∈ V and every j ∈ {1, ..., k}, for all but a γ fraction of v in D_j, we have ‖P_u ⊗ P_v − P_{u,v}‖_{TV} ≤ δ. Equivalently, for all u ∈ V and j ∈ [k], the number of elements v ∈ D_j such that ‖P_u ⊗ P_v − P_{u,v}‖_{TV} > δ must be at most γ|D_j|. If ‖P_u ⊗ P_v − P_{u,v}‖_{TV} ≤ δ, we say that u and v are δ-nearly independent according to P_{u,v}.

Theorem 4.2. For every δ, γ, η ∈ (0, 1) and integer k > 1, there exists a randomized polynomial-time procedure that, given a solution P to the Sherali–Adams relaxation with r ≥ 2 + (k log_2 k)/(2δ²γη) rounds, outputs a family of local probability distributions {P*_u}_u and {P*_{uv}}_{uv} and an exit status ("success" or "failure") such that
1. If the algorithm succeeds, then P* is (γ, δ)-nearly independent.
2. For all u, v ∈ V and i, j ∈ {1, ..., k}, E[P*_u[u ∈ C_i]] = P_u[u ∈ C_i] and E[P*_{uv}[u ∈ C_i, v ∈ C_j]] = P_{uv}[u ∈ C_i, v ∈ C_j].
3. The algorithm fails with probability at most η.

The goal of algorithm MAKEINDEPENDENT is to build a (γ, δ)-nearly independent family {P*} while preserving the expectation of the LP value.
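The δ-near-independence test of Definition 4.1 boils down to a total-variation computation on k × k tables. A minimal sketch (ours, not the paper's code):

```python
import numpy as np

def tv_distance(P_joint, p_u, p_v):
    """Total-variation distance between the joint table P_{u,v} (k x k)
    and the product of the single-point marginals p_u, p_v (length k)."""
    return 0.5 * np.abs(P_joint - np.outer(p_u, p_v)).sum()

# A perfectly correlated pair is far from independent:
p_u = p_v = np.array([0.5, 0.5])
P_corr = np.array([[0.5, 0.0],
                   [0.0, 0.5]])      # u and v always share a cluster
print(tv_distance(P_corr, p_u, p_v))  # 0.5
print(tv_distance(np.outer(p_u, p_v), p_u, p_v))  # 0.0
```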
The algorithm builds a sequence of distributions {P(0)} = {P}, {P(1)}, . . .. At iteration t, it finds a point u violating the (γ, δ)-nearly independence condition, and then conditions local distributions on the event P(t)[u ∈Ci] for i drawn from distribution P(t) u . Loosely speaking, every time we do the conditioning step, we make more pairs (u, v) nearly independent. We show that a certain measure – entropy – decreases with each iteration by at least a fixed amount, and hence in approximately r steps, we get nearly independence with the desired parameters. We provide more details and prove this theorem in Appendix C. 5 Main Algorithm In this section, we outline our (1 + ε)-approximation algorithm or PTAS (polynomial-time approximation scheme) for the Weighted Metric Clustering problem. We provide full details in Appendices D and E. The pseudocode is provided in Algorithm 1. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7930 Algorithm 1: PTAS for Weighted Metric Clustering input : V – set of points, {duv}u,v∈V – pairwise distances, {Aij}k i,j=1 – inter-cluster costs 1 parameters: r – number of rounds of SA relaxation, η – outlier probability threshold, δ – fraction of dependent points, γ – independence threshold 2 Guess cluster centers c1, . . . , ck and sizes n1, . . . , nk 3 Let {P} be the r-round solution to SA relaxation for Problem (1) 4 Di = {u ∈V : Pu[u ∈Ci] ≥η} for all i 5 {P∗} = MAKEINDEPENDENT({P}, {Di}, δ, γ) 6 // Tentative assignment via independent rounding 7 for all u ∈V do 8 Assign u to Ci with probability P∗[u ∈Ci] 9 // Stage I: Assigning non-outliers 10 if P∗is (γ, δ)-nearly independent for D1, . . . , Dk then 11 Xi = Ci ∩Di, O = S i(Ci \ Di) 12 else 13 O = V // Every point is outlier 14 // Stage II: Assigning outliers 15 for all u ∈O do 16 Assign u to Yi with probability P[u ∈Ci]. 17 return (X1 ∪Y1, . . . , Xk ∪Yk) Algorithm 2: MAKEINDEPENDENT({P}, {Di}, δ, γ) input : {P} – r-round solution to SA relaxation, {Di}k i=1 – candidate sets for each cluster, δ – fraction of dependent points, γ – independence threshold 1 Let {P(0)} be {P} 2 for t = 0, 1, . . . , r −3 do 3 if {P(t)} is (γ, δ)-nearly independent for sets D1, . . . , Dk (Def. 4.1) then 4 return {P(t)} 5 Let u be a point violating the (γ, δ)-nearly independence condition. 6 Assign u to Ci with probability P(t)[u ∈Ci]. 7 Let {P(t+1)} be {P(t)} conditioned on u ∈Ci. 8 return {P(r−2)} Algorithm Outline In the first step, the algorithm guesses the cluster centers {ci} and sizes {nj}, which we call the clustering profile and denote by Π. Note that all choices of {ci} and {nj} can be enumerated in polynomial time, and our analysis assumes the correct choice. Then, the algorithm solves the r-round Sherali–Adams relaxation for Weighted Metric Clustering (see Section 4) and obtains local distributions P. For constant r, the size of the relaxation is polynomial in n, and thus it can be solved in polynomial time. We then assign points to clusters using a two-stage algorithm. At Stage I, we assign most points to clusters X1, . . . , Xk and place the remaining points, which we call “outliers”, in set O. We guarantee that the cost of the partial clustering X1, . . . , Xk is at most (1 + ε)OPT in expectation and each point is an outlier with probability at most ηk (see Lemma D.2 for the formal statement), where η is a small parameter depending on ε. At Stage II, we cluster the outliers from set O. For this purpose, we use a variant of the 3-approximation algorithm, which we provide in Section B. 
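The tentative-assignment and outlier-marking steps of Algorithm 1 (lines 6-16) translate directly into code. The following sketch is ours and omits the global (γ, δ) success check on P*; marginals are assumed to be given as dense arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_round(P, P_star, eta):
    """Stage I sketch: independent rounding with outlier marking.

    P      -- n x k Sherali-Adams marginals P_u[u in C_i] (rows sum to 1)
    P_star -- n x k nearly independent marginals from MAKEINDEPENDENT
    eta    -- threshold defining candidate sets D_i = {u : P[u, i] >= eta}
    """
    n, k = P.shape
    X = [[] for _ in range(k)]
    outliers = []
    for u in range(n):
        i = rng.choice(k, p=P_star[u])  # tentative assignment (line 8)
        if P[u, i] >= eta:              # u belongs to the candidate set D_i
            X[i].append(u)
        else:
            outliers.append(u)          # reassigned at Stage II
    return X, outliers
```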
Since the number of outliers is very small, the cost of clustering them is also small, despite the fact that we use a constant-factor approximation for the outliers. Finally, we combine the clusterings obtained at Stage I and Stage II and get a clustering of cost at most (1 + ε)OPT. The algorithm for clustering outliers is discussed in Appendix E.

Stage I We now examine the first stage of the algorithm in more detail. It is inspired by Yoshida and Zhou (2014) and Raghavendra and Tan (2012). The general idea is to transform the solution of the Sherali–Adams relaxation into a family of local distributions {P*} such that

P*_{uv}[u ∈ C_i, v ∈ C_j] ≈ P*_u[u ∈ C_i] · P*_v[v ∈ C_j]    (3)

for most pairs of points. This can be done using the method discussed in the previous section. Next, we want to independently assign every point u to cluster i with probability P_u[u ∈ C_i]. If condition (3) holds for some pair (u, v) and all i, j, then the expected cost this algorithm pays for clustering the pair (u, v), \sum_{i,j} d_{uv} A_{ij} P_u[u ∈ C_i] · P_v[v ∈ C_j], is approximately equal to the LP cost of this pair, \sum_{i,j} d_{uv} A_{ij} P_{uv}[u ∈ C_i, v ∈ C_j]. The problem, however, is that condition (3) does not hold for all pairs (u, v). Furthermore, it may happen that the algorithm creates a very expensive small cluster X_i such that for all pairs u, v ∈ X_i we do not have the approximate equality (3). Consequently, the cost of such a cluster cannot be charged to the LP relaxation. The discussion above leads to the following idea: let us make local distributions not only nearly independent for most pairs (u, v), but nearly independent for each point u and most v's in each cluster the algorithm creates. This is formally stated in Definition 4.1. However, the problem is that the algorithm does not know in advance what clusters it is going to produce. So, it uses a proxy for these clusters: the sets of candidate points D_1, ..., D_k. Set D_i contains points that are somewhat likely to be assigned to cluster i.

We now summarize Stage I. First, the algorithm solves the Sherali–Adams relaxation. Then, it defines sets of candidates D_1, ..., D_k, where each D_i contains the points u for which P[u ∈ C_i] ≥ η (where η is a small constant depending on ε). It calls algorithm MAKEINDEPENDENT (described in Section 4.1) with sets D_1, ..., D_k and obtains (γ, δ)-nearly independent local distributions P*. Next, it randomly assigns points to clusters using distribution P*. To make sure that we can pay for each created cluster X_i, this cluster needs to be a subset of the corresponding candidate set D_i. Thus, if point u is assigned to X_i but u is not in D_i, we remove u from X_i and mark u as an outlier. Stage I returns sets X_1, ..., X_k along with the set of outliers O, which are assigned to clusters at Stage II. We can now charge the cost of all pairs (u, v) that are nearly independent to the LP objective. Using triangle inequalities, we can also bound the cost of all other pairs (u, v) in V \ O. We provide all details in Appendix D.

Stage II At Stage II, we assign outliers to clusters. Our approach to dealing with outliers is somewhat similar to the approach introduced by Makarychev, Makarychev, and Razenshteyn (2019).
As discussed above, the number of outliers is small, which is one of the main reasons why their assignment does not significantly change the objective. The outliers are assigned using independent rounding based on P (instead of P*, which is used for non-outliers). In Theorem E.3, we analyze the cost of assigning outliers to clusters. Putting everything together, we prove that our algorithm provides a PTAS.

Theorem 5.1. For δ = γ = η²ε/9 and r = 2 + (k log_2 k)/(2δ²γη), Algorithm 1 finds a clustering with expected objective value within a (1 + ε)-factor of OPT with probability at least 1 − η, for any η ≤ ε²/(90k²).

6 Experiments
In this section, we perform experiments on the hierarchical clustering objective (Depth-HC) defined in Section 3:

H(T) = \sum_{u,v \in V} d_{uv} \, h(LCA_T(u, v))

For our experiments, we use a simplified version of the algorithm, based on the LP_II relaxation from Section 4, which achieves a 3-approximation (Appendix B):

\sum_{i=1}^{k} \sum_{j=1}^{k} \sum_{u \in V} n_j \, A_{ij} \, d_{u c_{i \wedge j}} \, P_u[u \in C_i],

where n_1, ..., n_k are cluster cardinalities and c_1, ..., c_k are cluster centers. This objective can be optimized efficiently as an instance of a minimum-cost flow problem, while precisely satisfying the imposed cardinality constraints. We run the algorithm multiple times with different guesses of {n_i} and {c_i}, and, since the guesses might not be precise, we improve the resulting solution using local search.

Datasets We perform the evaluation on various hierarchical datasets. In this section, we present experiments on random subsamples (of sizes 10², 10³, and 10⁴) of the well-known 20 NEWSGROUPS dataset (Lang 1995), and in Appendix G we present additional experiments on ZEBRAFISH (Wagner et al. 2018), CIFAR-10 (Krizhevsky and Hinton 2009), and other datasets. The inputs in 20 NEWSGROUPS are text documents, which we transform into Euclidean vectors using a pre-trained language model (see Appendix G for details). Finally, we use the ground-truth hierarchical structure to obtain a flat clustering based on the top-level split.

Objectives We compare the following objectives:
• Depth-based objective (Depth-HC). Based on the algorithm from Section 3, we approximate the objective by building a hierarchical tree up to a certain level and building random trees on the resulting clusters. We select the level ℓ so that the number of clusters 2^ℓ is close to the number of ground-truth clusters.
• Dasgupta's objective (Dasgupta 2016), defined as \sum_{u<v} w(u, v) |LCA_T(u, v)|, where w is the similarity between items. We convert distances to similarities using the standard RBF kernel: w(x, y) = exp(−‖x − y‖²/2). We optimize Dasgupta's objective using recursive Min-Cut (Chatziafratis et al. 2020), for which we use METIS (Karypis and Kumar 1995).

[Figure 1: Comparison of the depth-based objective (Depth-HC) and Dasgupta's objective. Data points correspond to averages over 10 runs; error bars correspond to the 10% and 90% quantiles. The experiments are performed on a single-core Intel Xeon 2.2 GHz CPU.]

Evaluation and Results We evaluate how well the above objectives recover the ground-truth clustering information using the dendrogram purity objective:

DP(T) = \frac{1}{\sum_{i=1}^{m} |C_i|^2} \sum_{i=1}^{m} \sum_{u,v \in C_i} \frac{|C_i \cap LCA_T(u, v)|}{|LCA_T(u, v)|},

where C_1, ..., C_m are the ground-truth clusters. Intuitively, this objective measures how well-separated the ground-truth clusters are in the tree. Figure 1 shows that the (Depth-HC) objective achieves significantly better dendrogram purity than Dasgupta's objective. Moreover, although our algorithm is noticeably slower, the gap in quality grows with the number of data points, exceeding a factor of two for 10⁴ points.
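Dendrogram purity is straightforward to compute given access to LCAs. A sketch of ours, where leaves_under and lca are hypothetical interfaces to the hierarchy T (here LCA_T(u, v) in the formula denotes the set of leaves under the LCA node):

```python
def dendrogram_purity(leaves_under, lca, clusters):
    """DP(T) for ground-truth clusters C_1..C_m.

    leaves_under -- dict: tree node -> set of leaf ids under that node
    lca          -- function (u, v) -> tree node, the LCA of leaves u, v
    clusters     -- list of sets of leaf ids (ground-truth clusters)
    """
    num = 0.0
    for C in clusters:
        for u in C:
            for v in C:          # ordered pairs, matching the normalizer
                leaves = leaves_under[lca(u, v)]
                num += len(C & leaves) / len(leaves)
    denom = sum(len(C) ** 2 for C in clusters)
    return num / denom
```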
To conclude, these experiments demonstrate the usefulness of our hierarchical objective as well as the existence of efficient approaches for its optimization. We provide additional experiments in Appendix G. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7932 Acknowledgements Konstantin Makarychev is supported by the NSF Awards CCF-1955351, CCF-1934931, and EECS-2216970. References Agarwala, R.; Bafna, V.; Farach, M.; Paterson, M.; and Thorup, M. 1998. On the approximability of numerical taxonomy (fitting distances by tree metrics). SIAM Journal on Computing, 28(3): 1073–1085. Ailon, N.; and Alon, N. 2007. Hardness of fully dense problems. Information and Computation, 205(8): 1117–1129. Ailon, N.; and Charikar, M. 2005. Fitting tree metrics: Hierarchical clustering and phylogeny. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05), 73–82. IEEE. Alekhnovich, M.; Arora, S.; and Tourlakis, I. 2011. Towards strong nonapproximability results in the Lov´asz-Schrijver hierarchy. computational complexity, 20: 615–648. Alon, N.; Azar, Y.; and Vainstein, D. 2020. Hierarchical clustering: A 0.585 revenue approximation. In Conference on Learning Theory, 153–162. PMLR. Alon, N.; de la Vega, W. F.; Kannan, R.; and Karpinski, M. 2002. Random sampling and approximation of MAX-CSP problems. In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, 232–239. Alon, N.; and Naor, A. 2004. Approximating the cut-norm via Grothendieck’s inequality. In Proceedings of the 36th annual ACM symposium on Theory of computing, 72–80. Arora, S.; Karger, D. R.; and Karpinski, M. 1999. Polynomial Time Approximation Schemes for Dense Instances of NP-Hard Problems. J. Comput. Syst. Sci., 58(1): 193–210. Bansal, N.; Blum, A.; and Chawla, S. 2004. Correlation clustering. Machine learning, 56(1): 89–113. Barak, B.; Raghavendra, P.; and Steurer, D. 2011. Rounding semidefinite programming hierarchies via global correlation. In 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 472–481. IEEE. Bartal, Y.; Charikar, M.; and Raz, D. 2001. Approximating min-sum k-clustering in metric spaces. In Proceedings of the thirty-third annual ACM symposium on Theory of computing, 11–20. Bazgan, C.; de la Vega, W. F.; and Karpinski, M. 2003. Polynomial time approximation schemes for dense instances of minimum constraint satisfaction. Random Struct. Algorithms, 23(1): 73–91. Boldi, P.; and Vigna, S. 2004. The webgraph framework I: compression techniques. In Proceedings of the 13th international conference on World Wide Web, 595–602. Charikar, M.; and Chatziafratis, V. 2017. Approximate hierarchical clustering via sparsest cut and spreading metrics. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 841–854. SIAM. Charikar, M.; Guruswami, V.; and Wirth, A. 2005. Clustering with qualitative information. Journal of Computer and System Sciences, 71(3): 360–383. Charikar, M.; Hajiaghayi, M. T.; Karloff, H.; and Rao, S. 2010. ℓ2 2 spreading metrics for vertex ordering problems. Algorithmica, 56(4): 577–604. Charikar, M.; Makarychev, K.; and Makarychev, Y. 2009a. Integrality gaps for Sherali-Adams relaxations. In Proceedings of the forty-first annual ACM symposium on Theory of computing, 283–292. Charikar, M.; Makarychev, K.; and Makarychev, Y. 2009b. Near-optimal algorithms for maximum constraint satisfaction problems. ACM Transactions on Algorithms (TALG), 5(3): 1–14. Charikar, M.; and Wirth, A. 2004. 
Maximizing quadratic programs: Extending Grothendieck’s inequality. In 45th Annual IEEE Symposium on Foundations of Computer Science, 54–60. IEEE. Chatziafratis, V.; Yaroslavtsev, G.; Lee, E.; Makarychev, K.; Ahmadian, S.; Epasto, A.; and Mahdian, M. 2020. Bisect and Conquer: Hierarchical Clustering via Max-Uncut Bisection. In Proceedings of the 33rd International Conference on Artificial Intelligence and Statistics, 3121–3132. PMLR. Chierichetti, F.; Kumar, R.; Lattanzi, S.; Mitzenmacher, M.; Panconesi, A.; and Raghavan, P. 2009. On compressing social networks. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, 219–228. Chlamtac, E.; and Tulsiani, M. 2012. Convex relaxations and integrality gaps. Handbook on semidefinite, conic and polynomial optimization, 139–169. Cohen-Addad, V.; Das, D.; Kipouridis, E.; Parotsidis, N.; and Thorup, M. 2022. Fitting distances by tree metrics minimizing the total error within a constant factor. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), 468–479. IEEE. Cohen-Addad, V.; Kanade, V.; Mallmann-Trenn, F.; and Mathieu, C. 2019. Hierarchical clustering: Objective functions and algorithms. Journal of the ACM, 66(4): 1–42. Cohen-Addad, V.; Lee, E.; and Newman, A. 2022. Correlation clustering with sherali-adams. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), 651–661. IEEE. Costa, M.-C.; L´etocart, L.; and Roupin, F. 2005. Minimal Multicut and Maximal Integer Multiflow: A Survey. European Journal of Operational Research, 162(1): 55–69. Dahlhaus, E.; Johnson, D. S.; Papadimitriou, C. H.; Seymour, P. D.; and Yannakakis, M. 1994. The Complexity of Multiterminal Cuts. SIAM Journal on Computing, 23(4): 864–894. Dasgupta, S. 2016. A cost function for similarity-based hierarchical clustering. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, 118–127. Dasgupta, S.; and Long, P. M. 2005. Performance guarantees for hierarchical clustering. Journal of Computer and System Sciences, 70(4): 555–569. de la Vega, W. F.; Karpinski, M.; and Kenyon, C. 2004. Approximation schemes for Metric Bisection and partitioning. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7933 In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2004, New Orleans, Louisiana, USA, January 11-14, 2004, 506–515. SIAM. de la Vega, W. F.; Karpinski, M.; Kenyon, C.; and Rabani, Y. 2003. Approximation schemes for clustering problems. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, 50–58. de la Vega, W. F.; and Kenyon-Mathieu, C. 2007. Linear programming relaxations of maxcut. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, 53–61. Dhulipala, L.; Kabiljo, I.; Karrer, B.; Ottaviano, G.; Pupyrev, S.; and Shalita, A. 2016. Compressing graphs and indexes with recursive graph bisection. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1535–1544. Fernandez de la Vega, W. 1996. MAX-CUT has a randomized approximation scheme in dense graphs. Random Structures & Algorithms, 8(3): 187–198. Frieze, A. M.; and Kannan, R. 1996. The Regularity Lemma and Approximation Schemes for Dense Problems. In 37th Annual Symposium on Foundations of Computer Science, FOCS ’96, Burlington, Vermont, USA, 14-16 October, 1996, 12–20. IEEE Computer Society. Giotis, I.; and Guruswami, V. 2005. 
Correlation clustering with a fixed number of clusters. arXiv preprint cs/0504023. H˚astad, J. 2001. Some optimal inapproximability results. Journal of the ACM (JACM), 48(4): 798–859. Hopkins, S. B.; Schramm, T.; and Trevisan, L. 2020. Subexponential LPs approximate max-cut. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), 943–953. IEEE. Indyk, P. 1999. A Sublinear Time Approximation Scheme for Clustering in Metric Spaces. In 40th Annual Symposium on Foundations of Computer Science, 154–159. Kann, V.; Khanna, S.; Lagergren, J.; and Panconesi, A. 1996. On the Hardness of Approximating Max k-Cut and Its Dual. In Israeli Symposium on Theoretical Computer Science. Karypis, G.; and Kumar, V. 1995. Metis-Unstructured Graph Partitioning and Sparse Matrix Ordering System, Version 2.0. University of Minnesota. Khot, S. 2002. On the power of unique 2-prover 1-round games. In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, 767–775. Khot, S.; and Naor, A. 2008. Approximate Kernel Clustering. In 2008 49th Annual IEEE Symposium on Foundations of Computer Science, 561–570. IEEE Computer Society. Khot, S.; and Naor, A. 2013. Sharp kernel clustering algorithms and their associated Grothendieck inequalities. Random Structures & Algorithms, 42(3): 269–300. Krizhevsky, A.; and Hinton, G. 2009. Learning Multiple Layers of Features from Tiny Images. Lang, K. 1995. NewsWeeder: Learning to Filter Netnews. In Machine Learning Proceedings 1995, 331–339. San Francisco (CA): Morgan Kaufmann. ISBN 978-1-55860-377-6. Leskovec, J.; Rajaraman, A.; and Ullman, J. D. 2020. Mining of massive data sets. Cambridge university press. Lov´asz, L.; and Schrijver, A. 1991. Cones of Matrices and Set-Functions and 0–1 Optimization. SIAM Journal on Optimization, 1(2): 166–190. Makarychev, K.; Makarychev, Y.; and Razenshteyn, I. 2019. Performance of Johnson-Lindenstrauss Transform for kMeans and k-Medians Clustering. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, 1027–1038. Association for Computing Machinery. Mezei, B. F.; Wrochna, M.; and ˇZivn`y, S. 2023. PTAS for sparse general-valued CSPs. ACM Transactions on Algorithms, 19(2): 1–31. Moseley, B.; Vassilvtiskii, S.; and Wang, Y. 2021. Hierarchical clustering in general metric spaces using approximate nearest neighbors. In International Conference on Artificial Intelligence and Statistics, 2440–2448. PMLR. Moseley, B.; and Wang, J. 2017. Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting Kmeans, and Local Search. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Nesterov, Y. 1998. Semidefinite relaxation and nonconvex quadratic optimization. Optimization methods and software, 9(1-3): 141–160. Raghavan, S.; and Garcia-Molina, H. 2003. Representing web graphs. In Proceedings 19th International Conference on Data Engineering, 405–416. IEEE. Raghavendra, P.; and Tan, N. 2012. Approximating CSPs with global cardinality constraints using SDP hierarchies. In Proceedings of the 33rd annual ACM-SIAM symposium on Discrete Algorithms, 373–387. SIAM. Romero, M.; Wrochna, M.; and ˇZivn`y, S. 2021. Treewidthpliability and PTAS for Max-CSPs. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), 473–483. SIAM. Sherali, H. D.; and Adams, W. P. 1990. A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM Journal on Discrete Mathematics, 3(3): 411–430. 
Song, L.; Smola, A.; Gretton, A.; and Borgwardt, K. M. 2007. A dependence maximization view of clustering. In Proceedings of the 24th international conference on Machine learning, 815–822. Thapper, J.; and Zivny, S. 2017. The power of Sherali– Adams relaxations for general-valued CSPs. SIAM Journal on Computing, 46(4): 1241–1279. Wagner, D. E.; Weinreb, C.; Collins, Z. M.; Briggs, J. A.; Megason, S. G.; and Klein, A. M. 2018. Single-Cell Mapping of Gene Expression Landscapes and Lineage in the Zebrafish Embryo. Science, 360(6392): 981–987. Yannakakis, M. 1988. Expressing combinatorial optimization problems by linear programs. In Proceedings of the 20th annual ACM symposium on Theory of computing, 223–228. Yoshida, Y.; and Zhou, Y. 2014. Approximation schemes via Sherali-Adams hierarchy for dense constraint satisfaction problems and assignment problems. In Innovations in Theoretical Computer Science, 2014, 423–438. ACM. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7934
Neural Time-Reversed Generalized Riccati Equation Alessandro Betti1, Michele Casoni2, Marco Gori1, 2, Simone Marullo2, 3, Stefano Melacci2, Matteo Tiezzi2 1 Inria, Lab I3S, MAASAI, Universit`e Cˆote d’Azur, Nice, France, 2 DIISM, University of Siena, Siena, Italy, 3 DINFO, University of Florence, Florence, Italy [email protected], [email protected], {marco,mela, mtiezzi}@diism.unisi.it, [email protected] Abstract Optimal control deals with optimization problems in which variables steer a dynamical system, and its outcome contributes to the objective function. Two classical approaches to solving these problems are Dynamic Programming and the Pontryagin Maximum Principle. In both approaches, Hamiltonian equations offer an interpretation of optimality through auxiliary variables known as costates. However, Hamiltonian equations are rarely used due to their reliance on forwardbackward algorithms across the entire temporal domain. This paper introduces a novel neural-based approach to optimal control, with the aim of working forward-in-time. Neural networks are employed not only for implementing state dynamics but also for estimating costate variables. The parameters of the latter network are determined at each time step using a newly introduced local policy referred to as the time-reversed generalized Riccati equation. This policy is inspired by a result discussed in the Linear Quadratic (LQ) problem, which we conjecture stabilizes state dynamics. We support this conjecture by discussing experimental results from a range of optimal control case studies. Introduction Optimal control (Lewis, Vrabie, and Syrmos 2012) offers a wide framework to set up optimization problems that are concerned with the steering of a dynamical system in some parsimonious way. It is therefore clear that its scope is quite large and it intersects many areas such as, for instance, pure math, natural sciences and engineering. Being the optimization problem objective defined on the solution of a system of ODEs over a certain temporal horizon [t0, T], it has a global-in-time nature. Indeed, classical approaches to optimal control such as the Pontryagin Maximum Principle (see (Gamkrelidze, Pontrjagin, and Boltjanskij 1964; Giaquinta and Hildebrandt 2013)) and dynamic programming (see (Bardi, Dolcetta et al. 1997)) both characterize solutions in terms of a boundary problem for some differential conditions (usually a PDE in dynamic programming and a system of ODEs with the Pontryagin maximum principle). This means that, in general, the algorithms to find solutions require iterative forward/backward approaches to glue the local-in-time computations of the differential equations with Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the boundary conditions at opposite sides of the temporal interval. In many instances of control problems, where either the complexity of the model is high and/or the temporal horizon could be very long, as it could happen for instance in Reinforcement Learning (Sutton and Barto 2018; Bertsekas 2019) or Lifelong Learning (Betti et al. 2022; Mai et al. 2022), these methods are unfeasible and we usually need to resort to different control strategies. A typical approach is that of using Model Predictive Control (Garcia, Prett, and Morari 1989) (also known as receding horizon control), with a real-time iteration (RTI) scheme for solving the online optimization problem (Diehl, Bock, and Schl¨oder 2005). 
The necessity of finding optimization procedures that only exploit forward (in time) computations is an especially sensible matter within the machine learning community, where the possibility of performing a backpropagation through the entire temporal horizon (backpropagation through time) is considered to be extremely implausible from a biological point of view (Hinton 2022) and in some cases prohibitively costly. In this work we present a novel approach that makes use of Hamilton Equations giving an estimate of the costate function through a neural-network computation, working forward-in-time. The basic idea of our approach is to estimate the parameters of this network by means on an indirect usage of the Hamilton equations. Recently, in (Jin et al. 2019), the possibility of exploiting Hamiltonian equations for learning system dynamics and controlling policies forward in time has been investigated. In this paper, the authors introduced Pontryagin Differentiable Programming (PDP) to efficiently compute the gradients of the state trajectory with respect to the system parameters using an auxiliary control system. This approach differs from the method proposed in this paper by the fact that, instead, in the present work, we use Hamilton equations indirectly for defining an optimization problem for the temporal variations of the model parameters. In doing so, we are basically defining a dynamic on the parameters that estimate the costate in a similar manner as we would do with the Riccati equation in the Linear Quadratic control problem. We conjecture that the resulting time-reversed dynamics will lead to a stabilizing effect on the state equation, hence opening the possibility to use this method forward-in-time. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7935 This approach has been inspired by the possibility of using optimal control techniques in the continual online learning scenario recently proposed in (Betti et al. 2022) to formulate a class of lifelong problems using the formalism of control theory. For this reason, throughout the paper we assume that the dynamical system that defines the evolution of the state is also expressed by a neural model, in the form of a continuous time recurrent neural network (Zhang, Wang, and Liu 2014). The authors in (Betti et al. 2022) proposed a method to enforce stability by pushing the costate dynamic to converge to zero, and hence directly interfering with the dynamics prescribed by Hamilton equations. Conversely, we directly leverage on Hamilton equations to devise a stabilizing policy for the system. The paper is organized as follows. In Section we describe the class of dynamical models that we take into account, Section is devoted to the formulation of the control problem and contains a short review of the main results from optimal control that will be used in the reminder of the paper. Section constitutes the core part of the contribution and it is where we introduce the time-reversed generalized Riccati equation. Section contains the experimental observations that have been organized in three different case studies, while conclusions and ideas for future work are the subjects of Section . Continuous Time State Model Let us focus on models that depend on N parameters, whose values at time t are yielded by α(t), and that are based on an internal state x(t) of size n which dynamically changes over time. 
We consider a classic state model x′(t) = f(x(t), α(t), t), t ∈(t0, T] (1) where f : Rn × RN × [t0, T] →Rn is a Lipschitz function, t 7→α(t) is the trajectory of the parameters of the model, which is assumed to be a measurable function, and T is the temporal horizon on which the model is defined; t0 ≥0. We assume that the p-components output of the model is computed by a fixed transformation of the state, π: Rn → Rp, usually a projection of class C∞(Rn; Rp). The initial state of the model is assigned to a fixed vector x0 ∈Rn, that is x(t0) = x0. (2) Let us now pose A := {α: [t0, T] → RN : α is measurable}. Definition 1. Given a β ∈A, and given an initial state x0, we define the state trajectory, that we indicate with t 7→ x(t; β, x0, t0), the solution of (1) with initial condition (2). The goal of this work is to define a procedure to estimate with a forward-in-time scheme an approximation of the optimal control parameters α.1 Notice that the explicit time dependence t of Eq. (1) is necessary to take into account the provision over time of some input data to the model. In the next section, we will give a more precise structure to such temporal dependence. 1The meaning of optimality will be described in details in Section Neural State Model We implement the function f of Eq. (1) by a neural network γ, where the dependence on time t is indirectly modeled by a novel function u(t), that yields the d-dimensional input data provided at time t to the network. Formally, for all ξ ∈Rn, for all s ∈[t0, T] and all a ∈RN, f(ξ, a, s) := γ(ξ, u(s), a), where, for all fixed a ∈RN, the map γ(·, ·, a): Rn × Rd →Rn is a neural network and u: [t0, T] →Rd is the input signal, being u ∈BV ((t0, T)) an assigned input map of bounded variation2. More directly we can assume that we are dealing with a Continuous Time Recurrent Neural Network (CTRNN) (see (Zhang, Wang, and Liu 2014)) that at each instant estimates the variation of the state based on the current value of the state itself and on an external input. The network γ(·, ·, a) represents the transition function of the state. In this new notation, the dynamic of the state, given by Eq. (1) together with Eq. (2), is described by the following Cauchy problem for x: x′(t) = γ(x(t), u(t), α(t)), for t ∈(t0, T] x(t0) = x0. (3) To help the reader in giving an initial interpretation to the parameters α, at this stage it is enough to assume that α could basically represent the weights and the biases of the network γ. Similarly, the state x could be imagined as the usual state in a CTRNN. However, there are still some steps to take before providing both α and x the exact role we have considered in this paper. First, we need to define the way α participates in an optimization problem, defining a control problem whose control parameters are α. This will be the main topic of the next Section , where we will start from the generic state model of the beginning of Section , and then cast the descriptions on the neural state model γ—Section . When doing it, we will also reconsider the role of α in the context of the neural network γ, due to some requirements introduced by the optimization procedure over time. Control Problem Suppose now that we want to use the model described in Eq. (1) paired with Eq. (2) to solve some task that can be expressed as a minimization problem for a cost functional α 7→C(α). We recall the notation x(t; α, x0, t0), introduced in Def. (1), to compactly indicate the state x and all its dependencies as a solution of Eq. (1) with initial values (2). 
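The notation x(t; α, x_0, t_0) of Definition 1 can be made concrete with a numerical integrator. The following explicit-Euler sketch is ours (a crude discretization for illustration, not the paper's method):

```python
import numpy as np

def integrate_state(f, alpha, x0, t0, T, steps=1000):
    """Explicit-Euler approximation of the state trajectory
    t -> x(t; alpha, x0, t0) solving x'(t) = f(x(t), alpha(t), t), x(t0) = x0."""
    ts = np.linspace(t0, T, steps + 1)
    dt = (T - t0) / steps
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for t in ts[:-1]:
        x = x + dt * f(x, alpha(t), t)
        traj.append(x.copy())
    return ts, np.array(traj)

# Example: scalar state steered by a constant control alpha(t) = -1.
f = lambda x, a, t: a * x
ts, xs = integrate_state(f, alpha=lambda t: -1.0, x0=[1.0], t0=0.0, T=2.0)
print(xs[-1])  # close to exp(-2)
```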
The cost functional has the following form:

C_{x0,t0}(α) := ∫_{t0}^{T} ℓ(α(t), x(t; α, x0, t0), t) dt,    (4)

where ℓ(a, ·, s) is bounded and Lipschitz for all a ∈ R^N and all s ∈ [t0, T]. The function ℓ is usually called the Lagrangian, and it can be thought of as the control-theoretic counterpart of a classic machine-learning loss function. Because the term x(t; α, x0, t0) in Eq. (4) depends on the variables α through the integration of a first-order dynamical system, the problem

min_{α∈A} C_{x0,t0}(α)    (5)

is a constrained minimization problem which is usually denoted as a control problem (Bardi, Dolcetta et al. 1997), assuming that a solution exists. A classical way to address problem (5) is through dynamic programming and the Hamilton-Jacobi-Bellman equation (Bardi, Dolcetta et al. 1997), which will be the key approach on which we build the ideas of this paper, paired with some intuitions to yield a forward solution over time. We briefly summarize this classical approach in the following. The first step to address our constrained minimization problem is to define the value function, or cost to go, that is a map v: R^n × [t0, T] → R defined as

v(ξ, s) := inf_{α∈A} C_{ξ,s}(α), ∀(ξ, s) ∈ R^n × [t0, T].

The optimality condition of the cost C then translates into an infinitesimal condition (a PDE) for the value function v (see (Bardi, Dolcetta et al. 1997)); this result can be stated more succinctly once we define the Hamiltonian function H: R^n × R^n × [t0, T] → R,

H(ξ, ρ, s) := min_{a∈R^N} {ρ · f(ξ, a, s) + ℓ(a, ξ, s)},    (6)

where · is the dot product. Then the following well-known result holds.

Theorem 1 (Hamilton-Jacobi-Bellman). Let D denote the gradient operator with respect to ξ. Furthermore, let us assume that v ∈ C¹(R^n × [t0, T], R) and that the minimum of C_{ξ,s}, Eq. (5), exists for every ξ ∈ R^n and for every s ∈ [t0, T]. Then v solves the PDE

v_s(ξ, s) + H(ξ, Dv(ξ, s), s) = 0,    (7)

(ξ, s) ∈ R^n × [t0, T), with terminal condition v(ξ, T) = 0, ∀ξ ∈ R^n. Equation (7) is usually referred to as the Hamilton-Jacobi-Bellman equation.

Proof. See appendix A of (Betti et al. 2023).

The result stated in Theorem 1 gives a characterization of the value function; the knowledge of the value function in turn gives a direct way to construct a solution of the problem defined in Eq. (5) by a standard procedure called the synthesis procedure (Evans 2022; Bardi, Dolcetta et al. 1997), whose main ingredients we summarize. The first step, once a solution of Eq. (7) with the terminal condition v(ξ, T) = 0, ∀ξ ∈ R^n is known, is to find an optimal feedback map S: R^n × [t0, T] → R^N defined by the condition

S(ξ, s) ∈ arg min_{a∈R^N} {Dv(ξ, s) · f(ξ, a, s) + ℓ(a, ξ, s)}.    (8)

Once a function S with this property is computed, the second step is to solve x′(t) = f(x(t), S(x(t), t), t), for t ∈ (t0, T), with initial condition x(t0) = x0, and call a solution of this equation x*. Then the optimal control α* is directly given by the feedback map:

α*(t) = S(x*(t), t).    (9)

Hamilton Equations

There exists another route that can be followed to face the problem of Eq. (5), one that does not directly make use of the Hamilton-Jacobi-Bellman equation (7).
Such a route, which we will exploit in the rest of the paper, mainly relies on an alternative representation of the value function, obtained through the method of characteristics (Courant and Hilbert 2008); it basically makes it possible to compute the solution of the Hamilton-Jacobi-Bellman equation along a family of curves that satisfy a set of ordinary differential equations (ODEs). This approach is also equivalent (see (Bardi, Dolcetta et al. 1997)) to the Pontryagin Maximum Principle (Giaquinta and Hildebrandt 2013). Let us define the costate p(t) := Dv(x(t), t) and consider the following system of ODEs, known as the Hamilton equations:

x′(t) = H_ρ(x(t), p(t), t), t ∈ (t0, T];
p′(t) = −H_ξ(x(t), p(t), t), t ∈ (t0, T];
x(t0) = x0; p(T) = 0,    (10)

where H_ρ and H_ξ are the derivatives of H with respect to its second and first argument, respectively. Given a solution to Eq. (10), we can find a solution of Eq. (7) with the appropriate terminal conditions (see (Bardi, Dolcetta et al. 1997)). More importantly, this means that instead of directly finding the value function v, in order to find the optimal control α* we can solve Eq. (10) to find p* and x* and then, as described in Eq. (9) and (8), choose

α* ∈ arg min_{a∈A} {p*(t) · f(x*(t), a, t) + ℓ(a, x*(t), t)}.    (11)

While the problem reformulated in this way appears to be significantly more tractable, having traded a PDE for a system of ODEs, the inherent difficulty of solving a global-in-time optimization problem remains; it can be understood as soon as one realizes that Eq. (10) is a problem with both initial and terminal boundary conditions. From a numerical point of view, this means that in general an iterative procedure over the whole temporal interval is needed (for instance, shooting methods (Osborne 1969)), making this approach, based on Hamilton equations, unfeasible for a large class of problems when the dimension of the state and/or the length of the temporal interval is large. Finding a forward approach to deal with this issue will be the subject of the section on the time-reversed generalized Riccati equation, while our next immediate goal is to bridge the notions just introduced in order to formalize the role of α in our neural-based approach.

Controlling the Parameters of the Network

Let us now bridge the notions just described with the neural-based implementation γ of the state model discussed above. In this section, we give a detailed description of the state variable x(t) associated with the neural computation and we discuss the specific instance of the controls α(t) we consider, both related to the parameters of the net γ. We consider a digraph G = (V, E) and, without loss of generality, let us assume that V = {1, . . . , m}. Remember that given a digraph, for each i ∈ V we can always define two sets ch(i) := {j ∈ V : (i, j) ∈ E} and pa(i) := {j ∈ V : (j, i) ∈ E}. The digraph becomes a network as soon as we decorate each arc (j, i) ∈ E with a weight w_ij(t) and each vertex i ∈ V with a neuron output y_i(t) and a bias b_i(t), for every temporal instant t ∈ [t0, T]. Then the typical CTRNN computation can be written as

y′_i(t) = −y_i(t) + σ( Σ_{j∈pa(i)} w_ij(t) y_j(t) + b_i(t) + Σ_{j=1}^{d} k_ij(t) u_j(t) ),    (12)

where k_ij(t) is a component of the weight matrix associated with the input u and σ: R → R is an activation function.
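To make Eq. (12) concrete, here is a minimal sketch of one explicit Euler step of the CTRNN computation; tanh as σ and the dense connectivity merely mirror the experimental setup described later in the paper, while the step size, sizes, and random initialisation are illustrative assumptions:

```python
import torch

def ctrnn_step(y, u, W, b, K, tau=0.5):
    """One explicit Euler step of Eq. (12):
    y'_i = -y_i + sigma(sum_j W_ij y_j + b_i + sum_j K_ij u_j)."""
    dy = -y + torch.tanh(W @ y + b + K @ u)   # tanh as the activation sigma
    return y + tau * dy

# toy usage: 2 recurrent neurons, 1-dimensional input signal
y = torch.zeros(2)
W, b, K = 0.1 * torch.randn(2, 2), torch.zeros(2), 0.1 * torch.randn(2, 1)
for step in range(10):
    u = torch.sin(torch.tensor([0.01 * step]))
    y = ctrnn_step(y, u, W, b, K)
```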
In general, when dealing with optimization over time, we want to be able to impose a regularization not only on the values of the parameters of the network, but also on their temporal variations; as a matter of fact, in many applications one would consider slow variations preferable or, if the optimization has fully converged, even constant parameters of the network. For this reason it is convenient to associate the control variables α with the temporal variations (derivatives) of the network's parameters. The classic learnable parameters (weights and biases) of the network can then be considered as part of the state x, together with the neuron outputs y. This requires (i) extending the neural state model of Eq. (3), in order to provide a dynamics for the newly introduced state components, and (ii) taking into account the novel definition of α. Formally, the state at time t becomes x(t) = (y(t), w(t), b(t), k(t)) and γ is only responsible for computing the dynamics of the y-portion of it (we have thus overloaded the symbol γ: in Eq. (3) it was defined as the transition function of the whole state, here only of the y part). The state model of Eq. (3), involving all the components of x above, is then

y′(t) = γ(y(t), u(t), w(t), b(t), k(t));
w′_ij(t) = ω_ij(t), (j, i) ∈ E;
b′_i(t) = ν_i(t), i ∈ V;
k′_ij(t) = χ_ij(t), i ∈ V and j = 1, . . . , d.    (13)

We can finally formalize the control variables α(t) = (ω(t), ν(t), χ(t)) (to avoid a cumbersome notation, the name of a state or control variable written without any index simply denotes the list of those variables). Paired with the previous system of equations, this allows us to view the system as a neural state model of the form x′(t) = f(x(t), α(t), t), coherently with Eq. (1) and (3). Due to the definition of α, a quadratic penalization in α amounts to a penalization on the "velocities" of the parameters of the network.

Time-Reversed Generalized Riccati Equation

As we briefly discussed above, the approach to problem (5) based on Hamilton equations is not usually computationally feasible, mostly due to the fact that it involves boundary conditions at both temporal extrema t0 and T. More dramatically, Hamilton equations are not generally stable. Consider, for instance, the following example of a widely known control problem.

Example 1 (Linear Quadratic Problem). The scalar Linear Quadratic (LQ) problem is obtained by choosing f(ξ, a, s) = Aξ + Ba and ℓ(a, ξ, s) = Qξ²/2 + Ra²/2 with Q and R positive and A ∈ R, B ∈ R. In this specific case, it turns out that the Hamiltonian can be computed in closed form: H(ξ, ρ, s) = Qξ²/2 − B²ρ²/(2R) + Aξρ. Hence, the Hamilton equations of Eq. (10) become x′(t) = −B²p(t)/R + Ax(t) and p′(t) = −Qx(t) − Ap(t). The solutions of this system, for general initial conditions, have positive exponential modes exp(ωt) with ω = √(A² + B²Q/R), which obviously generates instabilities.

However, it turns out that the LQ problem of Example 1 can be approached with a novel solution strategy, which yields stability and is the key element we propose and exploit in this paper to motivate our novel approach to forward-only optimization in neural nets. We can in fact assume that the costate is estimated by p(t) = µ(x(t), θ(t)), where µ is defined by µ(ξ, ϑ) = ϑξ, ∀(ξ, ϑ) ∈ R². In other words, in this example at each time instant t the costate p(t) is a linear function of the state x(t) with parameter θ(t).
By the definition of the costate, this is equivalent to assuming that the value function v is a quadratic function of the state. We then proceed as follows:

1. We randomly initialize θ(0) and set x(0) = x0.
2. At a generic temporal instant t, under the assumption that p(t) = µ(x(t), θ(t)), we consider the condition p′(t) = dµ(x(t), θ(t))/dt with p′ computed with the LQ Hamilton equation (10): µ_ξ(x(t), θ(t)) · x′(t) + µ_ϑ(x(t), θ(t)) · θ′(t) = −Qx(t) − Aθ(t)x(t). Solving this for θ′(t) we obtain the Riccati equation: θ′(t) = (B²/R)θ²(t) − 2Aθ(t) − Q.
3. We change the sign of the temporal derivative in the Riccati equation,

θ′(t) = −(B²/R)θ²(t) + 2Aθ(t) + Q,    (14)

and we use it with initial conditions to compute t ↦ θ(t).
4. Finally, we compute the control parameter using Eq. (11), where the optimal costate is replaced with its estimate given by the network µ.

As is known, the Riccati equation must be solved with terminal conditions; in our case, since we do not have any terminal cost, the optimal solution would be recovered by imposing the boundary condition θ(T) = 0. Solving this equation with initial conditions, however, does not have any interpretation in terms of the optimization problem (differently from the forward solution of the costate in Hamilton equations). Instead, let us set for simplicity t0 = 0 and define Φ: t ∈ [0, T] → s ∈ [0, T], with s := T − t, so that if we let θ̂ := θ ∘ Φ⁻¹, we have for all s ∈ [0, T] that θ̂(s) = θ(Φ⁻¹(s)) = θ(T − s). This time, θ̂ will satisfy exactly Eq. (14). The solution of this equation with initial condition θ̂(0) = 0 can be found explicitly by standard techniques:

θ̂(s) = (R/B²) · λ₁λ₂ (e^{λ₁s} − e^{λ₂s}) / (λ₂ e^{λ₁s} − λ₁ e^{λ₂s}), with λ₁,₂ = A ± √(A² + QB²/R).

This solution has the interesting property that as T → ∞ and s → ∞ with s < T we have that θ̂(s) → λ₁R/B², which is the optimal solution on the infinite temporal horizon. The transformation Φ defined above acts on the temporal domain [0, T] and implements a reversal of time, which we can also denote as t → T − t. Given a trajectory on [0, T], applying a time-reversal transformation to it (as we did for the parameter θ) entails considering the trajectory in which the direction of time is reversed. The dynamics that we observe while moving forward with this new temporal variable are the same dynamics that we would observe when starting from T and moving backward in the original variable. This comment should also give an intuitive justification of why we can trade final conditions for initial conditions when this transformation is applied.
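As a numerical sanity check of the time-reversal trick, the following minimal sketch (the values A = B = Q = R = 1 are an arbitrary illustrative choice, not from the paper) integrates Eq. (14) forward in time with explicit Euler steps and compares the result against the closed-form solution θ̂(s) above and the infinite-horizon limit λ₁R/B²:

```python
import math

# scalar LQ data (illustrative values)
A, B, Q, R = 1.0, 1.0, 1.0, 1.0
lam1 = A + math.sqrt(A**2 + Q * B**2 / R)
lam2 = A - math.sqrt(A**2 + Q * B**2 / R)

def theta_closed(s):
    """Closed-form solution of Eq. (14) with initial condition theta_hat(0) = 0."""
    num = lam1 * lam2 * (math.exp(lam1 * s) - math.exp(lam2 * s))
    den = lam2 * math.exp(lam1 * s) - lam1 * math.exp(lam2 * s)
    return (R / B**2) * num / den

# forward (initial-condition) Euler integration of the time-reversed Riccati eq.
dt, theta = 1e-4, 0.0
for _ in range(int(5.0 / dt)):
    theta += dt * (-(B**2 / R) * theta**2 + 2 * A * theta + Q)

print(theta, theta_closed(5.0), lam1 * R / B**2)  # all close to 1 + sqrt(2)
```

Because λ₁R/B² is a stable equilibrium of the reversed dynamics (the right-hand side of Eq. (14) has negative slope there), a plain initial-value integration suffices, which is exactly the property exploited in what follows.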
Neural Costate Estimation

The main contribution of this work is the proposal of a novel method to find a forward approximation of the costate trajectory by making use of an additional feed-forward neural network (FNN) to predict its values. We assume that the costate p is estimated by an FNN µ(·, ·, ϑ): R^n × R^d → R^n with parameters ϑ ∈ R^M, and we generalize steps 1–4 employed for the LQ problem in the previous subsection as follows (here we assume, mainly to avoid unnecessarily long equations, that µ(·, ϑ) takes as input only the state; more generally, its domain could also be enriched with the input signal u, and in the experimental section we will indeed show some case studies where this is the case):

1. We randomly initialize the parameters of the network µ to the values θ(0) and select an initial state x0. This allows us to compute µ(x0, θ(0)) and in turn x′(0), using Hamilton equations with µ(x0, θ(0)) in place of p(0). (Since the controls enter the state equation linearly (see Eq. (13)), if the Lagrangian is quadratic in the controls, as in Eq. (17), then the Hamiltonian (6) can be computed in closed form.)

2. At a generic temporal instant t, assuming we know x(t) and θ(t), we compute x′(t) = H_ρ(x(t), µ(x(t), θ(t)), t) and define the loss function (see Remark 2)

Ω_t(ϕ) := ½ ‖µ_ξ(x(t), θ(t)) · x′(t) + µ_ϑ(x(t), θ(t)) · ϕ + H_ξ(x(t), µ(x(t), θ(t)), t)‖² + (ε/2) ‖ϕ‖².    (15)

We choose δθ(t) ∈ arg min_{ϕ∈R^M} Ω_t(ϕ) by performing a gradient descent method on Ω_t.

3. We numerically integrate the equation

θ′(t) = −δθ(t)    (16)

with an explicit Euler step, in order to update the values of θ. We call this equation (see Remark 4) the time-reversed generalized Riccati equation.

4. Finally, we compute the control parameter using Eq. (11), where the optimal costate is replaced with its estimate given by the network µ.

Remark 1. Notice that the assumption that the costate is computed as a function of the state is consistent with its definition in terms of the value function, p(t) = Dv(x(t), t). The only real assumption that we are making is that the explicit temporal dependence in Dv(x(t), t) is captured by the dynamics of the parameters θ(t) of the network µ.

Remark 2. The loss function Ω_t defined in Eq. (15) is designed to enforce consistency between two different estimates of the temporal variation of the costate: (i) the one that comes from the explicit temporal differentiation dµ(x(t), θ(t))/dt = µ_ξ(x(t), θ(t)) · x′(t) + µ_ϑ(x(t), θ(t)) · θ′(t), and (ii) the estimate −H_ξ(x(t), µ(x(t), θ(t)), t) obtained from the Hamilton equations.

Remark 3. Eq. (16) prescribes a dynamics for the parameters θ that can be interpreted as a time-reversal transformation t ↦ T − t applied to the dynamics of the parameters of the network µ induced by the Hamilton equations (see Remark 2). Our conjecture is that this prescription implements a policy that induces stability in the Hamilton equations (see the previous subsection).

Remark 4. Notice that in the LQ case described in the previous subsection, Eq. (16) indeed reduces to Eq. (14). Indeed, if µ(ξ, ϑ) = ϑξ and ε ≡ 0, we have that for LQ arg min_{ϕ∈R^M} Ω_t(ϕ) = {(B²/R)θ²(t) − 2Aθ(t) − Q}, hence δθ(t) = (B²/R)θ²(t) − 2Aθ(t) − Q. In this case the equation θ′(t) = +δθ(t) would be exactly the Riccati equation with the correct sign.
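The following is a minimal, self-contained sketch of steps 1–4 on a toy instance where the Hamiltonian is available in closed form, as assumed in step 1: we take f(x, a) = a with ℓ = ½q‖x − z‖² + ½r‖a‖², so that H_ρ = −p/r and H_ξ = q(x − z), and the state Euler step doubles as applying the optimal control a* = −p/r. All sizes, constants, and the constant target z are illustrative assumptions, not the configuration used in the experiments below; the Jacobians of µ are computed explicitly, which is affordable at these small sizes:

```python
import torch

n, hidden = 2, 20                        # state size and hidden units (toy)
q, r, eps, tau, lr = 1.0, 1.0, 1e-3, 0.1, 1e-2

def mu(x, theta):
    """Costate network: a 1-hidden-layer MLP with parameters packed in theta."""
    i = 0
    W1 = theta[i:i + hidden * n].view(hidden, n); i += hidden * n
    b1 = theta[i:i + hidden]; i += hidden
    W2 = theta[i:i + n * hidden].view(n, hidden); i += n * hidden
    b2 = theta[i:i + n]
    return W2 @ torch.relu(W1 @ x + b1) + b2

M = hidden * n + hidden + n * hidden + n
theta = 0.1 * torch.randn(M)             # step 1: random theta(0)
x = torch.zeros(n)
z = torch.ones(n)                        # constant target, for illustration

for step in range(100):
    p = mu(x, theta)
    x_dot = -p / r                       # H_rho for f(x, a) = a, quadratic costs
    H_xi = q * (x - z)                   # H_xi for the same toy Hamiltonian
    # Jacobians of mu w.r.t. state and parameters (explicit, small sizes)
    J_x = torch.autograd.functional.jacobian(lambda v: mu(v, theta), x)
    J_th = torch.autograd.functional.jacobian(lambda v: mu(x, v), theta)
    # step 2: gradient descent on the quadratic loss Omega_t of Eq. (15)
    phi = torch.zeros(M, requires_grad=True)
    for _ in range(50):
        res = J_x @ x_dot + J_th @ phi + H_xi
        loss = 0.5 * res @ res + 0.5 * eps * phi @ phi
        g, = torch.autograd.grad(loss, phi)
        with torch.no_grad():
            phi -= lr * g
    theta = theta - tau * phi.detach()   # step 3: Euler step of Eq. (16)
    x = x + tau * x_dot                  # state update; here x' = a* (step 4)
```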
Experiments

In the previous sections we presented our proposal within the framework of a continuous-time setting. In the experimental part of our study, we employ explicit Euler steps of magnitude τ to approximate the differential equations. The number of time steps is denoted by n_T. Moreover, we assume that the gradient descent procedure mentioned in the previous section is characterized by a number of iterations n_iter and a learning rate λ. In appendix B of (Betti et al. 2023) we report an algorithm summarizing the whole procedure presented so far. In order to provide a proof of concept of the ideas of this paper, we analyze the capability of our forward-optimization procedure to solve three different tasks with neural estimators: (a) tracking a reference signal, (b) predicting the sign of an input signal, and (c) classifying different wave-shapes provided as input signal.

The experiments of this section are based on a shared definition of the Lagrangian function ℓ of Eq. (4), which consists of a penalty term on the tracking quality of the target signal, referred to as z, and regularization terms both on the outputs of the neurons in network γ and on the velocities of its parameters, i.e., the control. Formally,

ℓ(a, ξ, s) = ½ q (π(ξ) − z(s))² + ½ r₁ Σ_{i=p}^{n} ξ_i² + ½ r₂ Σ_{i=0}^{N} a_i²,    (17)

where z(s) is the task-specific target signal at time s and q, r₁, r₂ ≥ 0 are customizable constant coefficients. We recall that π is a fixed map that, in this case, we assume to simply select one of the neurons in the output of γ (i.e., the first one). Basically, minimizing the Lagrangian means forcing the output of a neuron, π(ξ(s)), to reproduce the target signal for every s ∈ [t0, T]. The goal of our experiments is to find the optimal control α which minimizes the cost functional defined in Eq. (4). In the following subsections we report the results obtained for each experiment, where the initial time step is set to t0 = 0, the outputs of the neurons of γ at t0 = 0 are initialized to 0, and the parameters of both networks γ and µ start from random values. All the experiments have been conducted using Python 3.9 with PyTorch 2.0.0 on a Windows 10 Pro OS with an Intel Core i7 CPU and 16GB of memory.

Case (a): tracking a target signal

Let us consider the case where the target signal is given by z(s) = sin(2πφs), where φ = 0.001 Hz is the frequency of z, and we want the recurrent network γ to track it. Let us choose the model of the network γ as composed of 2 recurrent neurons fully connected to their inputs, y0 and y1, with tanh activation function, following Eq. (12) (of course, in this experiment there is no u). We also downscale the −y_i term by 0.5. Moreover, we choose the network µ as a fully-connected feed-forward net, with 1 hidden layer made up of 20 neurons with ReLU activation functions. The output layer of µ has linear activation. With the choice of τ = 0.5 s, n_T = 10⁴ time steps, q = 10⁴, r₁ = 10³, r₂ = 10⁵, we get the results plotted in Fig. 1.

Figure 1: Tracking of a sinusoidal target signal using a recurrent network γ of 2 neurons. Black dashed line: target signal; continuous green line: response of γ. The recurrent network has no input and the target is a sine wave with frequency 0.001 Hz.

The target signal is the black dashed line, the response of γ is the continuous green line. The number of iterations for updating the derivatives of the weights of µ is set to n_iter = 100, with a learning rate λ = 10⁻⁵ and a decay factor ε = 10³. It is possible to see that the response of γ is able to track the target signal and that the accuracy of the tracking quickly improves in the early time steps. The amplitude reached by the response of γ is slightly reduced with respect to z, due to the regularization terms in the Lagrangian function. This experiment confirms that the tracking information, provided through ℓ, is able to induce appropriate changes in the parameters of γ that allow the network to follow the signal. Notice that this signal propagates through the state-to-costate map µ and is not directly attached to the neurons of γ, as in common machine learning problems.
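For concreteness, the shared Lagrangian of Eq. (17) can be read as the following small function; the index p_idx marking the first regularized state component, and the tensor encoding, are implementation assumptions rather than the authors' code:

```python
import torch

def lagrangian(a, xi, z_s, q, r1, r2, p_idx=0):
    """Eq. (17): tracking penalty on pi(xi) = xi[0] (the first neuron),
    plus quadratic regularizers on the neuron outputs and the controls a."""
    track = 0.5 * q * (xi[0] - z_s) ** 2
    reg_state = 0.5 * r1 * (xi[p_idx:] ** 2).sum()
    reg_ctrl = 0.5 * r2 * (a ** 2).sum()
    return track + reg_state + reg_ctrl
```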
Case (b): predicting the sign of an input signal

Let us now assume that both networks γ and µ receive as input a sinusoidal signal u(s) = sin(2πφs) with frequency φ = 0.002 Hz. The task of predicting the sign of u(s) can be translated into a tracking control problem, where the target signal z(s) is defined as z(s) = 1 if u(s) ≥ 0 and z(s) = −1 otherwise. Here, we choose the model of the network γ as in Case (a), while the network µ has tanh activation functions in the hidden layer. With the choice of τ = 0.5 s, n_T = 1.5 × 10⁴ time steps, q = 10⁵, r₁ = 10³, r₂ = 10², we get the results plotted in Fig. 2.

Figure 2: Sign prediction of a sinusoidal signal using a recurrent network γ of 2 neurons. Black dashed line: target signal; continuous green line: response of γ. The input of the network is a sinusoidal wave with frequency 0.002 Hz. The target to track is the sign of the input signal.

The target signal is the black dashed line, the response of γ is the continuous green line. The maximum number of iterations for updating the derivatives of the weights of µ is again set to n_iter = 100, with an adaptive learning rate λ which starts from 10⁻³ and a decay factor ε = 10⁴. Here, the adaptive strategy for λ is the one used by the Adam optimizer. This task is clearly more challenging than the previous one, since we ask γ to react as a function of the input u, still using the state-to-costate map µ as a bridge to carry the information. Interestingly, also in this case the response of γ is able to track the target signal, correctly interleaving the information from the previous state and the current input.

Case (c): classifying the different wave-shapes of an input signal

Finally, we consider the case in which both networks γ and µ get as input a piece-wise defined signal characterized by two different wave-shapes. More precisely, we assume that u(s) = u₁(s) = (1/2) sin(2πφs) or u(s) = u₂(s) = −(1/2)(−1)^⌊2φs⌋, with frequency φ = 0.002 Hz, in different time intervals randomly sampled over the whole time horizon. Moreover, we multiply u(s) by a smoothing factor 1 − exp(−s/ψ), where ψ = 2000 s⁻¹, in order to help the network µ learn to estimate the costate. The task of classifying the wave-shape of u(s) can again be translated into a tracking control problem, where the target signal z(s) is defined as z(s) = 1 if u(s) = u₁(s) and z(s) = −1 if u(s) = u₂(s). In this case, the networks have to deal with the need of reacting differently in different time spans. We choose the models of the networks γ and µ as in Case (b). The maximum number of iterations for updating the derivatives of the weights of µ is again set to n_iter = 100, with an adaptive learning rate λ which starts from 10⁻³ and a decay factor ε = 10⁴. The adaptive strategy for λ is the same as in Case (b). With the choice of τ = 0.5 s, n_T = 2 × 10⁴ time steps, q = 10⁵, r₁ = 10³, r₂ = 10², we get the results plotted in Fig. 3.

Figure 3: Classification of wave-shapes using a recurrent network γ of 2 neurons. Black dashed line: target signal; continuous green line: response of γ; continuous blue line: input signal. The input of the network is a sequence of sines and square waves, multiplied by a smoothing factor 1 − exp(−s/ψ), where ψ = 2000 s⁻¹.

The target signal is the black dashed line, the response of γ is the continuous green line, the input signal is the continuous blue line.
Also in this case, the response of γ is able to track the target signal, even though we experienced a small delay in the tracking process, which we believe is due to the need to update the state smoothly in order to favour the transition when switching from predicting −1 to 1 and vice versa. In Fig. 4 we report the average value of the Lagrangian for all the tasks described above, obtained by dividing the integral of ℓ over [0, s] by s, for each s ∈ (0, T]. It is possible to notice that the mean value of the Lagrangian function decreases as time goes by, reflecting the improvement of the model in tracking the different target signals.

Figure 4: Average value of the Lagrangian function for Cases (a), (b) and (c).

Conclusions and Future Work

This paper introduced a novel theory of optimization that points out a new perspective in the field of optimal control. The forward-in-time Hamiltonian optimization opens up new possibilities for real-time adaptation, tracking and control in lifelong learning scenarios. By bridging the gap between optimal control and deep learning, this methodology paves the way for significant advancements in the learning and adaptation capabilities of autonomous systems in dynamic environments. The paper delved into the theoretical foundations of forward-in-time Hamiltonian optimization, with a particular emphasis on the concept of the time-reversed generalized Riccati equation. Future research will focus on enhancing the learning capabilities of the model to facilitate its application in lifelong learning tasks.

In our experiments we have shown that our proposal can be used to efficiently solve different kinds of tracking control problems, where the target signal is present at each time step. It is important to emphasize that the model depends on a considerable number of parameters and exhibits a high degree of sensitivity to their variations; consequently, tuning these parameters can be challenging. We recall that gaining explicit generalization capabilities (i.e., when the target signal is not given) is not a goal pursued within the scope of this study, but it will be the main point of our future work. Indeed, this novel approach still has limitations in that direction. In most of the experiments, the response of γ is not able to generate the target signal if we mask it after a certain number of time steps, freezing the weights of µ up to the time horizon. An example of this behavior can be seen in Fig. 5.

Figure 5: Example of lack of generalization of our approach in the wave-shape classification task. Black dashed line: target signal; continuous green line: response of γ; continuous blue line: input signal. The target signal is provided up to 20000 steps and is masked up to the time horizon.

However, in Case (b) we have registered that γ is able to reproduce the target z even if it is masked after 15000 steps, as shown in Fig. 6.

Figure 6: Example of generalization in the sign prediction task. Black dashed line: target signal; continuous green line: response of γ. The target is given up to 15000 steps.

This result suggests further investigations in this direction, and we will orient our research towards the learning capabilities of our proposal, in order to apply it to lifelong learning tasks (De Lange et al. 2021).
Acknowledgments

This work has been supported by the French government, through the 3IA Côte d'Azur, Investment in the Future, project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002, and it has also been supported by the TAILOR and HumanE-AI-Net projects funded by the EU Horizon 2020 research and innovation programme under GA No. 952215 and No. 952026, respectively.

References

Ambrosio, L.; Fusco, N.; and Pallara, D. 2000. Functions of Bounded Variation and Free Discontinuity Problems. Oxford Mathematical Monographs, The Clarendon Press Oxford University Press.
Bardi, M.; Dolcetta, I. C.; et al. 1997. Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, volume 12. Springer.
Bertsekas, D. 2019. Reinforcement Learning and Optimal Control. Athena Scientific.
Betti, A.; Casoni, M.; Gori, M.; Marullo, S.; Melacci, S.; and Tiezzi, M. 2023. Neural Time-Reversed Generalized Riccati Equation. arXiv:2312.09310.
Betti, A.; Faggi, L.; Gori, M.; Tiezzi, M.; Marullo, S.; Meloni, E.; and Melacci, S. 2022. Continual Learning through Hamilton Equations. In Conference on Lifelong Learning Agents, 201–212. PMLR.
Courant, R.; and Hilbert, D. 2008. Methods of Mathematical Physics: Partial Differential Equations. John Wiley & Sons.
De Lange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; and Tuytelaars, T. 2021. A Continual Learning Survey: Defying Forgetting in Classification Tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7): 3366–3385.
Diehl, M.; Bock, H. G.; and Schlöder, J. P. 2005. A Real-Time Iteration Scheme for Nonlinear Optimization in Optimal Feedback Control. SIAM Journal on Control and Optimization, 43(5): 1714–1736.
Evans, L. C. 2022. Partial Differential Equations, volume 19. American Mathematical Society.
Gamkrelidze, R.; Pontrjagin, L. S.; and Boltjanskij, V. G. 1964. The Mathematical Theory of Optimal Processes. Macmillan Company.
Garcia, C. E.; Prett, D. M.; and Morari, M. 1989. Model Predictive Control: Theory and Practice—A Survey. Automatica, 25(3): 335–348.
Giaquinta, M.; and Hildebrandt, S. 2013. Calculus of Variations II, volume 311. Springer Science & Business Media.
Hinton, G. 2022. The Forward-Forward Algorithm: Some Preliminary Investigations. arXiv preprint arXiv:2212.13345.
Jin, W.; Wang, Z.; Yang, Z.; and Mou, S. 2019. Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework. arXiv, abs/1912.12970.
Lewis, F. L.; Vrabie, D.; and Syrmos, V. L. 2012. Optimal Control. John Wiley & Sons.
Mai, Z.; Li, R.; Jeong, J.; Quispe, D.; Kim, H.; and Sanner, S. 2022. Online Continual Learning in Image Classification: An Empirical Survey. Neurocomputing, 469: 28–51.
Osborne, M. R. 1969. On Shooting Methods for Boundary Value Problems. Journal of Mathematical Analysis and Applications, 27(2): 417–433.
Sutton, R. S.; and Barto, A. G. 2018. Reinforcement Learning: An Introduction. MIT Press.
Zhang, H.; Wang, Z.; and Liu, D. 2014. A Comprehensive Review of Stability Analysis of Continuous-Time Recurrent Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 25(7): 1229–1262.
Runtime vs. Extracted Proof Size: An Exponential Gap for CDCL on QBFs

Olaf Beyersdorff¹, Benjamin Böhm¹, Meena Mahajan²
¹Institute of Computer Science, Friedrich Schiller University Jena, Germany
²The Institute of Mathematical Sciences (CI of Homi Bhabha National Institute), Chennai, India
[email protected], [email protected], [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Conflict-driven clause learning (CDCL) is the dominating algorithmic paradigm for SAT solving and hugely successful in practice. In its lifted version QCDCL, it is one of the main approaches for solving quantified Boolean formulas (QBF). In both SAT and QBF, proofs can be efficiently extracted from runs of (Q)CDCL solvers. While for CDCL, it is known that the proof size in the underlying proof system propositional resolution matches the CDCL runtime up to a polynomial factor, we show that in QBF there is an exponential gap between QCDCL runtime and the size of the extracted proofs in QBF resolution systems. We demonstrate that this is not just a gap between QCDCL runtime and the size of any QBF resolution proof, but even the extracted proofs are exponentially smaller for some instances. Hence searching for a small proof via QCDCL (even with non-deterministic decision policies) will provably incur an exponential overhead for some instances.

1 Introduction

SAT solving has revolutionised the way we approach computationally hard problems (Vardi 2014). While SAT – determining whether a propositional formula is satisfiable – is the canonical NP-complete problem, modern SAT solvers successfully tackle huge instances of industrial problems from virtually all application domains (Biere et al. 2021). This algorithmic success has been extended to computationally even harder problems, in particular to the PSPACE-complete problem of solving quantified Boolean formulas (QBF), thus reaching even further applications (Shukla et al. 2019). SAT solving is dominated by the algorithmic paradigm of conflict-driven clause learning (CDCL), on which almost all contemporary SAT solvers are based (Marques Silva, Lynce, and Malik 2021). This approach was lifted to QBF in the form of QCDCL (Zhang and Malik 2002), one of the principal, but not the only (Biere 2004; Janota and Marques-Silva 2015), competitive QBF solving techniques, implemented e.g. in the state-of-the-art solvers DepQBF (Lonsing and Egly 2017) and Qute (Peitl, Slivovsky, and Szeider 2019). (These solvers go beyond plain QCDCL and use advanced techniques such as dependency learning that can improve performance; we will only consider plain QCDCL in this paper.)

Both in SAT and in QBF there are intimate connections between solving techniques and proof systems (Buss and Nordström 2021; Beyersdorff et al. 2021). This manifests in the fact that each run of a solver on an unsatisfiable propositional formula (resp. a true or false QBF) can be understood as a proof of unsatisfiability (resp. truth or falsity) of the formula. While in principle every solver thus gives rise to a proof system, CDCL corresponds to propositional resolution, which is arguably the most-studied and best-understood proof system in proof complexity. One direction of this correspondence arises from the efficient extraction of resolution proofs from CDCL traces (Beame, Kautz, and Sabharwal 2004).
A similar proof extraction also works from QCDCL to the QBF resolution system long-distance Q-Resolution (LD-Q-Res) (Zhang and Malik 2002; Balabanov and Jiang 2012). These connections open the door towards analysing the runtime of (Q)CDCL via proof complexity: formulas without short resolution or LD-Q-Res proofs cannot possibly be solved efficiently by (Q)CDCL. Proof complexity – both in SAT and QBF – offers a wealth of techniques and crafted formulas on which exponential lower bounds for (QBF) resolution can be shown, implying analogous runtime lower bounds for the corresponding solvers (Buss and Nordström 2021; Beyersdorff 2022; Krajíček 2019). Further, proof extraction is of huge practical importance as the extracted proofs can be used to certify answers of (potentially buggy) solvers (though practical proof logging employs more succinct proof formats than resolution (Wetzler, Heule, and Jr. 2014; Heule, Seidl, and Biere 2017)). It is important, both theoretically and practically, to understand how tight this proof extraction is. This manifests in two related, yet different questions:

Q1: Solver runtime vs minimal proof size: Are there formulas that are hard for (Q)CDCL, but with short resolution (LD-Q-Res) proofs?

Q2: Solver runtime vs minimal extracted proof size: Are there formulas that are hard for (Q)CDCL, but where short resolution (LD-Q-Res) proofs can be extracted from (Q)CDCL runs?

Clearly, a positive answer for Q2 implies a positive answer for Q1 (hence answering Q2 positively is harder). For SAT, Q1 received a negative answer in seminal work (Pipatsrisawat and Darwiche 2011; Atserias, Fichte, and Thurley 2011), whereby resolution is polynomially equivalent to CDCL under a strong non-deterministic decision policy. Yet, the answer to Q1 is positive once a practical decision scheme such as VSIDS is employed (Vinyals 2020). For QBF, the situation is different: Q1 has a positive answer even for strong non-deterministic QCDCL models (Beyersdorff and Böhm 2023) (and hence also for practical QCDCL, which was known even earlier (Janota 2016)). In contrast, Q2 has not been considered before (to the best of our knowledge), even though the question is arguably even more natural: when modelling solvers by proofs it makes sense to only consider extracted proofs that actually correspond to solver runs and to disregard any other (possibly shorter) proofs of the same formula not stemming from a solver trace. One reason for this apparent neglect might be that the negative answer to Q1 for SAT (Pipatsrisawat and Darwiche 2011) obviously implies a negative answer to Q2 (we will argue in Section 6 that even for SAT there are subtle differences between Q1 and Q2). The situation is quite different for QBF, as we show in this paper.

Our contributions. Our main result is a positive answer to Q2 for QBF. For this we construct specific QBF families that are exponentially hard for various QCDCL models, but with (exponentially long) QCDCL traces from which quadratic (in the size of the formula) QBF resolution proofs can be extracted. This considerably strengthens the previously known positive answer to Q1 (Beyersdorff and Böhm 2023). Hence the obstacle for QCDCL is not to find the short proofs, it actually finds them, yet inevitably producing a huge overhead in the search. This overhead results from the fact that, in contrast to CDCL, where clauses are learnt on conflicts, QCDCL also learns cubes (i.e., conjunctions of literals) from satisfying assignments.
In QCDCL, clauses and cubes are used for unit propagations. Yet, as in CDCL, only clauses contribute to the extracted proof of false QBFs. Intuitively, we show that each QCDCL run on our target QBFs learns many cubes that do not appear in the extracted proof. Technically, we achieve our results by (1) precisely modelling the QCDCL systems by proof systems in which the traces and learnt clauses/cubes are recorded, following the approach of (Beyersdorff and Böhm 2023); (2) showing a Master Theorem 4.8 that combines lower and upper bounds, employing a lower bound technique for QCDCL from (Böhm and Beyersdorff 2021); and (3) crafting new QBF families to which we apply our master theorem. Our positive answer to Q2 is quite general in that it applies to three different QCDCL models, corresponding to the main QBF resolution systems Q-Res (Kleine Büning, Karpinski, and Flögel 1995), QU-Res (Van Gelder 2012) (implemented in (Slivovsky 2022)), and the previously mentioned system LD-Q-Res, underlying standard QCDCL. While our results are purely theoretical, we believe that our findings will also be relevant to practitioners. In fact, our lower bounds imply that cube generation can provide a substantial bottleneck, even for false QBFs.

Organisation. We start in Section 2 by reviewing QBFs and relevant proof systems. Section 3 models different QCDCL paradigms as rigorous QCDCL proof systems, amenable to proof complexity analysis. In Section 4 we prove our master theorem for the combined upper and lower bounds, which we apply in Section 5 to answer Q2 for three QCDCL models. We conclude in Section 6 with a discussion. Due to space constraints, we omit some auxiliary results and proofs.

2 Preliminaries

Propositional and quantified formulas. Variables x and negated variables x̄ are called literals. We denote the corresponding variable as var(x) := var(x̄) := x. A clause is a disjunction of literals, sometimes interpreted as a set of literals. A unit clause (ℓ) is a clause consisting of only one literal. The empty clause (⊥) has zero literals. A clause C is tautological if {ℓ, ℓ̄} ⊆ C for some literal ℓ. A cube is a conjunction of literals, sometimes viewed as a set of literals. We define a unit cube of a literal ℓ, denoted by [ℓ], and the empty cube [⊤] with 'empty literal' ⊤. A cube D is contradictory if {ℓ, ℓ̄} ⊆ D for some literal ℓ. If C is a clause or a cube, we define var(C) := {var(ℓ) : ℓ ∈ C}. The negation of a clause C = ℓ₁ ∨ . . . ∨ ℓ_m is the cube ¬C := C̄ := ℓ̄₁ ∧ . . . ∧ ℓ̄_m.

A (total) assignment σ of a set of variables V is a non-tautological set of literals such that for all x ∈ V there is some ℓ ∈ σ with var(ℓ) = x. A partial assignment σ of V is an assignment of a subset of V. A clause C is satisfied by σ if C ∩ σ ≠ ∅. A cube D is falsified by σ if ¬D ∩ σ ≠ ∅. A CNF (conjunctive normal form) is a conjunction of clauses and a DNF (disjunctive normal form) is a disjunction of cubes. A CNF (DNF) is satisfied (falsified) by σ if all its clauses (cubes) are satisfied (falsified) by σ.

A QBF (quantified Boolean formula) Φ = Q · φ consists of a propositional formula φ, called the matrix, and a prefix Q. A prefix Q = Q₁V₁ . . . Q_sV_s consists of non-empty and pairwise disjoint sets of variables V₁, . . . , V_s and quantifiers Q₁, . . . , Q_s ∈ {∃, ∀} with Q_i ≠ Q_{i+1} for i ∈ [s − 1]. For a variable x in Q, the quantifier level is lv(x) := lv_Φ(x) := i if x ∈ V_i. For lv_Φ(ℓ₁) < lv_Φ(ℓ₂) we write ℓ₁ <_Φ ℓ₂.
For a QBF Φ = Q · φ with φ a CNF (DNF), we call Φ a QCNF (QDNF). We define C(Φ) := φ (resp. D(Φ) := φ). Φ is an AQBF (augmented QBF) if φ = ψ ∨ χ with a CNF ψ and a DNF χ. We define var(Φ) := ⋃_{C∈Φ} var(C).

(Long-distance) Q(U)-resolution and Q(U)-consensus. Let C₁ and C₂ be two clauses (cubes) from a QCNF (QDNF) or AQBF Φ. Let ℓ be an existential (universal) literal with var(ℓ) ∉ var(C₁) ∪ var(C₂). The resolvent of C₁ ∨ ℓ and C₂ ∨ ℓ̄ over ℓ is defined as (C₁ ∨ ℓ) ⋈^ℓ_Φ (C₂ ∨ ℓ̄) := C₁ ∨ C₂ (resp. (C₁ ∧ ℓ) ⋈^ℓ_Φ (C₂ ∧ ℓ̄) := C₁ ∧ C₂).

Let C := ℓ₁ ∨ . . . ∨ ℓ_m be a clause from a QCNF or AQBF Φ such that ℓ_i ≤_Φ ℓ_j for all i < j, with i, j ∈ [m]. Let k be minimal such that ℓ_k, . . . , ℓ_m are universal. Then we can perform a universal reduction step and obtain red^∀_Φ(C) := ℓ₁ ∨ . . . ∨ ℓ_{k−1}. Analogously, we perform existential reduction on cubes, which we denote as red^∃_Φ(C).
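A minimal sketch of these two operations follows; the signed-integer literal encoding (+v and −v for the two polarities of variable v) and the dictionary-based prefix are implementation assumptions, not notation from the paper:

```python
# literals as signed ints: +v is the variable v, -v its negation;
# quant maps each variable to ('E' or 'A', quantifier level)

def universal_reduction(clause, quant):
    """red_forall: drop every universal literal quantified right of all
    existential literals of the clause (equivalent to removing the
    trailing universal block of the level-sorted clause)."""
    exist_lvls = [quant[abs(l)][1] for l in clause if quant[abs(l)][0] == 'E']
    max_e = max(exist_lvls, default=-1)
    return frozenset(l for l in clause
                     if quant[abs(l)][0] == 'E' or quant[abs(l)][1] < max_e)

def resolvent(c1, c2, pivot):
    """Q-Res resolvent over a pivot literal; returns None on a tautology."""
    res = (c1 - {pivot}) | (c2 - {-pivot})
    if any(-l in res for l in res):
        return None
    return res

# example: prefix  E x  A u  E t,  clauses (x or u or t) and (not t or u)
quant = {1: ('E', 1), 2: ('A', 2), 3: ('E', 3)}
c = resolvent(frozenset({1, 2, 3}), frozenset({-3, 2}), 3)  # {x, u}
print(universal_reduction(c, quant))                        # frozenset({1}) = (x)
```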
We consider three different variants of QCDCL, each with a different underlying proof system, meaning that each variant generates proofs in its corresponding proof system. We use the policy notation from (B¨ohm and Beyersdorff 2023) to preserve consistency with previous works. • MLD :“ QCDCLLEV-ORD ALL-RED,EXI-PROP, which can be interpreted as the classic QCDCL variant that generates LDQ-Res and LD-Q-Con proofs. All decisions have to follow quantification order (LEV-ORD). Reductions during unit propagation are always performed when possible (ALL-RED). Clauses can only propagate existential while cubes can only propagate universal literals (EXI-PROP). • MQ :“ QCDCLLEV-ORD NO-RED,EXI-PROP is defined almost as MLD, but reductions during unit propagations are turned off (NO-RED). MQ generates Q-Res or Q-Con proofs. • MQU :“ QCDCLLEV-ORD NO-RED,ALL-PROP is an extension of MQ where clauses can also propagate universal literals and analogously cubes propagate existentially (ALL-PROP). This generates QU-Res or QU-Con proofs. Decisions can only be made when no more propagations are possible. Conflicts have a higher priority than propagations of literals. Hence, we never skip conflicts or propagations. For each propagated literal ppi,jq in a trail T the formula must contain a clause or a cube that caused this propagation by becoming a unit clause/cube. We denote such an antecedent clause/cube by anteT pppi,jqq. After a trail has run into a conflict, or if all variables were assigned, we start the learning process. Definition 3.1 (learnable constraints). Let T be a trail for Φ of the form (3.1) with ppr,grq P tK, Ju. Starting with anteT pKq (resp. anteT pJq) we reversely resolve over the antecedent clauses (cubes) that propagated the existential (universal) variables, until we stop at some arbitrarily chosen point. Each antecedent and resolvent is reduced as soon as possible, regardless of the choice of policies. The clause (cube) we so derive is a learnable constraint. We denote the set of learnable constraints by LpT q. We can also learn cubes from trails that did not run into conflict. If T is a total assignment of the variables from Φ, then we define the set of learnable constraints as the set of cubes LpT q :“ tredD ΦpDq| D Ď T and D satisfies CpΦqu. Definition 3.2 (QCDCL proof systems). Let S be one of MLD, MQ, MQU. An S proof ι from a QCNF Φ “ Q ¨ φ of a clause or cube C is a sequence of triples ι :“ rpTi, Ci, πiqsm i“1, where Cm “ C, each Ti is a trail following the policies of S, each Ci P LpTiq is one of the constraints we can learn from the trail, and πi is the derivation of Ci we get by performing the steps in Definition 3.1. We define Rpιq as the extracted proof of C that we get by sticking together suitable πi. For C “ pKq, ι is an S refutation of Φ. For C “ rJs, ι is an S verification of Φ. The trail size of ι is defined as trail-sizepιq :“ |ι| :“ řm i“1 |Ti| and the extracted proof size of ι is defined as the size of the extracted proof, i.e. extr-sizepιq :“ |Rpιq|. Obviously, we have extr-sizepιq ď trail-sizepιq. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7945 On true formulas, these three variants generate consensus (Q-Con, LD-Q-Con or QU-Con) verifications. As such proofs are only defined on QDNFs, they are formally not verifications of the QCNF Φ, but of a QDNF consisting of cubes that satisfy CpΦq. These cubes (often called initial cubes) correspond to the cubes that can be learned whenever a trail does not run into conflict (cf. 3.1). 
From now on, we will refer to these verifications as verifications of Φ. All combinations of the above policies lead to sound and complete proof systems (and algorithms).

Theorem 3.3 ((Böhm and Beyersdorff 2023)). All defined QCDCL variants are sound and complete.

4 Combining Lower and Upper Bounds

In this section we provide a general framework for showing a lower bound for trail size combined with an upper bound for the extracted proof size of QCDCL systems. For the lower bound we employ the gauge technique from (Böhm and Beyersdorff 2021), which we review first.

The Gauge Lower Bound Technique

To ease notation, we will assume that prefixes of Σ^b_3 QCNFs have the form ∃X∀U∃T, for sets of literals X, U, T, and we will use the notions of X-, U- and T-variables and -literals. Further, we define certain types of clauses:

• X-clauses consist of X-literals only (analogously we define U-clauses and T-clauses),
• XT-clauses consist of at least one X- and at least one T-literal, but no U-literals,
• XUT-clauses consist of at least one X-, U- and T-literal.

For our lower bounds we will make use of a technique that only holds for a particular (but still sufficiently large) class of formulas, which is determined by the so-called XT-property. Intuitively, this property ensures that there cannot be any direct connections between the inner and the outer quantifier block in Σ^b_3 QCNFs.

Definition 4.1 ((Beyersdorff and Böhm 2023)). We say that Φ fulfils the XT-property if C(Φ) contains no XT-clauses and no T-clauses that are unit (or empty), and no two T-clauses from C(Φ) are resolvable.

This property does not only hold for the initial formula, but also for all clauses that can be derived via LD-Q-Res.

Lemma 4.2 ((Beyersdorff and Böhm 2023)). If Φ is a Σ^b_3 QCNF that fulfils the XT-property, then it is not possible to derive XT-clauses or new T-clauses via LD-Q-Res from Φ.

The lower bound technique requires two further notions – one which is quite natural for proofs that are extracted from QCDCL runs (fully reduced), and one which is closely related to the XT-property (primitive).

Definition 4.3 (fully reduced proofs (Böhm and Beyersdorff 2021; Böhm, Peitl, and Beyersdorff 2022a)). An LD-Q-Res refutation π of a QCNF Φ is fully reduced if, for each clause C ∈ π that contains universal literals that are reducible, the reduction step is performed immediately and C is not used otherwise in the proof.

Definition 4.4 (primitive proofs (Böhm and Beyersdorff 2021; Böhm, Peitl, and Beyersdorff 2022a)). An LD-Q-Res proof π from a Σ^b_3 formula is primitive if there are no two XUT-clauses in π that are resolved over an X-variable.

Note that every fully reduced primitive LD-Q-Res proof has to be a Q-Res proof. Therefore, from now on we will only mention fully reduced primitive Q-Res proofs. Furthermore, it is easy to see that each QCDCL variant generates fully reduced proofs by Definition 3.1. Now that we have defined the class of formulas and the class of proofs for which the lower bound technique is applicable, we introduce the measure that determines the lower bound itself. Intuitively, the gauge of a Σ^b_3 QCNF is the minimal number of X-literals that will necessarily be piled up in any derivation of an X-clause in which it is only allowed to resolve over T-literals.

Definition 4.5 ((Böhm and Beyersdorff 2021)). Let Φ be a Σ^b_3 QCNF with prefix ∃X∀U∃T. We define W_Φ as the set of all Q-Res proofs π from Φ of X-clauses C_π such that π consists of resolutions over T-literals and reductions only.
We define gauge(Φ) := min{|C_π| : C_π is the root of some π ∈ W_Φ}.

Combining these notions and conditions we obtain the gauge lower bound method.

Theorem 4.6 ((Böhm and Beyersdorff 2021)). Each fully reduced primitive Q-Res refutation of a Σ^b_3 QCNF Φ that fulfils the XT-property has size 2^{Ω(gauge(Φ))}.

To obtain an exponential lower bound for a QCDCL variant via this technique, we will construct Σ^b_3 QCNFs that fulfil the XT-property and have linear gauge, such that the QCDCL variant generates primitive proofs on these QBFs.

The Master Theorem

We can now approach our main Theorem 4.8. Throughout the paper we will construct QBFs by combining two QCNFs into one single QCNF by concatenating the two quantifier prefixes and conjoining both matrices.

Definition 4.7. Let Φ = Q · φ and Ψ = R · ψ be two QCNFs. Let Ψ′ = R′ · ψ′ be the QCNF that is obtained after renaming the variables from Ψ such that var(Φ) ∩ var(Ψ′) = ∅. Then we define the disjoint composition of Φ and Ψ as the QCNF dc(Φ, Ψ) := QR′ · (φ ∧ ψ′).

From now on, we will assume that variables from φ and variables from ψ′ do not share the same quantifier level in QR′. In particular, Φ will always end with an existential quantifier and Ψ will start with a universal quantifier. The next theorem is our main technical result, which will be used for all following main results in the paper.

Theorem 4.8. Let Φ_n and Ψ_n be two formulas with the following properties:

1. Φ_n is a Σ^b_3 QCNF that fulfils the XT-property, is false, and has M_LD (resp. M_Q and M_QU) refutations with trail size s.
2. (for M_QU) Each QU-Res refutation of Φ_n is a Q-Res refutation, i.e., it is not possible to resolve two clauses that were derived from Φ_n over a universal variable.
Proof of Theorem 4.8. Obviously, dcpΦn, Ψnq is a false formula. It suffices to show that each MLD (resp. MQ and MQU) refutation θ of dcpΦn, Ψnq, in which each learned cube contains some literals from Ψn, has trail-sizepθq “ 2ΩpgaugepΦnqq. We show that such θ generates fully reduced primitive Q-Res refutations of Φn. The lower bound then follows by Theorem 4.6. Assume, for the sake of contradiction, that there is such a MLD (resp. MQ and MQU) refutation θ of dcpΦn, Ψnq such that the extracted proof Rpθq is not a fully reduced primitive Q-Res refutation of Φn. By the definition of a disjoint composition, Rpθq is a refutation of Φn. By condition 2 of the theorem, Rpθq does not contain a resolution step over a universal variable, i.e. all resolutions are over existential variables. There must be a resolution step in Rpθq between two XUT-clauses over an X-literal. Consider the first trail T in θ in which such a resolution, say C x’ D with x P C and ¯x P D, appeared in the learning phase. Then one of these two clauses must have been an antecedent clause for the pivot, say anteT pxq “ C. The clause C must contain at least one T-literal, say t1 P C. Then we need ¯t1 ăT x and therefore there exists an antecedent clause A1 :“ anteT p¯t1q. Because of the XT-property, A1 cannot be trail-size extr-size Ti πi Rpιq ι Figure 1: Visualisation of the separations following from Theorem 4.8: The rectangles symbolise the trails of a proof, the triangles represent the derivations of the learned constraints. Black rectangles and triangles denote trails and derivations for learned cubes, while blue rectangles and triangles denote trails and derivations of learned clauses. As the last learned constraint is empty, i.e. the empty clause, all derivations of learned clauses can be stuck together to obtain a refutation of the original formula. The derivations of learned cubes will not be used for the extracted refutation The separation between the measures trail-size and extr-size occurs when the number of black trails is exponential while the number of blue trails is polynomial. a unit clause, hence it must be either a non-unit T-clause, or a clause with a U-literal. If A1 is a non-unit T-clause, then we can find another ¯t1 ‰ t2 P A1, for which we would find another antecedent clause A2 :“ anteT p¯t2q. We can repeat this argument, until at some point the antecedent clause Aj “ anteT p¯tjq for some T-literal ¯tj contains a U-literal, say u P Aj. Because u ăΦn tj, we need ¯u ăT ¯tj ăT x in order to propagate ¯tj. Because our decisions need to be levelordered, we conclude that ¯u was propagated. It is not possible for ¯u to have been propagated by a clause, because otherwise we could perform a universal resolution step in Rpθq. Therefore ¯u must have been propagated by a cube F :“ anteT p¯uq (note that u P F). From now on, we assume that ¯u is the first U-literal that was propagated in T by a cube and let F be the corresponding antecedent cube (we do not need the connection to Aj anymore). By our assumption from the beginning of the proof, F contains some literals from Ψn. We can safely assume that at least one of these is universal, otherwise all literals from Ψn would have been reduced away during the learning of F. Let w P F be such a universal literal from Ψn. Then we need w ăT ¯u because we cannot reduce universally in cubes. That means w was also propagated by some constraint G :“ anteT pwq, which is either a clause or a cube. We distinguish these two cases. Case 1. G is a cube. 
This cube G cannot contain any U-literals from Φ_n, otherwise they must have been propagated via a cube (not possible because ¬u was the first propagated U-literal via a cube in T) or decided (not possible because decisions need to be level-ordered) before ¬u and x in T. We conclude that only existential literals from Φ_n appear in G. Let π_G be the LD-Q-Con (resp. QU-Con) subproof of G from dc(Φ_n, Ψ_n). We can restrict π_G to an LD-Q-Con (resp. QU-Con) proof ρ from Φ_n by just deleting all literals from Ψ_n (and, if necessary, deleting redundant cubes). But then ρ is an LD-Q-Con (resp. QU-Con) proof of G′, where G′ is a cube that only contains existential literals that are also contained in Φ_n. If we reduce G′ existentially, we obtain a verification of Φ_n, contradicting the falsity of Φ_n.

Case 2. G is a clause. This case is only relevant for MQU. Again we obtain a contradiction; we omit the proof.

We thus obtain contradictions in both cases. Hence, our assumption that R(θ) was not fully reduced primitive is false. We conclude that |θ| ∈ 2^{Ω(gauge(Φ_n))} by Theorem 4.6.

The upper bound can be shown by the following construction: given an MLD (resp. MQ and MQU) refutation ι′ of Φ_n with trail size s, we can essentially reproduce all trails from ι′ for an MLD (resp. MQ and MQU) refutation ι of dc(Φ_n, Ψ_n). The only difference is that, whenever a cube is learned in ι′, we need to verify Ψ_n to learn this cube in ι. However, as this verification of Ψ_n does not appear in R(ι), we conclude R(ι) = R(ι′) and therefore R(ι) is of size at most s.

5 Separations for QCDCL Models

We will now put this general idea into action and construct separations between the trail size and the extracted proof size for each of the three QCDCL variants MLD, MQ and MQU, corresponding to their respective underlying proof systems LD-Q-Res, Q-Res and QU-Res.

QCDCL Based on LD-Q-Resolution

We start with MLD, which corresponds to standard QCDCL as used in modern state-of-the-art QBF solvers (Lonsing and Egly 2017; Peitl, Slivovsky, and Szeider 2019). First, we need to find a true QCNF that is hard for MLD and fulfils the properties of Ψ_n from Theorem 4.8. We recall the lower bound on true formulas from (Böhm, Peitl, and Beyersdorff 2022b), which uses the notion of the twin formula of a QCNF as well as the reversion.

Definition 5.1 (twin formulas, (Böhm, Peitl, and Beyersdorff 2022b)). Let Λ = ∃X∀U∃T · C(Λ) be a QCNF. Let U = {u_1, …, u_m} and let v_1, …, v_m be variables not occurring in Λ. Then the twin formula of Λ is the QCNF Twin_Λ defined as
Twin_Λ := ∃X ∀(U ∪ {v_1, …, v_m}) ∃T · C(Λ) ∧ ⋀_{C ∈ C(Λ)} C[u_1/v_1, …, u_m/v_m],
where u_i/v_i indicates that all occurrences of u_i are substituted by v_i.

Definition 5.2 ((Böhm, Peitl, and Beyersdorff 2022b)). If Λ = Q_1V_1 Q_2V_2 … Q_kV_k · ⋀_{j=1}^m C_j is a QCNF with Q_i ∈ {∃, ∀} and disjoint sets of variables V_i for i = 1, …, k, then the reversion Rev(Λ) of Λ is the QCNF
Q′_1V_1 Q′_2V_2 … Q′_kV_k ∀w ∃c_1, …, c_m · (¬c_1 ∨ … ∨ ¬c_m) ∧ ⋀_{j=1}^m ⋀_{ℓ ∈ C_j} (¬ℓ ∨ w ∨ c_j) ∧ (¬ℓ ∨ ¬w ∨ c_j),
where Q′_i = ∀ if Q_i = ∃, and Q′_i = ∃ if Q_i = ∀, and w, c_1, …, c_m are new variables not contained in Λ.

It is easy to see that the reversion flips the truth value:

Lemma 5.3 ((Böhm, Peitl, and Beyersdorff 2022b)). If Λ is a QCNF, then Rev(Λ) is true if and only if Λ is false.
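As a quick worked example (our own, not from the paper): take the false QCNF Λ = ∃x∀u · C_1 ∧ C_2 with C_1 = x ∨ u and C_2 = ¬x ∨ u (setting u = 0 leaves x ∧ ¬x). Definition 5.2 flips the prefix and yields

```latex
\[
  \mathrm{Rev}(\Lambda)
  = \forall x\,\exists u\,\forall w\,\exists c_1, c_2 \;.\;
    (\lnot c_1 \lor \lnot c_2)
    \land \bigwedge_{\ell \in \{x,\,u\}} (\lnot \ell \lor w \lor c_1)\land(\lnot \ell \lor \lnot w \lor c_1)
    \land \bigwedge_{\ell \in \{\lnot x,\,u\}} (\lnot \ell \lor w \lor c_2)\land(\lnot \ell \lor \lnot w \lor c_2).
\]
```

Intuitively, c_j is forced to 1 as soon as some literal of C_j is true, so the clause ¬c_1 ∨ ¬c_2 demands a falsified clause of Λ; this is why the truth value flips (Lemma 5.3).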
The next theorem is a generalization of one of the main results from (Böhm, Peitl, and Beyersdorff 2022b). Instead of proving a lower bound on verifications of a particular formula Ψ (as done in that paper), we consider derivations of any cube from a formula dc(Γ, Ψ) such that this cube does not contain literals from Ψ. Hence, by choosing Γ as the empty formula, one obtains the result from (Böhm, Peitl, and Beyersdorff 2022b).

Theorem 5.4. Let Λ be a false Σ^b_3 QCNF with the prefix ∃X∀U∃T and let Γ be an arbitrary QCNF. Let ι_E be an MLD proof of some cube E from dc(Γ, Rev(Twin_Λ)) such that var(E) ⊆ var(Γ). Additionally, let all clauses C ∈ C(Λ) contain at least one U- and one T-literal. If the QCNF Twin_Λ needs fully reduced primitive Q-Res refutations of size s, then |ι_E| ≥ s.

The proof of the above theorem follows along the same lines as the corresponding theorem from (Böhm, Peitl, and Beyersdorff 2022b). Next, we want to find specific formulas to which Theorem 5.4 can be applied. We recall the well-known equality formulas, introduced in (Beyersdorff, Blinkhorn, and Hinde 2019), as well as a modification from (Böhm, Peitl, and Beyersdorff 2022b). This modification adds a U-literal to some clauses such that each clause now contains at least one U- and one T-literal, which is a precondition for Theorem 5.4.

Definition 5.5 ((Beyersdorff, Blinkhorn, and Hinde 2019; Böhm, Peitl, and Beyersdorff 2022b)). The QCNF Eq_n consists of the prefix ∃x_1, …, x_n ∀u_1, …, u_n ∃t_1, …, t_n and the matrix
x_i ∨ u_i ∨ t_i,  ¬x_i ∨ ¬u_i ∨ t_i,  ¬t_1 ∨ … ∨ ¬t_n  for i = 1, …, n.
The QCNF ModEq_n consists of the prefix ∃x_1, …, x_n ∀u_1, …, u_n, p ∃t_1, …, t_n and the matrix
x_i ∨ u_i ∨ t_i,  ¬x_i ∨ ¬u_i ∨ t_i,  p ∨ ¬t_1 ∨ … ∨ ¬t_n,  ¬p ∨ ¬t_1 ∨ … ∨ ¬t_n  for i = 1, …, n.

Many properties of Eq_n carry over to ModEq_n or even Twin_{ModEq_n}. This is important for obtaining exponential lower bounds via Theorem 4.8.

Proposition 5.6 ((Beyersdorff, Blinkhorn, and Hinde 2019; Beyersdorff and Böhm 2023; Böhm and Beyersdorff 2021; Böhm, Peitl, and Beyersdorff 2022a,b)). Eq_n needs QU-Res refutations of size 2^{Ω(n)} but has MLD refutations of quadratic trail size. Furthermore, Eq_n and ModEq_n fulfil the XT-property and gauge(Eq_n) = gauge(Twin_{ModEq_n}) = n. Hence, Eq_n and Twin_{ModEq_n} need exponential-size fully reduced primitive Q-Res refutations.

Using Theorem 5.4 and Proposition 5.6, we obtain:

Corollary 5.7. Let RTME_n := Rev(Twin_{ModEq_n}). Then for each QCNF Γ_n, all MLD proofs of any cube E with var(E) ⊆ var(Γ_n) from the disjoint composition dc(Γ_n, RTME_n) have exponential trail size.

Definition 5.8. We define the QCNF ERTME_n as the disjoint composition ERTME_n := dc(Eq_n, RTME_n).

After applying our Master Theorem 4.8 by setting Φ_n := Eq_n and Ψ_n := RTME_n, where s is quadratic by Proposition 5.6 and r is exponential by Corollary 5.7, we conclude:

Corollary 5.9. For each MLD refutation θ_n of ERTME_n we have trail-size(θ_n) ∈ 2^{Ω(n)}, but there exists an MLD refutation ι_n of ERTME_n with extr-size(ι_n) ∈ O(n²).

QCDCL Based on Q-Resolution

For the separation on the QCDCL model MQ we need a formula with linear gauge that is still easy for MQ. Since MQ generates Q-Res proofs, Eq_n will not work because it is hard for Q-Res (and even QU-Res) (Beyersdorff, Blinkhorn, and Hinde 2019). Therefore we introduce the simplicity formulas Sim_n, which are similar to Eq_n, but now all universal literals occur in only one polarity.

Definition 5.10 (Simplicity formula). The QCNF Sim_n consists of the prefix ∃x_1, …, x_n ∀u_1, …, u_n ∃t_1, …, t_n and the matrix
x_i ∨ u_i ∨ t_i,  ¬x_i ∨ u_i ∨ t_i,  ¬t_1 ∨ … ∨ ¬t_n  for i = 1, …, n.
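Instantiating Definitions 5.5 and 5.10 for n = 2 makes the single-polarity difference visible: the two matrices agree except for the sign of u_i in the second clause of each pair.

```latex
\begin{align*}
  \mathrm{Eq}_2:\;&  (x_1 \lor u_1 \lor t_1),\; (\lnot x_1 \lor \lnot u_1 \lor t_1),\;
                     (x_2 \lor u_2 \lor t_2),\; (\lnot x_2 \lor \lnot u_2 \lor t_2),\;
                     (\lnot t_1 \lor \lnot t_2)\\
  \mathrm{Sim}_2:\;& (x_1 \lor u_1 \lor t_1),\; (\lnot x_1 \lor u_1 \lor t_1),\;
                     (x_2 \lor u_2 \lor t_2),\; (\lnot x_2 \lor u_2 \lor t_2),\;
                     (\lnot t_1 \lor \lnot t_2)
\end{align*}
```

Since each u_i occurs only positively in Sim_n, no two of its clauses can ever be resolved over a universal variable; this is exactly what Lemma 5.18 below exploits.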
One can easily construct short MQ and MQU refutations for Sim_n.

Proposition 5.11. Sim_n has MQ and MQU refutations with quadratic trail size.

Yet, the gauge properties of Eq_n carry over to Sim_n.

Proposition 5.12. The QCNF Sim_n fulfils the XT-property and gauge(Sim_n) = n. Hence, each fully reduced primitive Q-Res refutation of Sim_n has exponential size.

Now that we have found our candidate for Φ_n in Theorem 4.8, let us construct a suitable formula for Ψ_n. For this, we need to find a true formula that is hard to verify in MQ. We will again make use of the idea of a reversion of a formula that is already hard for Q-Res (and QU-Res).

Proposition 5.13. The true QCNF Rev(Eq_n) needs exponential-trail-size QU-Con verifications.

Next, we have to show that deriving a cube from dc(Γ, Ψ) without variables from Ψ is as hard as verifying Ψ.

Proposition 5.14. For each QCNF Γ and Ψ, from each MQ (resp. MQU) proof ι_E of some cube E from dc(Γ, Ψ) with var(E) ⊆ var(Γ) we can extract a Q-Con (resp. QU-Con) verification ρ of Ψ with |ρ| ≤ |ι_E|.

From Propositions 5.13 and 5.14 we conclude:

Corollary 5.15. For each QCNF Γ_n, the disjoint composition dc(Γ_n, Rev(Eq_n)) needs exponential-trail-size MQ and MQU proofs of any cube E with var(E) ⊆ var(Γ_n).

We combine Sim_n with Rev(Eq_n) and obtain our formula for the separation.

Definition 5.16. We define the QCNF SRE_n as the disjoint composition SRE_n := dc(Sim_n, Rev(Eq_n)).

Applying Theorem 4.8 with Φ_n := Sim_n and Ψ_n := Rev(Eq_n), where s is quadratic by Proposition 5.11 and r is exponential by Corollary 5.15, we conclude:

Corollary 5.17. For each MQ refutation θ_n of SRE_n we have trail-size(θ_n) ∈ 2^{Ω(n)}, but there exists an MQ refutation ι_n of SRE_n with extr-size(ι_n) ∈ O(n²).

QCDCL Based on QU-Resolution

For the last separation, in the QCDCL model MQU – recently implemented as a QBF solver (Slivovsky 2022) – we can use the same formulas as for MQ and Q-Res, as we only have to prove that no genuine QU-Res proofs can be generated.

Lemma 5.18. Each QU-Res refutation of Sim_n is a Q-Res refutation.

Proof. All universal variables of Sim_n occur only in one polarity, hence we cannot resolve over them.

As for Corollary 5.17, we conclude:

Corollary 5.19. For each MQU refutation θ_n of SRE_n we have trail-size(θ_n) ∈ 2^{Ω(n)}, but there exists an MQU refutation ι_n of SRE_n with extr-size(ι_n) ∈ O(n²).

6 Conclusion

Separating the two measures trail size and extracted proof size yields interesting consequences for QCDCL: lower bounds on trail size do not necessarily carry over to extracted proof size. This indicates that traditional proof systems such as Q-Res (and their extensions) are not perfectly suited to model QCDCL runs. Instead, the systems need to track all non-redundant learned constraints (clauses and cubes).

Our separations only hold if decisions follow the quantifier prefix (using LEV-ORD). Allowing arbitrary decisions strengthens the three QCDCL variants to the point that they polynomially simulate the underlying proof systems Q-Res, QU-Res and LD-Q-Res (Böhm and Beyersdorff 2023). Besides changing the order of decisions, there are other approaches trying to avoid problems caused by the asymmetrical behaviour of clauses and cubes by changing the encoding (Zhang 2006; Goultiaeva and Bacchus 2013; Tu, Hsu, and Jiang 2015). In conclusion, our results depict another crucial difference between SAT and QBF solving.
In fact, as SAT solving only works on clauses, trail size and extracted proof size in SAT differ by exactly a linear factor, leading to a precise negative answer to Q2. Q1 also has a negative answer in SAT (Pipatsrisawat and Darwiche 2011), yet the precise overhead of CDCL trail size over minimal resolution size is still subject to ongoing research: it is at most cubic (Beyersdorff and Böhm 2023), but at least linear, as very recently shown (Vinyals et al. 2023). This confirms again that comparing trail size and extracted proof size (Q2) is more challenging than comparing trail size and minimal proof size (Q1), not only for QBF as done here, but also for SAT.

Acknowledgments

This work was done in part while the authors were visiting the Simons Institute for the Theory of Computing during the Extended Reunion on Satisfiability in 2023. Research was supported by grants from the Carl Zeiss Foundation, DFG grant BE 4209/3-1, and a joint DAAD/DST grant.

References

Atserias, A.; Fichte, J. K.; and Thurley, M. 2011. Clause-Learning Algorithms with Many Restarts and Bounded-Width Resolution. J. Artif. Intell. Res., 40: 353–373.
Balabanov, V.; and Jiang, J.-H. R. 2012. Unified QBF Certification and Its Applications. Form. Methods Syst. Des., 41(1): 45–65.
Beame, P.; Kautz, H. A.; and Sabharwal, A. 2004. Towards Understanding and Harnessing the Potential of Clause Learning. J. Artif. Intell. Res. (JAIR), 22: 319–351.
Beyersdorff, O. 2022. Proof Complexity of Quantified Boolean Logic – a Survey. In Benini, M.; Beyersdorff, O.; Rathjen, M.; and Schuster, P., eds., Mathematics for Computation (M4C), 353–391. World Scientific.
Beyersdorff, O.; Blinkhorn, J.; and Hinde, L. 2019. Size, Cost, and Capacity: A Semantic Technique for Hard Random QBFs. Logical Methods in Computer Science, 15(1).
Beyersdorff, O.; and Böhm, B. 2023. Understanding the Relative Strength of QBF CDCL Solvers and QBF Resolution. Log. Methods Comput. Sci., 19(2).
Beyersdorff, O.; Janota, M.; Lonsing, F.; and Seidl, M. 2021. Quantified Boolean Formulas. In Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds., Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, 1177–1221. IOS Press.
Biere, A. 2004. Resolve and Expand. In Hoos, H. H.; and Mitchell, D. G., eds., Theory and Applications of Satisfiability Testing (SAT), volume 3542 of Lecture Notes in Computer Science, 59–70. Springer.
Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds. 2021. Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications. IOS Press.
Böhm, B.; and Beyersdorff, O. 2021. Lower Bounds for QCDCL via Formula Gauge. In Li, C.-M.; and Manyà, F., eds., Theory and Applications of Satisfiability Testing (SAT), 47–63. Cham: Springer International Publishing.
Böhm, B.; and Beyersdorff, O. 2023. QCDCL vs QBF Resolution: Further Insights. In Mahajan, M.; and Slivovsky, F., eds., 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023), volume 271 of Leibniz International Proceedings in Informatics (LIPIcs), 4:1–4:17. Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik. ISBN 978-3-95977-286-0.
Böhm, B.; Peitl, T.; and Beyersdorff, O. 2022a. QCDCL with Cube Learning or Pure Literal Elimination – What is Best? In Raedt, L. D., ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI), 1781–1787. ijcai.org.
Böhm, B.; Peitl, T.; and Beyersdorff, O. 2022b. Should Decisions in QCDCL Follow Prefix Order? In Meel, K. S.; and Strichman, O., eds., 25th International Conference on Theory and Applications of Satisfiability Testing (SAT), volume 236 of LIPIcs, 11:1–11:19. Schloss Dagstuhl – Leibniz-Zentrum für Informatik.
Buss, S.; and Nordström, J. 2021. Proof Complexity and SAT Solving. In Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds., Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, 233–350. IOS Press.
Goultiaeva, A.; and Bacchus, F. 2013. Recovering and Utilizing Partial Duality in QBF. In Järvisalo, M.; and Gelder, A. V., eds., Theory and Applications of Satisfiability Testing – SAT 2013 – 16th International Conference, Helsinki, Finland, July 8-12, 2013. Proceedings, volume 7962 of Lecture Notes in Computer Science, 83–99. Springer.
Heule, M. J. H.; Seidl, M.; and Biere, A. 2017. Solution Validation and Extraction for QBF Preprocessing. J. Autom. Reason., 58(1): 97–125.
Janota, M. 2016. On Q-Resolution and CDCL QBF Solving. In Proc. International Conference on Theory and Applications of Satisfiability Testing (SAT), 402–418.
Janota, M.; and Marques-Silva, J. 2015. Expansion-based QBF solving versus Q-resolution. Theor. Comput. Sci., 577: 25–42.
Kleine Büning, H.; Karpinski, M.; and Flögel, A. 1995. Resolution for Quantified Boolean Formulas. Inf. Comput., 117(1): 12–18.
Krajíček, J. 2019. Proof complexity, volume 170 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press.
Lonsing, F.; and Egly, U. 2017. DepQBF 6.0: A Search-Based QBF Solver Beyond Traditional QCDCL. In Proc. International Conference on Automated Deduction (CADE), 371–384.
Marques Silva, J. P.; Lynce, I.; and Malik, S. 2021. Conflict-Driven Clause Learning SAT Solvers. In Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds., Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications. IOS Press.
Peitl, T.; Slivovsky, F.; and Szeider, S. 2019. Dependency Learning for QBF. J. Artif. Intell. Res., 65: 180–208.
Pipatsrisawat, K.; and Darwiche, A. 2011. On the power of clause-learning SAT solvers as resolution engines. Artif. Intell., 175(2): 512–525.
Shukla, A.; Biere, A.; Pulina, L.; and Seidl, M. 2019. A Survey on Applications of Quantified Boolean Formulas. In Proc. IEEE International Conference on Tools with Artificial Intelligence (ICTAI), 78–84.
Slivovsky, F. 2022. Quantified CDCL with Universal Resolution. In Meel, K. S.; and Strichman, O., eds., 25th International Conference on Theory and Applications of Satisfiability Testing (SAT), volume 236 of LIPIcs, 24:1–24:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik.
Tu, K.; Hsu, T.; and Jiang, J. R. 2015. QELL: QBF Reasoning with Extended Clause Learning and Levelized SAT Solving. In Heule, M.; and Weaver, S. A., eds., Theory and Applications of Satisfiability Testing – SAT 2015 – 18th International Conference, Austin, TX, USA, September 24-27, 2015, Proceedings, volume 9340 of Lecture Notes in Computer Science, 343–359. Springer.
Van Gelder, A. 2012. Contributions to the Theory of Practical Quantified Boolean Formula Solving. In Proc. Principles and Practice of Constraint Programming (CP), 647–663.
Vardi, M. Y. 2014. Boolean satisfiability: theory and engineering. Commun. ACM, 57(3): 5.
Vinyals, M. 2020. Hard Examples for Common Variable Decision Heuristics.
In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
Vinyals, M.; Li, C.; Fleming, N.; Kolokolova, A.; and Ganesh, V. 2023. Limits of CDCL Learning via Merge Resolution. In Mahajan, M.; and Slivovsky, F., eds., 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023), volume 271 of Leibniz International Proceedings in Informatics (LIPIcs), 27:1–27:19. Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik. ISBN 978-3-95977-286-0.
Wetzler, N.; Heule, M.; and Hunt Jr., W. A. 2014. DRAT-trim: Efficient Checking and Trimming Using Expressive Clausal Proofs. In Sinz, C.; and Egly, U., eds., Theory and Applications of Satisfiability Testing (SAT), volume 8561 of Lecture Notes in Computer Science, 422–429. Springer.
Zhang, L. 2006. Solving QBF by Combining Conjunctive and Disjunctive Normal Forms. In Proceedings, The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, July 16-20, 2006, Boston, Massachusetts, USA, 143–150. AAAI Press.
Zhang, L.; and Malik, S. 2002. Conflict driven learning in a quantified Boolean Satisfiability solver. In Proc. IEEE/ACM International Conference on Computer-aided Design (ICCAD), 442–449.
Testing Self-Reducible Samplers

Rishiraj Bhattacharyya¹*, Sourav Chakraborty²*, Yash Pote³,⁴*, Uddalok Sarkar²*, Sayantan Sen³*
¹University of Birmingham ²Indian Statistical Institute Kolkata ³National University of Singapore ⁴CREATE
[email protected], {sourav, uddalok_r}@isical.ac.in, {yashppote, sayantan789}@gmail.com

*These authors contributed equally. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Samplers are the backbone of the implementations of any randomised algorithm. Unfortunately, an efficient algorithm to test the correctness of samplers has proved very hard to find. Recently, in a series of works, testers like Barbarik, Teq, and Flash were obtained for some particular kinds of samplers, such as CNF samplers and Horn samplers. But their techniques have a significant limitation: one cannot expect to use their methods to test other samplers, such as perfect-matching samplers or samplers for sampling linear extensions in posets. In this paper, we present a new testing algorithm that works for such samplers and can estimate the distance of a new sampler from a known sampler (say, a uniform sampler). Testing the identity of distributions is at the heart of testing the correctness of samplers. This paper's main technical contribution is a new distance estimation algorithm for distributions over high-dimensional cubes using the recently proposed subcube conditioning sampling model. Given subcube conditioning access to an unknown distribution P, and a known distribution Q defined over {0, 1}ⁿ, our algorithm CubeProbeEst estimates the variation distance between P and Q within additive error ζ using Õ(n²/ζ⁴) subcube conditional samples from P. Following the testing-via-learning paradigm, we also get a tester which distinguishes between the cases when P and Q are ε-close or η-far in variation distance with probability at least 0.99 using Õ(n²/(η − ε)⁴) subcube conditional samples. The estimation algorithm in the subcube conditioning sampling model helps us to design the first tester for self-reducible samplers. The correctness of the testers is formally proved. We also implement CubeProbeEst and use it to test the quality of three samplers for sampling linear extensions in posets.

Introduction

Sampling algorithms play a pivotal role in enhancing the efficiency and accuracy of data analysis and decision-making across diverse domains (Chandra and Iyengar 1992; Yuan et al. 2004; Naveh et al. 2006; Mironov and Zhang 2006; Soos, Nohl, and Castelluccia 2009; Morawiecki and Srebrny 2013; Ashur, De Witte, and Liu 2017). With the exponential surge in data volume, these algorithms provide the means to derive meaningful insights from massive datasets without the burden of processing the complete information. Additionally, they aid in pinpointing and mitigating biases inherent in data, ensuring the attainment of more precise and equitable conclusions. From enabling statistical inferences to propelling advancements in machine learning, safeguarding privacy, and facilitating real-time decision-making, sampling algorithms stand as a cornerstone in extracting information from the vast data landscape of our modern world. However, many advanced sampling algorithms are either prohibitively slow (the hash-based techniques of (Chakraborty, Meel, and Vardi 2013; Ermon et al. 2013; Chakraborty et al. 2014; Meel et al. 2016) and the MCMC-based methods of (Andrieu et al. 2003; Brooks et al. 2011; Jerrum 1998))
or lack comprehensive verification ((Ermon, Gomes, and Selman 2012), (Dutra et al. 2018), (Golia et al. 2021)). Many popular methods like "statistical tests" rely on heuristics without guarantees of their efficacy. Utilizing unverified sampling algorithms can lead to significant pitfalls, including compromised conclusion accuracy and potential privacy and security vulnerabilities. Moreover, the absence of verification hampers transparency and reproducibility, underscoring the critical need for rigorous validation through testing, comparison, and consideration of statistical properties. Consequently, a central challenge in this field revolves around designing tools to certify sampling quality and verify correctness, which necessitates overcoming the intricate task of validating probabilistic programs and ensuring their distributions adhere to desired properties.

A notable breakthrough in addressing this verification challenge was achieved by (Chakraborty and Meel 2019), who introduced the statistical testing framework known as "Barbarik". This method proved instrumental in testing the correctness of uniform CNF (Conjunctive Normal Form) samplers by drawing samples from conditional distributions. Barbarik demonstrated three key properties: accepting an almost correct sampler with high probability, rejecting a far-from-correct sampler with high probability, and rejecting a "well-behaved" but far-from-correct sampler with high probability. There have been a series of follow-up works (Meel, Pote, and Chakraborty 2020; Pote and Meel 2021, 2022; Banerjee et al. 2023). However, in this framework, conditioning is achieved using a gadget that does not quite generalize to applications beyond CNF sampling. For instance, for linear-extension sampling (Huber 2014), where the goal is to sample a linear ordering agreeing with a given poset, the test requires that the post-conditioning residual input be a supergraph of the original input with the property that it has exactly two user-specified linear extensions. This requirement is hard to fulfill in general. On the other hand, a generic tester that would work for any sampler implementation without any additional constraints, and simultaneously be sample efficient, is too good to be true (Paninski 2008). From a practical perspective, the question is:

Can we design an algorithmic framework for testers that would work for most deployed samplers and still have practical sample complexity?

We answer the question positively. We propose algorithms that offer a generic approach to estimating the distance between a known and an unknown sampler, assuming both follow the ubiquitous self-reducible sampling strategy. Our techniques follow a constrained sampling approach, extending applicability to a wide range of samplers without mandating such specific structural conditions. A key foundational contribution of this paper is leveraging the subcube conditional sampling techniques of (Bhattacharyya and Chakraborty 2018) and devising a method to estimate the distance between samplers – a challenge often more intricate than simple correctness testing.

Organization of our paper

We first present the preliminaries, followed by a description of our results and their relevance. We then give a detailed description of our main algorithms CubeProbeEst and CubeProbeTester. The detailed theoretical analysis is presented in the supplementary material; here we only present a high-level technical overview. Finally, we present our experimental results and conclude. The extended version of the paper is available at www.arxiv.org/abs/2312.10999.
Preliminaries

In this paper, we deal with discrete probability distributions whose sample space is an n-dimensional Boolean hypercube {0, 1}ⁿ. For a distribution D over a universe Ω and any x ∈ Ω, we denote by D(x) the probability mass of the point x in D. [n] denotes the set {1, …, n}. For concise expressions and readability, we use the asymptotic complexity notation Õ, which hides polylogarithmic dependencies on the parameters.

Samplers, Estimators, and Testers

A sampler I : Domain → Range is a randomized algorithm which, given an input x ∈ Domain, outputs an element in Range. For a sampler I, D_{I,ψ} denotes the probability distribution of the output of I when the input is ψ ∈ Domain. In other words, for all x ∈ Range, D_{I,ψ}(x) = Pr[I(ψ) = x], where the probability is over the internal random coins of I. We define a sampler I_W to be a known sampler if, for any input ψ ∈ Domain, we know its probability distribution D_{I_W,ψ} explicitly. We note that the input ψ depends on the application. For example, in the perfect-matching and linear-extension samplers, ψ is a graph, whereas for the CNF sampler, ψ is a CNF formula.

Definition 1 (Total variation distance). Let I_W and I_G be two samplers. For an input ψ ∈ Domain, the variation distance between I_G and I_W is defined as
d^ψ_TV(I_G, I_W) = max_{A ⊆ Range} {D_{I_G,ψ}(A) − D_{I_W,ψ}(A)}.

Definition 2 ((ζ, δ)-approx d_TV estimator). A (ζ, δ)-approx d_TV estimator is a randomized approximation algorithm that, given two samplers I_G and I_W, an input ψ, a tolerance parameter ζ ∈ (0, 1/3] and a confidence parameter δ ∈ (0, 1), with probability (1 − δ) returns an estimate d̂ist_ψ of d^ψ_TV(I_G, I_W) such that
d^ψ_TV(I_G, I_W) − ζ ≤ d̂ist_ψ ≤ d^ψ_TV(I_G, I_W) + ζ.

Definition 3 (ε-closeness and η-farness). Consider any sampler I_G. I_G is said to be ε-close to another sampler I_W on input ψ if d^ψ_TV(I_G, I_W) ≤ ε holds. On the other hand, I_G is said to be η-far from I_W with respect to some input ψ if d^ψ_TV(I_G, I_W) ≥ η holds.

Definition 4 ((ε, η, δ)-identity tester). An (ε, η, δ)-identity tester takes as input an unknown sampler I_G, a known sampler I_W, an input ψ to the samplers, a tolerance parameter ε ∈ (0, 1/3), an intolerance parameter η ∈ (0, 1] with η > ε, and a confidence parameter δ ∈ (0, 1), and with probability at least (1 − δ): (1) outputs ACCEPT if I_G is ε-close to I_W on input ψ; (2) outputs REJECT if I_G is η-far from I_W on input ψ.

For practical purposes, the success probability 1 − δ can be 0.99 or any close-to-one constant. From now on, we shall consider the input domain and output range of a sampler to be Boolean hypercubes, that is, Domain = {0, 1}^m and Range = {0, 1}ⁿ for some integers m and n. Therefore, the universe of the samplers' output distributions is the set of n-dimensional binary strings.

Self-reducible sampler. A self-reducible sampler I : {0, 1}^m → {0, 1}ⁿ generates a sample x by first sampling a bit and then sampling the rest of the substring. Formally, we can define a self-reducible sampler as follows:

Definition 5 (Self-reducible sampler). A sampler I : {0, 1}^m → {0, 1}ⁿ is said to be a self-reducible sampler if, for any input ψ ∈ {0, 1}^m, there exists ψ̂ ∈ {0, 1}^m for which the following is true:
D_{I,ψ}(x_1 x_2 … x_n) |_{x_1 = b_1, …, x_i = b_i} = D_{I,ψ̂}(b_1 … b_i x_{i+1} … x_n),
where b_i ∈ {0, 1} for all i.
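To fix ideas, here is a minimal Python sketch (ours, not the paper's code) of how a self-reducible sampler exposes bit-by-bit sampling: conditioning on a prefix b_1 … b_i corresponds to re-invoking the sampler on the reduced input ψ̂, abstracted below as a conditional-marginal oracle.

```python
import random

def self_reducible_sample(marginal, n, prefix=()):
    """marginal(prefix) -> Pr[x_{i+1} = 1 | x_1..x_i = prefix].
    Draws the remaining bits one at a time, as in Definition 5."""
    x = list(prefix)
    while len(x) < n:
        p1 = marginal(tuple(x))                   # conditional marginal of the next bit
        x.append(1 if random.random() < p1 else 0)
    return tuple(x)

# The uniform distribution over {0,1}^n is trivially self-reducible:
uniform = lambda prefix: 0.5
print(self_reducible_sample(uniform, 4))          # unconditioned sample
print(self_reducible_sample(uniform, 4, (1, 0)))  # conditioned on x1 = 1, x2 = 0
```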
The concept of self-reducibility has been influential in the field of sampling since the work of (Jerrum, Valiant, and Vazirani 1986), which showed the computational complexity equivalence of approximate sampling and counting for problems in #P. Intuitively, self-reducibility is the idea that one can construct the solution to a given problem from the solutions of subproblems of the same problem. Self-reducibility is a critical requirement for simulating subcube conditioning. Also, it does not hamper the model's generality too much: as observed in (Khuller and Vazirani 1991; Große, Rothe, and Wechsung 2006; Talvitie, Vuoksenmaa, and Koivisto 2020), all except a few known problems are self-reducible.

Subcube Conditioning over Boolean Hypercubes

Let P be a probability distribution over {0, 1}ⁿ. Sampling using subcube conditioning accepts A_1, A_2, …, A_n ⊆ {0, 1}, constructs S = A_1 × A_2 × … × A_n as the condition set, and returns a vector x = (x_1, x_2, …, x_n), with x_i ∈ A_i, with probability P(x)/(Σ_{w∈S} P(w)). If P(S) = 0, we assume the sampling process returns an element of S uniformly at random. A sampler that follows this technique is called a subcube conditioning sampler.

Linear Extensions of a Poset

We applied our prototype implementation to verifying linear-extension samplers of a poset. Let us first start with the definition of a poset.

Definition 6 (Partially ordered set (Poset)). Let S be a set of k elements. A relation ⪯ (a subset of S × S) is said to be a partial order if ⪯ is (i) reflexive (a ⪯ a for every a ∈ S), (ii) anti-symmetric (a ⪯ b and b ⪯ a implies a = b for every a, b ∈ S), and (iii) transitive (a ⪯ b and b ⪯ c implies a ⪯ c for every a, b, c ∈ S). We say (S, ⪯) is a partially ordered set, or poset in short. If all pairs of S are comparable, that is, for any a, b ∈ S, either a ⪯ b or b ⪯ a, then (S, ⪯) is called a linearly ordered set.

Definition 7 (Linear extension of a poset). A relation ⪯_l ⊇ ⪯ is called a linear extension of ⪯ if (S, ⪯_l) is linearly ordered. Given a poset P = (S, ⪯), we denote the set of all possible linear extensions by L(P).

Definition 8 (Linear-extension sampler). Given a poset P = (S, ⪯), a linear-extension sampler I_Lext samples a linear extension ⪯_l of P from the set of all possible linear extensions L(P).

Linear Extensions to the Boolean Hypercube

Let us define a base linear ordering on S as ⪯′_l. We order the elements of S as S_1 ⪯′_l S_2 ⪯′_l … ⪯′_l S_k based on ⪯′_l, where k = |S|. For a poset P = (S, ⪯), we construct a k × k matrix M_P such that M_P(i, i) := 1 for all i, and for all i ≠ j: if S_i ⪯ S_j then M_P(i, j) := 1; M_P(i, j) := 0 when S_j ⪯ S_i; and if (S_i, S_j) ∉ ⪯, that is, if S_i and S_j are not comparable in ⪯, then M_P(i, j) := ∗. The matrix M_P is a unique representation of the poset P = (S, ⪯). M_P is anti-symmetric, i.e., the upper triangle of M_P is exactly the opposite of the lower triangle (apart from the ∗ and the diagonal entries), so the upper triangle of M_P without the diagonal entries suffices to represent P. Unrolling the upper triangle of M_P (without the diagonal) creates a string x_{M_P} ∈ {0, 1, ∗}^{C(k,2)}. Suppose for a poset P there are n ∗'s in the unrolling. Then sampling a linear extension of P is equivalent to sampling from a {0, 1}ⁿ subcube of the Boolean hypercube {0, 1}^{C(k,2)}, where P induces subcube conditioning by fixing the bits of the non-∗ dimensions. Adding one more pair, say (S_i′, S_j′), to P results in fixing one more bit of x_{M_P}, and vice versa. We introduce a mapping SubCond that can incorporate a new pair into the poset P and subsequently fixes the corresponding bit in the bit string x_{M_P}; thus SubCond provides a method to achieve subcube conditioning on a poset (see Figure 1 and the sketch below).
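The following hedged Python sketch (our code; all names are ours) reproduces the unrolling for the poset of Figure 1 and shows how fixing one symbol corresponds to adding one pair.

```python
from itertools import combinations

def unroll(S, leq):
    """x_{M_P}: one symbol per pair (a, b) with a before b in the base order:
    '1' if a ⪯ b, '0' if b ⪯ a, '*' if incomparable (after transitive closure)."""
    rel = set(leq) | {(a, a) for a in S}
    changed = True
    while changed:                           # naive transitive closure
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d)); changed = True
    return ''.join('1' if (a, b) in rel else '0' if (b, a) in rel else '*'
                   for a, b in combinations(S, 2))

S = [1, 2, 3, 4]
leq = {(1, 2), (1, 3), (2, 4), (1, 4)}
print(unroll(S, leq))               # '111*1*'  (matches Figure 1)
# Fixing the 4th symbol (pair (2,3)) to '0' means adding the pair (3, 2);
# transitivity then also forces 3 ⪯ 4, fixing the 6th symbol to '1':
print(unroll(S, leq | {(3, 2)}))    # '111011'
```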
Figure 1: The top-left graph represents the cover graph of a poset P over S = {1, 2, 3, 4} with poset relation ⪯ = {(1, 2), (1, 3), (2, 4), (1, 4)}. The bottom row shows two possible linear extensions, 1 ⪯ 2 ⪯ 3 ⪯ 4 and 1 ⪯ 3 ⪯ 2 ⪯ 4, with corresponding cover graphs (red arrows). The matrix on the top-right corresponds to
M_P =
[ 1 1 1 1 ]
[ 0 1 ∗ 1 ]
[ 0 ∗ 1 ∗ ]
[ 0 0 ∗ 1 ].
Unrolling the upper triangle of M_P gives x_{M_P} = 111∗1∗. Fixing the 4th bit of x_{M_P} to 0 is equivalent to including the relation (3, 2) in ⪯; here SubCond(⪯, 4) = ⪯ ∪ {(3, 2)}.

Basic Probability Facts

We will use the following probability notions in our algorithms. A random variable X is said to follow the exponential distribution with parameter λ, written X ∼ Exp(λ), if it has density λe^{−λx} for x ≥ 0 and 0 otherwise. A random variable X is said to be sub-Gaussian (SubG in short) with parameter α² if and only if its tails are dominated by a Gaussian of parameter α². We include formal definitions and related concentration bounds in the supplementary material.

Our Results

The main technical contribution of this work is the algorithm CubeProbeEst, which estimates the variation distance between a known and an unknown self-reducible sampler. The following informal theorem captures the details.

Theorem 9. For an error parameter ζ ∈ (0, 1) and a constant δ < 1/3, CubeProbeEst is a (ζ, δ)-approx d_TV estimator between a known self-reducible sampler I_W and an unknown self-reducible sampler I_G, with sample complexity Õ(n²/ζ⁴).

Our framework seamlessly extends to yield an (ε, η, δ)-tester CubeProbeTester through the "testing-via-learning" paradigm (Diakonikolas et al. 2007; Gopalan et al. 2009; Servedio 2010). To test whether the sampler's output distribution is ε-close or η-far from the target output distribution, the resultant tester requires Õ(n²/(η − ε)⁴) samples.

To demonstrate the usefulness of CubeProbeEst, we developed a prototype implementation with experimental evaluations, gauging the correctness of linear-extension samplers while emulating uniform samplers. Counting the size of the set of linear extensions and sampling from it have been widely studied in a series of works (Huber 2014; Talvitie et al. 2018a,b). The problem has found extensive applications in artificial intelligence, particularly in learning graphical models (Wallace, Korb, and Dai 1996), sorting (Peczarski 2004), sequence analysis (Mannila and Meek 2000), convex rank tests (Morton et al. 2009), preference reasoning (Lukasiewicz, Martinez, and Simari 2014), partial-order plans (Muise, Beck, and McIlraith 2016), etc. Our implementation extends to a closeness tester that accepts "close to uniform" samplers and rejects "far from uniform" samplers. Moreover, while rejecting, our implementation can produce a certificate of non-uniformity. CubeProbeEst and CubeProbeTester are the first estimator and tester for general self-reducible samplers.

Novelty in Our Contributions

In relation to previous works, we emphasize two crucial novel contributions.
• Our algorithm is grounded in a notably refined form of "grey-box" sampling methodology, setting it apart from prior research (Chakraborty and Meel 2019; Meel, Pote, and Chakraborty 2020; Banerjee et al. 2023). While prior approaches required arbitrary conditioning, our algorithm builds on the significantly weaker subcube conditional sampling paradigm (Bhattacharyya and Chakraborty 2018). Subcube conditioning is a natural fit for ubiquitous self-reducible sampling, and thus our algorithm accommodates a considerably broader spectrum of sampling scenarios.

• All previous works produced testers crafted to give a "yes" or "no" answer on the correctness of samplers. In essence, these testers strive to endorse samplers that exhibit "good" behavior while identifying and rejecting those that deviate significantly from this standard. However, there is inherent technical ambiguity in setting the distance thresholds (η and ε) that label a sampler as good or bad. In contrast, the CubeProbeEst framework produces an estimate of the statistical distance itself, which allows a practitioner to make informed and precise choices when selecting a sampler implementation. In this context, CubeProbeEst is the first of its kind.

Our Contribution in the Context of Distribution Testing with Subcube Conditional Samples

The crucial component in designing our self-reducible-sampler tester CubeProbeEst is a novel algorithm for estimating the variation distance in the subcube conditioning model of distribution testing. Given sampling access to an unknown distribution P and a known distribution Q over {0, 1}ⁿ, the distance estimation problem asks to estimate the variation distance between P and Q. The corresponding testing problem is tolerant identity testing of P and Q. Distance estimation and tolerant testing with subcube conditional samples have been open since the introduction of the framework five years ago. The following theorem formalizes our result in this context.

Theorem 10. Let P be an unknown distribution and Q be a known distribution defined over {0, 1}ⁿ. Given subcube conditioning access to P, an approximation parameter ζ ∈ (0, 1) and a confidence parameter δ ∈ (0, 1), there exists an algorithm that takes Õ(n²) subcube conditional samples from P in expectation and outputs an estimate of d_TV(P, Q) with additive error ζ with probability at least 1 − δ.

This is the first algorithm that solves the variation distance estimation problem with Õ(n²) subcube conditioning samples.
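In display form, the two identities the estimator rests on are the following (restated from the high-level technical overview later in this section; the positive part in the first identity is exactly what Line 8 of Algorithm 1 below computes):

```latex
\[
  d^{\psi}_{\mathrm{TV}}(\mathcal{I}_G,\mathcal{I}_W)
  = \mathbb{E}_{x \sim D_{\mathcal{I}_G,\psi}}\!\left[\max\!\left(0,\,
      1 - \frac{D_{\mathcal{I}_W,\psi}(x)}{D_{\mathcal{I}_G,\psi}(x)}\right)\right],
  \qquad
  D_{\mathcal{I}_G,\psi}(x)
  = \prod_{i=1}^{n}\Pr_{w \sim D_{\mathcal{I}_G,\psi}}
      \bigl[w_i = x_i \mid w_1 = x_1,\dots,w_{i-1} = x_{i-1}\bigr].
\]
```

The first identity reduces distance estimation to mass-ratio estimation at sampled points; the second (the chain rule) lets the unknown mass D_{I_G,ψ}(x) be assembled from n conditional marginals, each of which subcube conditioning can expose.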
Related Works

The state-of-the-art approach for efficiently testing CNF samplers was initiated by Chakraborty and Meel (Chakraborty and Meel 2019). They employed the concept of hypothesis testing with conditional samples (Chakraborty et al. 2016; Canonne, Ron, and Servedio 2015) and showed that such samples can be "simulated" in the case of CNF samplers. The approach produced mathematical guarantees on the correctness of their tester. Their idea was extended to design a series of testers for various types of CNF samplers: Barbarik (Chakraborty and Meel 2019) for uniform CNF samplers, Barbarik2 (Meel, Pote, and Chakraborty 2020) for weighted CNF samplers, Teq (Pote and Meel 2021) for testing probabilistic circuits, Flash (Banerjee et al. 2023) for Horn samplers, and Barbarik3 (Pote and Meel 2022) for constrained samplers.

The theoretical foundation of our work follows the subcube conditioning model of property testing of probability distributions. This model was introduced by (Bhattacharyya and Chakraborty 2018) as a special case of the conditional sampling model (Chakraborty et al. 2016; Canonne, Ron, and Servedio 2015) targeted towards high-dimensional distributions. Almost all the known results in the subcube conditioning framework deal with problems in the non-tolerant regime: testing uniformity, identity, and equivalence of distributions. (Canonne et al. 2021) presented an optimal algorithm for (non-tolerant) uniformity testing in this model. (Chen et al. 2021) studied the problem of learning and testing junta distributions. Recently, (Mahajan et al. 2023) studied the problem of learning hidden Markov models. (Blanca et al. 2023) studied identity testing in the related coordinate conditional sampling model. (Fotakis, Kalavasis, and Tzamos 2020) studied the parameter estimation problem for truncated Boolean product distributions. Recently, (Chen and Marcussen 2023) studied the problem of uniformity testing in hypergrids. Very recently, in a concurrent work, the authors of (Kumar, Meel, and Pote 2023) studied the problem of tolerant equivalence testing where both samplers are unknown and designed an algorithm that takes Õ(n³) samples.

Estimator of Self-reducible Samplers

Our estimator utilizes the subcube conditional sampling technique. The main program CubeProbeEst works with two subroutines: Est and GBAS. The algorithm GBAS is adapted from the Gamma Bernoulli Approximation Scheme (Huber 2017). Since its intricacies are crucial for our algorithm, we include it here for completeness.

CubeProbeEst: Given a known self-reducible sampler I_W, subcube conditioning access to an unknown self-reducible sampler I_G, an input ψ, an approximation parameter ζ and a confidence parameter δ, this algorithm estimates the variation distance between I_G and I_W with additive error ζ.

Algorithm 1: CubeProbeEst(I_G, I_W, ψ, ζ, δ)
1  α ← (2/ζ²) · log(4/δ)
2  γ ← ζ / (1.11(2 + ζ))
3  δ′ ← δ/(2α)
4  S ← ∅
5  S ← α i.i.d. samples from I_G(ψ)
6  val ← 0
7  for x ∈ S do
8      val ← val + max(0, 1 − D_{I_W,ψ}(x) / Est(I_G, ψ, n, x, γ, δ′))
9  return val/α

CubeProbeEst uses the algorithm Est as a subroutine. It starts by setting the parameters α, γ, δ′ in Lines 1–3. In Line 4, it initializes an empty multi-set S, and then takes α samples from I_G(ψ) into S in Line 5. It then defines a counter val, initialized to 0, in Line 6. In the for loop starting at Line 7, for every sample x ∈ S obtained before, CubeProbeEst calls the subroutine Est in Line 8 to estimate the probability mass of D_{I_G,ψ} at x. Finally, in Line 9, we output val/α as the estimated variation distance and terminate the algorithm.

Algorithm 2: Est(I_G, ψ, n, x, γ, δ′)
1  k ← ⌈(3n/γ²) · log(2n/δ′)⌉
2  for i = 1 to n do
3      ψ̂ ← SubCond(ψ, x_1 … x_{i−1})
4      P̂_i ← GBAS(I_G, ψ̂, i, k, x_i)
5  D̂_{I_G,ψ}(x) ← Π_{i=1}^{n} P̂_i
6  return D̂_{I_G,ψ}(x)

Est: Given subcube conditioning access to the unknown self-reducible sampler I_G, an input ψ, the dimension n, an n-bit string x, and parameters γ and δ′, the subroutine Est returns an estimate of the probability mass of D_{I_G,ψ} at x by employing the subroutine GBAS. In the for loop starting at Line 2, it first calls SubCond with ψ and x_1, …, x_{i−1}, which outputs ψ̂.
In Line 4, it calls GBAS with I_G, ψ̂, i, k, and the i-th bit of x, i.e., x_i – with the integer k chosen in Line 1 so that each marginal estimate fails with probability at most δ′/n – to obtain P̂_i, the empirical weight of D_{I_G,ψ̂}. In Line 5, Est computes the empirical weight of D_{I_G,ψ}(x) by taking the product of all the marginal estimates P̂_1, …, P̂_n obtained from the for loop. Finally, in Line 6, Est returns D̂_{I_G,ψ}(x), the estimated weight of the distribution D_{I_G,ψ} on x.

Algorithm 3: GBAS(I_G, ψ̂, i, k, HEAD)
1  s ← 0, r ← 0
2  while s < k do
3      w ∼ I_G(ψ̂)
4      if HEAD = w_i then
5          s ← s + 1
6      a ∼ Exp(1), r ← r + a
7  p̂ ← (k − 1)/r
8  return p̂

GBAS: Given access to the unknown self-reducible sampler I_G, input ψ̂, integers i and k, and a bit HEAD, GBAS outputs an estimate p̂ of p. GBAS starts by declaring two variables s and r, initialized to 0, in Line 1. In the while loop starting at Line 2, as long as s < k, it first takes a sample w from the sampler I_G on input ψ̂ in Line 3. In Line 4, it checks whether HEAD equals w_i, where w_i is the i-th bit of the n-bit sample w; if so, then in Line 5 it increments s by 1. Then in Line 6, GBAS samples a following Exp(1), the exponential distribution with parameter 1, and assigns r + a to r. At the end of the loop, Line 7 assigns the estimated probability p̂ as (k − 1)/r. Finally, in Line 8, GBAS returns the estimated probability p̂.

Theoretical Analysis of Our Estimator

The formal result for our estimator is presented below.

Theorem 9. For an error parameter ζ ∈ (0, 1) and a constant δ < 1/3, CubeProbeEst is a (ζ, δ)-approx d_TV estimator between a known self-reducible sampler I_W and an unknown self-reducible sampler I_G, with sample complexity Õ(n²/ζ⁴).

The formal proof is presented in the supplementary material.

High-level Technical Overview

The main idea of CubeProbeEst stems from an equivalent characterization of the variation distance, which states that d^ψ_TV(I_G, I_W) = E_{x∼D_{I_G,ψ}}[max(0, 1 − D_{I_W,ψ}(x)/D_{I_G,ψ}(x))]. Our goal is to estimate the ratio D_{I_W,ψ}(x)/D_{I_G,ψ}(x) for samples x drawn from D_{I_G,ψ}. As I_W is known, it is sufficient to estimate D_{I_G,ψ}(x). It is generally difficult to estimate D_{I_G,ψ}(x) directly. However, using the self-reducibility of I_G to mount subcube conditioning access to D_{I_G,ψ}, we estimate D_{I_G,ψ}(x) via the n conditional marginal distributions of D_{I_G,ψ}: using the chain formula, we obtain the value of D_{I_G,ψ}(x) by multiplying these conditional probabilities. This is achieved by the subroutine Est. The probability mass estimation of each conditional marginal distribution is achieved by the subroutine GBAS, which is called from Est. The idea of GBAS follows from (Huber 2017), which roughly states that to estimate the probability of heads (say p) of a biased coin within (multiplicative) error γ_i and success probability at least 1 − δ, it is sufficient to make T coin tosses on average, where T = k/p with k ≥ 3 log(2/δ)/γ_i².

The crucial parameter is the error margin γ_i used in Est. It should be set so that, after taking the errors in all the marginals into account, the total error remains bounded by the target error margin γ. Our pivotal observation is that the error distribution in the subroutine GBAS, when estimating the mass of the conditional marginal distributions, is a sub-Gaussian distribution (that is, a Gaussian distribution dominates its tails). Following the tail bound on the sum of sub-Gaussian random variables, we can afford to estimate the mass of each marginal with error γ_i = γ/√n and still get an estimate of D_{I_G,ψ}(x) with error at most γ. That way, the total sample complexity of Est reduces to Õ(n/(γ/√n)²) = Õ(n²/γ²). As α/γ² = O(1/ζ⁴), we get the claimed sample complexity of CubeProbeEst.
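To make the pipeline concrete, here is a hedged Python sketch of GBAS (Algorithm 3) and the chain-rule mass estimate of Algorithm 2 (our code, not the authors' implementation; `draw_conditioned(prefix)` is an assumed stand-in for one sample of I_G(ψ) under the subcube condition fixing that prefix).

```python
import math
import random

def gbas(draw, i, k, head):
    """GBAS (Algorithm 3): estimate p = Pr[w_i = head] under the current
    conditioning; stops after k matches, using about k/p draws on average."""
    s, r = 0, 0.0
    while s < k:
        w = draw()
        if w[i] == head:
            s += 1
        r += random.expovariate(1.0)   # a ~ Exp(1), added on every draw
    return (k - 1) / r                 # r ~ Gamma(k, p), so (k-1)/r estimates p

def est_mass(draw_conditioned, x, gamma, delta_prime):
    """Est (Algorithm 2): estimate D_{I_G,psi}(x) as a product of n
    conditional-marginal estimates, each with error ~ gamma/sqrt(n)."""
    n = len(x)
    k = math.ceil(3 * n / gamma**2 * math.log(2 * n / delta_prime))
    mass = 1.0
    for i in range(n):
        # condition on the prefix x_1..x_{i-1} (the SubCond step), estimate bit i
        mass *= gbas(lambda: draw_conditioned(x[:i]), i, k, x[i])
    return mass
```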
Following the tail bound on the sum of SubGaussian random variables, we could afford to estimate the mass of each of the marginal with error γi = γ/√n and still get an estimation of DIG,ψ(x) with a correctness error of at most γ. That way the total sample complexity of Est reduces to e O(n/(γ/√n)2) = e O(n2/γ2). As α/γ2 = O 1/ζ4 , we get the claimed sample complexity of CubeProbeEst. From Estimator to Tester We extend our design to a tester named CubeProbeTester that tests if two samplers are close or far in variation distance. As before, the inputs to CubeProbeTester are two self-reducible samplers IG, IW, an input ψ, parameters ε, η, and the confidence parameter δ. CubeProbeTester first computes the estimation margin-of-error ζ as (η −ε)/2, and sets an intermediate confidence parameter δt as 2δ. The algorithm estimates the distance between IG and IW on input ψ, by invoking CubeProbeEst on IG, IW, ψ along with the estimation-margin ζ and δt. If the computed distance d dist is more than the threshold K = (η + ε)/2, the tester rejects. Otherwise, the tester accepts. Algorithm 4: CubeProbeTester (IG, IW, ψ, ε, η, δ) 1 ζ = (η −ε)/2 2 δt = 2δ 3 K = (η + ε)/2 4 d dist = CubeProbeEst(IG, IW, ψ, ζ, δt) 5 if d dist > K then 6 return REJECT 7 return ACCEPT The details of CubeProbeTester are summarised below. Theorem 11. Consider an unknown self-reducible sampler IG, a known self-reducible sampler IW, an input ψ, closeness parameter ε ∈(0, 1), farness parameter η ∈(0, 1) with η > ε and a confidence parameter δ ∈(0, 1). There exists a (ε, η, δ)-Self-reducible-sampler-tester CubeProbeTester that takes e O  n2/ (η −ε)4 samples. We note that our tester is general enough that when IG is ε-close to IW in ℓ∞-distance 1, then CubeProbeTester outputs ACCEPT . Moreover, If CubeProbeTester outputs reject on input ψ, then one can extract a configuration (witness of rejection) ψe such that IG and IW are η-far. Evaluation Results To evaluate the practical effectiveness of our proposed algorithms, we implemented prototype of CubeProbeEst and CubeProbeTester in Python3 2. We use CubeProbeEst to 1IG is ε-close to IW on input ψ in ℓ∞-distance if for every x ∈{0, 1}n, (1−ε)DIW ,ψ(x) ≤DIG,ψ(x) ≤(1+ε)DIW ,ψ(x). 2Codes and experimental results are available at www.github.com/uddaloksarkar/cubeprobe. estimate the variation distance (dTV) of three linear extension samplers from a perfect uniform sampler. SAT solvers power the backends of these linear extension samplers. The objective of our empirical evaluation was to answer the following: RQ1 Can CubeProbeEst estimate the distance of linear extension samplers from a known (e.g., uniform) sampler? RQ2 How many samples CubeProbeEst requires to estimate the distance? RQ3 How do the linear extension samplers behave with an increasing number of dimensions? Boolean encoding of Poset Given a poset P = (S, ⪯P ), we encode it using a Boolean formula φP in conjunctive normal form (CNF), as described in (Talvitie et al. 2018b): 1 for all elements a, b ∈S, the formula φP contains the variables of the form vab such that vab = 1 represents a ⪯b and vab = 0 represents b ⪯a. 2 The CNF formula φP contains the following clauses. Type-1: vab for all a, b ∈S such that a ⪯P b. This enforces the poset relation ⪯P. Type-2: ¬vab ∨¬vbc ∨vac for all a, b, c ∈S to guarantee the transitivity. This reduction requires |S|C2 many variables and |S|P3 many clauses of type-2. The number of clauses of type-1 depends on the number of edges in the cover graph of P. 
Experimental Setup

Samplers Used: To assess the performance of CubeProbeEst and CubeProbeTester, we utilized three different linear-extension samplers – LxtQuickSampler, LxtSTS, and LxtCMSGen – and estimated their d_TV distances from a uniform sampler. The backends of these samplers are powered by three state-of-the-art CNF samplers: QuickSampler (Dutra et al. 2018), STS (Ermon, Gomes, and Selman 2012), and CMSGen (Golia et al. 2021). A poset-to-CNF encoder precedes these CNF samplers, and a Boolean string-to-poset extractor succeeds them, to build the linear-extension samplers. We also required access to a known uniform sampler, which is equivalent to having access to a linear-extension counter³. To meet this need, we utilized an exact model counter for CNF formulas: SharpSAT-TD (Korhonen and Järvisalo 2021).

Poset Instances: We adopted a subset of the poset instances from the experimental setups of (Talvitie et al. 2018a) and (Talvitie et al. 2018b) to evaluate CubeProbeEst and CubeProbeTester. The instances include three different kinds of posets: (a) posets of type avgdegk, generated from DAGs with average indegree k = 3, 5; (b) posets of type bipartitep, generated from a bipartite set S = A ∪ B by adding the order constraint a ≺ b (resp. b ≺ a) with probability p (resp. 1 − p) for all (a, b) ∈ A × B; (c) posets of type bayesiannetwork, obtained from the transitive closure of a randomly sampled subgraph of Bayesian networks from (Elidan 1998).

³For a set S, if we know its size |S|, we know the mass of each element under a uniform sampler to be 1/|S|.
Environment All experiments are carried out on a highperformance computer cluster, where each node consists of AMD EPYC 7713 CPUs with 2x64 cores and 512 GB memory. All tests were run in multi-threaded mode with 8 threads per instance per sampler with a timeout of 12 hrs. Experimental Results & Discussion RQ1 Table 1 shows a subset of our experimental results. Due to space constraints, we have postponed presenting our comprehensive experimental results to the supplementary material. We found that among 90 instances: • LxtQuickSampler has the maximum dTV from uniformity in 48 instances, LxtSTS in 14 instances, and LxtCMSGen in 28 instances; • LxtQuickSampler has the minimum dTV from uniformity in 10 instances, LxtSTS in 69 instances, and LxtCMSGen in 11 instances; These observations indicate that LxtSTS serves as a linear extension sampler that closely resembles uniform distribution characteristics. At the same time, LxtQuickSampler deviates significantly from the traits of a uniform-like linear extension sampler. LxtCMSGen falls in an intermediate position between these two. RQ2 Table 1 reflects that the number of samples drawn by CubeProbeEst depends on the dimension of an instance. Again, when the dimension is kept constant, the number of samples drawn remains similar across all runs. RQ3 In Figure 2, at lower dimensions, both LxtQuickSampler and LxtCMSGen behave relatively close to uniform sampling. However, as the dimension increases, dTV between these two samplers from uniformity increases. In contrast, LxtSTS shows a different behavior. In lower dimensions, the estimated dTV distance can be notably high for certain instances, but it tends to stabilize with increasing dimension. It is worth highlighting that, in higher dimensions, LxtSTS demonstrates a more uniform-like sampling behavior compared to the other two samplers. Conclusion In this paper, we have designed the first self-reducible sampler tester, and used it to test linear extension samplers. We have also designed a novel variation distance estimator in the subcube-conditioning model along the way. Limitations of our work Our algorithm takes e O(n2) samples while the known lower bound for tolerant testing with subcube conditioning is of Ω(n/ log n) for this task (Canonne et al. 2020). Moreover, our algorithm works when the samplers are self-reducible, which is required for our analysis. So our algorithm can not handle non-self-reducible samplers, such as in (Große, Rothe, and Wechsung 2006; Talvitie, Vuoksenmaa, and Koivisto 2020). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7958 Acknowledgements Rishiraj Bhattacharyya acknowledges the support of UKRI by EPSRC grant number EP/Y001680/1. Uddalok Sarkar is supported by the Google PhD Fellowship. Sayantan Sen’s research is supported by the National Research Foundation Singapore under its NRF Fellowship Programme (NRFNRFFAI1-2019-0002). This research is part of the programme DesCartes and is supported by the National Research Foundation, Prime Minister’s Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. The computational works of this article were performed on the resources of the National Supercomputing Centre, Singapore www.nscc.sg. References Andrieu, C.; De Freitas, N.; Doucet, A.; and Jordan, M. I. 2003. An introduction to MCMC for machine learning. Machine Learning. Ashur, T.; De Witte, G.; and Liu, Y. 2017. An automated tool for rotational-xor cryptanalysis of arx-based primitives. 
In SITB.
Banerjee, A.; Chakraborty, S.; Chakraborty, S.; Meel, K. S.; Sarkar, U.; and Sen, S. 2023. Testing of Horn Samplers. In AISTATS.
Bhattacharyya, R.; and Chakraborty, S. 2018. Property testing of joint distributions using conditional samples. ACM Transactions on Computation Theory (TOCT).
Blanca, A.; Chen, Z.; Štefankovič, D.; and Vigoda, E. 2023. Complexity of High-Dimensional Identity Testing with Coordinate Conditional Sampling. In COLT.
Brooks, S.; Gelman, A.; Jones, G.; and Meng, X.-L. 2011. Handbook of Markov Chain Monte Carlo. Cambridge University Press.
Canonne, C. L.; Chen, X.; Kamath, G.; Levi, A.; and Waingarten, E. 2021. Random restrictions of high dimensional distributions and uniformity testing with subcube conditioning. In SODA.
Canonne, C. L.; Diakonikolas, I.; Kane, D. M.; and Stewart, A. 2020. Testing Bayesian networks. IEEE Transactions on Information Theory.
Canonne, C. L.; Ron, D.; and Servedio, R. A. 2015. Testing probability distributions using conditional samples. SIAM Journal on Computing.
Chakraborty, S.; Fischer, E.; Goldhirsh, Y.; and Matsliah, A. 2016. On the power of conditional samples in distribution testing. SIAM Journal on Computing.
Chakraborty, S.; Fremont, D.; Meel, K.; Seshia, S.; and Vardi, M. 2014. Distribution-aware sampling and weighted model counting for SAT. In AAAI.
Chakraborty, S.; and Meel, K. S. 2019. On testing of uniform samplers. In AAAI.
Chakraborty, S.; Meel, K. S.; and Vardi, M. Y. 2013. A scalable and nearly uniform generator of SAT witnesses. In ICCAD.
Chandra, A. K.; and Iyengar, V. S. 1992. Constraint solving for test case generation: a technique for high-level design verification. In ICCD.
Chen, X.; Jayaram, R.; Levi, A.; and Waingarten, E. 2021. Learning and testing junta distributions with subcube conditioning. In COLT.
Chen, X.; and Marcussen, C. 2023. Uniformity Testing over Hypergrids with Subcube Conditioning.
Diakonikolas, I.; Lee, H. K.; Matulef, K.; Onak, K.; Rubinfeld, R.; Servedio, R. A.; and Wan, A. 2007. Testing for concise representations. In FOCS.
Dutra, R.; Laeufer, K.; Bachrach, J.; and Sen, K. 2018. Efficient sampling of SAT solutions for testing. In ICSE.
Elidan, G. 1998. Bayesian-Network-Repository. cs.huji.ac.il/w-galel/Repository/.
Ermon, S.; Gomes, C. P.; Sabharwal, A.; and Selman, B. 2013. Embed and project: Discrete sampling with universal hashing. NeurIPS.
Ermon, S.; Gomes, C. P.; and Selman, B. 2012. Uniform Solution Sampling Using a Constraint Solver As an Oracle. In UAI.
Fotakis, D.; Kalavasis, A.; and Tzamos, C. 2020. Efficient parameter estimation of truncated boolean product distributions. In COLT.
Golia, P.; Soos, M.; Chakraborty, S.; and Meel, K. S. 2021. Designing samplers is easy: The boon of testers. In FMCAD.
Gopalan, P.; O'Donnell, R.; Servedio, R. A.; Shpilka, A.; and Wimmer, K. 2009. Testing Fourier Dimensionality and Sparsity. In ICALP.
Große, A.; Rothe, J.; and Wechsung, G. 2006. On computing the smallest four-coloring of planar graphs and non-self-reducible sets in P. Information Processing Letters.
Huber, M. 2014. Near-linear time simulation of linear extensions of a height-2 poset with bounded interaction. Chicago Journal of Theoretical Computer Science.
Huber, M. 2017. A Bernoulli mean estimate with known relative error distribution. Random Struct. Algorithms.
Jerrum, M. 1998. Mathematical foundations of the Markov chain Monte Carlo method. In Probabilistic methods for algorithmic discrete mathematics.
Jerrum, M. R.; Valiant, L. G.; and Vazirani, V. V. 1986.
Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science.
Khuller, S.; and Vazirani, V. V. 1991. Planar graph coloring is not self-reducible, assuming P ≠ NP. Theoretical Computer Science.
Korhonen, T.; and Järvisalo, M. 2021. SharpSAT-TD Participating in Model Counting Competition 2021.
Kumar, G.; Meel, K. S.; and Pote, Y. 2023. Tolerant Testing of High-Dimensional Samplers with Subcube Conditioning. arXiv:2308.04264.
Lukasiewicz, T.; Martinez, M. V.; and Simari, G. I. 2014. Probabilistic preference logic networks. In ECAI.
Mahajan, G.; Kakade, S.; Krishnamurthy, A.; and Zhang, C. 2023. Learning Hidden Markov Models Using Conditional Samples. In COLT.
Mannila, H.; and Meek, C. 2000. Global partial orders from sequential data. In SIGKDD.
Meel, K. S.; Pote, Y. P.; and Chakraborty, S. 2020. On testing of samplers. NeurIPS.
Meel, K. S.; Vardi, M. Y.; Chakraborty, S.; Fremont, D. J.; Seshia, S. A.; Fried, D.; Ivrii, A.; and Malik, S. 2016. Constrained Sampling and Counting: Universal Hashing Meets SAT Solving. In Beyond NP, Papers from the 2016 AAAI Workshop, AAAI.
Mironov, I.; and Zhang, L. 2006. Applications of SAT solvers to cryptanalysis of hash functions. In SAT.
Morawiecki, P.; and Srebrny, M. 2013. A SAT-based preimage analysis of reduced Keccak hash functions. Information Processing Letters.
Morton, J.; Pachter, L.; Shiu, A.; Sturmfels, B.; and Wienand, O. 2009. Convex rank tests and semigraphoids. SIAM Journal on Discrete Mathematics.
Muise, C.; Beck, J. C.; and McIlraith, S. A. 2016. Optimal partial-order plan relaxation via MaxSAT. Journal of Artificial Intelligence Research.
Naveh, Y.; Rimon, M.; Jaeger, I.; Katz, Y.; Vinov, M.; Marcu, E.; and Shurek, G. 2006. Constraint-based random stimuli generation for hardware verification.
Paninski, L. 2008. A Coincidence-Based Test for Uniformity Given Very Sparsely Sampled Discrete Data. IEEE Transactions on Information Theory.
Peczarski, M. 2004. New results in minimum-comparison sorting. Algorithmica.
Pote, Y.; and Meel, K. S. 2022. On Scalable Testing of Samplers. NeurIPS.
Pote, Y. P.; and Meel, K. S. 2021. Testing probabilistic circuits. NeurIPS.
Servedio, R. A. 2010. Testing by implicit learning: a brief survey. Property Testing.
Soos, M.; Nohl, K.; and Castelluccia, C. 2009. Extending SAT solvers to cryptographic problems. In SAT.
Talvitie, T.; Kangas, J.-K.; Niinimäki, T.; and Koivisto, M. 2018a. A scalable scheme for counting linear extensions. In IJCAI.
Talvitie, T.; Kangas, K.; Niinimäki, T.; and Koivisto, M. 2018b. Counting linear extensions in practice: MCMC versus exponential Monte Carlo. In AAAI.
Talvitie, T.; Vuoksenmaa, A.; and Koivisto, M. 2020. Exact sampling of directed acyclic graphs from modular distributions. In UAI.
Wallace, C.; Korb, K. B.; and Dai, H. 1996. Causal discovery via MML. In ICML.
Yuan, J.; Aziz, A.; Pixley, C.; and Albin, K. 2004. Simplifying boolean constraint solving for random simulation-vector generation. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.
2024
884
18,722
Using Symmetries to Lift Satisfiability Checking
Pierre Carbonnelle1, Gottfried Schenner2, Maurice Bruynooghe1, Bart Bogaerts3, Marc Denecker1
1KU Leuven, Belgium 2Siemens, Austria 3Vrije Universiteit Brussels, Belgium
{pierre.carbonnelle, maurice.bruynooghe, marc.denecker}@kuleuven.be, [email protected], [email protected]

Abstract

We analyze how symmetries can be used to compress structures (also known as interpretations) onto a smaller domain without loss of information. This analysis suggests the possibility of solving satisfiability problems in the compressed domain for better performance. Thus, we propose a novel 2-step method: (i) the sentence to be satisfied is automatically translated into an equisatisfiable sentence over a "lifted" vocabulary that allows domain compression; (ii) satisfiability of the lifted sentence is checked by growing the (initially unknown) compressed domain until a satisfying structure is found. The key issue is to ensure that this satisfying structure can always be expanded into an uncompressed structure that satisfies the original sentence to be satisfied. We present an adequate translation for sentences in typed first-order logic extended with aggregates. Our experimental evaluation shows large speedups for generative configuration problems. The method also has applications in the verification of software operating on complex data structures. Our results justify further research in automatic translation of sentences for symmetry reduction.

1 Introduction

In made-to-order manufacturing, the configuration problem is the problem of finding a configuration of components that satisfies the customer requirements and feasibility constraints (Felfernig et al. 2014). Such problems can be solved by choosing a formal vocabulary and by representing the customer requirements and the feasibility criteria as a logic sentence to be satisfied. A structure satisfying the sentence (a model) represents an acceptable configuration. Methods to solve configuration problems do not scale well, and various heuristics have been used to improve performance (Schenner and Taupe 2017). Configurations often have components that are interchangeable; these are the source of many redundancies in the search space that negatively impact performance. The standard approach is to add symmetry breaking rules (Crawford et al. 1996). Here, we use another approach: we reformulate the problem to reduce symmetries (Gent, Petrie, and Puget 2006). This approach has been less studied, and is more an art than a science.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

We observed that solutions to configuration problems can be compressed into what we call a "lifted model", and that the solution of the original problem can be obtained by expanding the lifted model. This suggested that configuration problems and, more generally, satisfiability problems could be solved in the compressed domain: if the compressed domain is significantly smaller, this could lead to better performance. As a toy example, consider the following pigeonhole problem: "given 10 pigeons and 5 pigeonholes, assign each pigeon to a pigeonhole such that each hole has (at most) 2 pigeons." With the appropriate vocabulary, a solution is:

Pigeon = {p1, . . . , p10}   (1)
Hole = {h1, . . . , h5}   (2)
isIn = {(p1, h1), (p2, h2), . . . , (p5, h5), (p6, h1), (p7, h2), . . . , (p10, h5)}   (3)
Note the symmetries: another solution is obtained by exchanging 2 pigeons or 2 holes in the interpretation of isIn. This solution can be compressed to:

Pigeon = {p1}   (4)
Hole = {h1}   (5)
mul = {p1 ↦ 10, h1 ↦ 5}   (6)
isIn = {(p1, h1)}   (7)

where the mul function indicates how many concrete domain elements each "lifted" domain element represents, and thus allows the domain compression. The lifted pigeon (resp. hole) represents 10 concrete pigeons (resp. 5 concrete holes). Even though the compressed structure only has 2 domain elements (p1, h1, excluding the naturals), it contains enough information to allow us to expand it into a model isomorphic to the original model. Following the theory we develop in Section 3, the expansion is as follows:

p1 ⇝ p1, . . . , p10
h1 ⇝ h1, . . . , h5
(p1, h1) ⇝ {(p1, h1), . . . , (p5, h5), (p6, h1), . . . , (p10, h5)}

Furthermore, in Section 4, we present a translation of sentences in typed first-order logic extended with aggregates into sentences that allow domain compression. We show that a sentence in that language is equisatisfiable with its translation, given the number of concrete elements in each type: if either one is satisfiable, the other one is too. This analysis allows us to solve satisfiability problems in the compressed domain. The novel method operates in two steps: (i) the sentence to be satisfied is automatically translated into a "lifted" sentence over the "lifted" vocabulary; (ii) satisfiability of the lifted sentence is checked by step-wise growing the compressed domain¹ until a satisfying structure is found. Crucially, this satisfying structure can always be expanded into a model of the original sentence.

Notice that the lifted model for a variation of the pigeonhole problem above with 100 times more pigeons and holes has the same number of lifted domain elements as the base case: the multiplicity of each lifted domain element is simply multiplied by 100. As a result, and unlike traditional symmetry breaking methods, our method solves that pigeonhole problem in constant time with respect to the domain size (excluding the naturals). We evaluate this method by comparing the time needed to find solutions for generative configuration problems discussed in the literature. Our method has significantly better performance than the traditional one for problems whose solution can be substantially compressed.

Our paper is structured as follows: after introducing our notation, we analyze how symmetries can be used to compress structures onto a smaller domain without loss of information (Section 3), and describe how to lift a concrete sentence so that it is equisatisfiable with the lifted sentence (Section 4); we describe the method, evaluate it on generative configuration problems (Section 5), and discuss applications in Boolean Algebra of sets with Presburger Arithmetic (BAPA) (Section 6) before concluding with a discussion.

2 Preliminaries

This section introduces the logic language supported by our method, and the concept of permutation of a structure.

Typed First Order Logic With Aggregates: We call FO(Type, Aggregate) the language we support. We assume familiarity with first order logic (Enderton 1972). A vocabulary Σ is a set of type, predicate, and function symbols. Predicates and functions have a type signature (e.g., f : T̄ → T, where T̄ denotes a tuple of types).
Some symbols are pre-defined: the types B (booleans) and Q (rationals), equality, arithmetic operators, and arithmetic comparisons. Terms and formulae are constructed from symbols according to the usual FO syntactic rules. We also allow sum aggregates (written as Σ_{x̄∈T̄ : ϕ} t or as Σ_{x̄∈T̄} (t if ϕ)). A cardinality aggregate #{x̄ ∈ T̄ | ϕ} is a shorthand for a sum aggregate whose term t is 1. Quantification and aggregation can only be over finite types. Terms and formulas must be well-typed. A formula without free variables is called a sentence.

A Σ-structure I consists of a domain and an interpretation of each symbol of vocabulary Σ. The interpretations of types are disjoint and finite (except Q^I, which is infinite). The interpretation p^I of a predicate p is a set of tuples d̄ of domain elements of appropriate types. The interpretation f^I of a function f is a set of pairs (d̄, d), also denoted by d̄ ↦ d. An extended structure is a structure with a variable assignment, i.e., with a mapping from variables x̄ to values d̄, denoted [x̄ : d̄]. The value of a formula or term in an extended structure is computed according to the usual FO semantic rules, which require that the interpretations of function symbols be total functions over their domain. A Σ-structure is a model of a sentence if the value of the sentence is true in the structure, i.e., if it satisfies it. Satisfiability checking in typed FO logic is the problem of deciding whether a sentence has a model, given the interpretation of the types.

¹An iterative method is required because the size of the compressed domain is not known in advance.

Permutations and Orbits: A permutation of a set is a bijection from that set to itself. We denote a permutation by π, and its inverse by π⁻¹. The identity permutation, π₀, maps every element of the set to itself. The order of a permutation is the smallest positive number n such that πⁿ is the identity permutation. Hence, the nth permutation of an element is equal to its "nth modulo the order" permutation. Cycles are permutations that map elements in a cyclic fashion. A cycle is denoted by (d₁d₂···dₙ). A permutation over a finite set has a unique decomposition into disjoint cycles. Its order is the least common multiple (lcm) of the lengths of its cycles. The permutation of a tuple of elements d̄ is denoted by π(d̄) and is equal to (π(d₁), ..., π(dₙ)). The π-orbit of an element d (resp. of a tuple of elements d̄) is the set of its repeated permutations o_π(d) = {π^i(d) | i ∈ N} (resp. o_π(d̄) = {π^i(d̄) | i ∈ N}).

3 Lossless Compression of Structures

In this section, we discuss how, and when, a concrete structure with symmetries can be compressed into a lifted structure over a smaller domain without loss of information, and how a lifted structure can be expanded into a concrete one. Symmetries in a structure I are described by a domain permutation π (Devriendt et al. 2016): it is a permutation of the domain of I that maps numbers to themselves, and other domain elements in T^I to domain elements in T^I. Since numbers are mapped to themselves by the permutation, and since all types besides Q are finite, the permutation is composed of cycles. A domain permutation induces a structure transformation, defined next; the sketch below first illustrates cycle decomposition and orbits on the running pigeonhole example.
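The following minimal sketch (with illustrative names of my own choosing, not the authors' code) computes the cycle decomposition of a permutation, its order as the lcm of its cycle lengths, and the π-orbit of a tuple, exactly as defined in the preliminaries above.

```python
from math import lcm

def cycles(pi):
    """Decompose a permutation {element: image} into disjoint cycles."""
    seen, out = set(), []
    for start in pi:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = pi[x]
        out.append(tuple(cyc))
    return out

def order(pi):
    """Order of pi = lcm of its cycle lengths."""
    return lcm(*(len(c) for c in cycles(pi)))

def orbit(pi, tup):
    """pi-orbit of a tuple: {pi^i(tup) | i in N}."""
    orb, cur = set(), tuple(tup)
    while cur not in orb:
        orb.add(cur)
        cur = tuple(pi[x] for x in cur)
    return orb

# The pigeonhole automorphism (p1...p10)(h1...h5), written out:
pi = {f"p{i}": f"p{i % 10 + 1}" for i in range(1, 11)}
pi |= {f"h{i}": f"h{i % 5 + 1}" for i in range(1, 6)}
print(order(pi))                     # lcm(10, 5) = 10
print(len(orbit(pi, ("p1", "h1"))))  # 10 tuples, matching isIn in Eq. (3)
```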
The interpretation in the transformed structure, denoted π(I), is defined as follows:
• the type of domain element d in π(I) is its type in I;
• a tuple d̄′ is in the transformed interpretation of predicate symbol p if and only if π⁻¹(d̄′) is in the original interpretation of p: p^{π(I)} = {d̄′ | π⁻¹(d̄′) ∈ p^I} = {π(d̄) | d̄ ∈ p^I};
• a tuple (d̄′ ↦ d′) is in the transformed interpretation of function symbol f if and only if (π⁻¹(d̄′) ↦ π⁻¹(d′)) is in the original interpretation of f: f^{π(I)} = {(d̄′ ↦ d′) | (π⁻¹(d̄′) ↦ π⁻¹(d′)) ∈ f^I} = {(π(d̄) ↦ π(d)) | (d̄ ↦ d) ∈ f^I}.

Definition 1 (Automorphism) A permutation π that transforms a structure into itself, i.e., such that I = π(I), is an automorphism of the structure.

Every structure has at least one automorphism: the identity domain permutation. Note that an automorphism maps the interpretation of any constant to itself, i.e., c^I = π(c^I). Thus, the length of the cycle containing c^I is 1.

We introduce the new concept of backbone, which plays a critical role in the compression that we propose. Essentially, a backbone for an automorphism of I is a set of domain elements obtained by picking one element in each cycle, such that the interpretation of any symbol can be reconstructed from its interpretation restricted to the backbone, by applying the automorphism repeatedly. Formally:

Definition 2 (Backbone) A backbone for an automorphism π of I is a subset S of the domain of I such that:
• each cycle C of π has exactly one element in S;
• for each predicate p/n ∈ Σ, p^I is the union of the π-orbits of the tuples in p^I ∩ Sⁿ, i.e.,
p^I = ∪_{d̄ ∈ p^I ∩ Sⁿ} o_π(d̄) = {π^i(d̄) | d̄ ∈ p^I ∩ Sⁿ, i ∈ N}   (8)
• for each function f/n ∈ Σ, f^I is the union of the π-orbits of the tuples in f^I ∩ (Sⁿ × S), i.e.,
f^I = ∪_{(d̄↦d) ∈ f^I ∩ (Sⁿ×S)} o_π(d̄ ↦ d) = {π^i(d̄ ↦ d) | (d̄ ↦ d) ∈ f^I ∩ (Sⁿ × S), i ∈ N}   (9)

Example 1 For the pigeonhole example in the introduction of the paper, a backbone of automorphism (p1 ··· p10)(h1 ··· h5) is S = {p1, h1}. Another is S = {p2, h2}.

In all structures, the set of domain elements is a trivial backbone for the identity automorphism. However, not all automorphisms have a backbone.

Example 2 Let {a, b} be a (concrete) domain with one type T. In structure I₁ (resp. I₂), function symbol f : T → T is interpreted as {a ↦ a, b ↦ b} (resp. {a ↦ b, b ↦ a}). Permutation (ab) is an automorphism of both structures. The only two subsets S satisfying the first condition of backbone for (ab) are {a} and {b}. Since f^{I₁} can be reconstructed from f^{I₁} ∩ (S × S) for both subsets S, both subsets are backbones of I₁. However, neither is a backbone of I₂ (because f^{I₂} ∩ (S × S) = ∅ for both candidate sets S).

A backbone enables us to lift a structure into a structure with a smaller domain, as we now describe.

Definition 3 (Lifted vocabulary) For a vocabulary Σ, its lifted vocabulary Σ^l consists of the symbols of Σ and, for any type T of Σ, a symbol mul_T : T → N, called the multiplicity function for T. We will drop the subscript T in the function symbol mul_T when this is unambiguous.

Definition 4 (Lifted structure) Let I be a Σ-structure with an automorphism π having backbone S.
A lifted structure L derived from I is a Σ^l-structure such that:
• its domain is S, called the lifted domain;
• for each type predicate T, T^L = T^I ∩ S;
• for each predicate symbol p/n, p^L = p^I ∩ Sⁿ;
• for each function symbol f/n, f^L = f^I ∩ (Sⁿ × S);
• for each l ∈ S, mul^L(l) = |o_π(l)|, the size of the π-orbit of l in I.

Example 3 Continuing the pigeonhole problem, the lifted structure is described by Equations 4–7.

Given the lifted structure L derived from I and the automorphism π on the concrete domain of I, one can reconstruct I, i.e., one can expand the lifted interpretations of the type, predicate and function symbols, essentially by closing them under repeated application of π. Formally,

exp_π(T^L) = ∪_{l∈T^L} o_π(l) = {π^i(l) | l ∈ T^L, i ∈ N}   (10)
exp_π(p^L) = ∪_{l̄∈p^L} o_π(l̄) = {π^i(l̄) | l̄ ∈ p^L, i ∈ N}   (11)
exp_π(f^L) = ∪_{(l̄↦l)∈f^L} o_π(l̄ ↦ l) = {π^i(l̄ ↦ l) | (l̄ ↦ l) ∈ f^L, i ∈ N}   (12)

Example 4 Continuing the pigeonhole problem, the expansion of the lifted structure described by Equations 4–7 for automorphism (p1 ··· p10)(h1 ··· h5) is the structure described by Equations 1–3.

In our approach, we need to find lifted structures that can be expanded into concrete ones. We observe that there is a simple construction to expand any lifted Σ^l-structure L with lifted domain S that sometimes (but not always) results in a concrete structure I with an automorphism π having backbone S. This construction is as follows.

Definition 5 (Expansion of lifted domain) An expansion of the domain S = {l_i | i ∈ N} of the lifted Σ^l-structure L is a set D = {d_i^j | ∃i, j ∈ N : l_i ∈ S ∧ 1 ≤ j ≤ mul^L(l_i)} such that the d_i^j are distinct and ∀i : d_i^1 = l_i. We call d_i^1 a base element.

Notice that lifted domain elements with multiplicity zero have no image in D.² We define a permutation on the set D having the cycles (l_i, d_i^2, . . . , d_i^{mul^L(l_i)}):

Definition 6 (Permutation of the expanded domain) The permutation of an expansion D of a lifted domain is the function π : D → D such that π(d_i^j) = d_i^{j+1} for 1 ≤ j ≤ mul^L(l_i) − 1, and π(d_i^{mul^L(l_i)}) = d_i^1.

²Null multiplicities will prove useful for the iterative method in Section 5.

The expansion of a lifted tuple l̄ in a lifted structure is its π-orbit:

exp_π(l̄) = o_π(l̄) = {π^i(l̄) | i ∈ N}   (13)

if none of the expansions of its elements is empty, and is empty otherwise. The concrete interpretation of symbols is obtained by expanding the tuples in their lifted interpretations, as in Equations (10–12). It follows from the correspondence between Definition 2 and Equations (10–12) that, if π and I are constructed from L in this way, and if I is actually a structure, then S is a backbone of π in I. When L only has strictly positive multiplicities, a lifted structure derived from the expansion of L will be isomorphic to L: in some sense, the compression is lossless. However, I may not be a structure, because the interpretation of a function symbol in I might not be a total function on its domain, as the following example shows (a runnable check of its first case is sketched after it).

Example 5 Consider the lifted structure L with domain S = {a, b}, with types A^L = {a}, B^L = {b}, and, for f : A × A → B, f^L = {(a, a) ↦ b}. Finally, let mul(a) = mul(b) = 2. The expanded domain is {a1, a2, b1, b2} with permutation (a1a2)(b1b2). The expansion of f^L is {(a1, a1) ↦ b1, (a2, a2) ↦ b2}. There is no entry for (a1, a2), so the expansion is not a total function.
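The following minimal sketch, with names of my own choosing, implements the expansion of Definitions 5–6 and Equation (13) and reproduces the failure in the first half of Example 5: the expanded interpretation of f misses the tuple (a1, a2).

```python
from itertools import product

def expand_domain(mul):
    """Definition 5: l_i with multiplicity m expands to l_i^1..l_i^m."""
    return {l: [f"{l}{j}" for j in range(1, m + 1)] for l, m in mul.items()}

def perm(expansion):
    """Definition 6: one cycle per lifted element over its copies."""
    pi = {}
    for copies in expansion.values():
        for j, d in enumerate(copies):
            pi[d] = copies[(j + 1) % len(copies)]
    return pi

def expand_tuple(pi, tup):
    """Equation (13): the pi-orbit of a tuple of base elements."""
    orb, cur = set(), tuple(tup)
    while cur not in orb:
        orb.add(cur)
        cur = tuple(pi[x] for x in cur)
    return orb

# First half of Example 5: mul(a) = mul(b) = 2, f^L = {(a, a) -> b}.
mul = {"a": 2, "b": 2}
exp = expand_domain(mul)
pi = perm(exp)
f_concrete = expand_tuple(pi, ("a1", "a1", "b1"))  # encode (a, a) -> b as a triple
domain_of_f = set(product(exp["a"], exp["a"]))
covered = {t[:2] for t in f_concrete}
print(sorted(domain_of_f - covered))  # [('a1', 'a2'), ('a2', 'a1')]: not total
```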
Consider now the lifted structure L with domain S = {a, b}, with types A^L = {a}, B^L = {b}, and, for f : A → B, f^L = {a ↦ b}. Finally, mul(a) = 1, mul(b) = 2. The expanded domain is {a1, b1, b2} with permutation (a1)(b1b2). The expansion of f^L is {a1 ↦ b1, a1 ↦ b2}. This expansion gives two different values for a1, so it is not a function.

To distinguish lifted structures that can be expanded into concrete ones from those that cannot, we introduce the notion of regularity.

Definition 7 (Regular function, regular lifted structure) The lifted interpretation of a function symbol is regular if its expansion defines a total function in the concrete domain, i.e., if it specifies exactly one image for every tuple in the concrete domain. A regular lifted structure is a lifted structure in which the interpretation of each function symbol is regular.

Now, we define the expansion of a regular lifted structure:

Definition 8 (Expansion of a regular lifted structure) Let L be a regular lifted structure over a lifted vocabulary. Then I, the expansion of L, is the structure over the concrete vocabulary defined as follows:
• its domain is the expansion of the lifted domain, having permutation π derived from L;
• for each type symbol T, T^I = exp_π(T^L);
• for each predicate symbol p, p^I = exp_π(p^L);
• for each function symbol f, f^I = exp_π(f^L).

The expansion of a regular lifted structure does not involve any search. The time needed for this expansion is generally negligible (e.g., less than 0.1 sec for 10,000 concrete domain elements).

To further characterize regular functions, we introduce the concept of regular tuples. We use Mul(l₁, . . . , lₙ) (resp. Lcm(l₁, . . . , lₙ)) as a shorthand for the product of the multiplicities of the l_i (resp. their least common multiple, lcm). First, we observe that the size of the expansion (Equation 13) of a tuple l̄ is finite: it is the order of the permutation defined by the cycles of its elements, i.e., Lcm(l₁, . . . , lₙ).³ Also, the expansion has at most Mul(l₁, . . . , lₙ) tuples; it is then the cross-product of the expansions of its elements. When those two numbers are identical, we say that the tuple is regular.

Definition 9 (Regular lifted tuple) A lifted tuple is regular if and only if its expansion is the cross-product of the expansions of its elements.

Example 6 Let (a, b) be a tuple of two lifted domain elements with mul(a) = 2 and mul(b) = 4. Its expansion is {(a1, b1), (a2, b2), (a1, b3), (a2, b4)}, of size Lcm(2, 4) = 4. Note that, e.g., the tuple (a1, b2) does not belong to the expansion: thus, (a, b) is not regular. It is regular when mul(b) = 0 (the expansions of b and of (a, b) are empty), or when, e.g., mul(b) = 3 (the expansion is {(a1, b1), (a2, b2), (a1, b3), (a2, b1), (a1, b2), (a2, b3)}, of size Lcm(2, 3) = 6).

Nullary and unary tuples are always regular. An n-ary tuple is regular when one of its elements has multiplicity zero, or when every pair of its elements has multiplicities that are coprime.

Proposition 1 (Regular function) A function f^L is regular if, for all tuples l̄ in the domain of f^L, it holds that (i) l̄ is regular, and (ii) the multiplicity of l̄ is a multiple of the multiplicity of its image.

Proof 1 First, we show that the expansion of f^L gives at least one value for every tuple d̄ in the concrete domain of f. Each element d_i of d̄ is in the expansion of a lifted domain element l_i ∈ T_i^L.
Tuple l̄ = (l₁, ···, lₙ) ∈ T̄^L is in the lifted domain of f^L, as f^L is total; it is regular by (i), hence d̄ is in its expansion, and f^I(d̄) is in exp_π(f^L(l̄)).

Next, we show that the expansion gives at most one value for every tuple d̄ in the concrete domain of f. Let d̄ be in the expansion of l̄, with Lcm(l̄) = m = Mul(l̄) by (i). We thus have 0 < m. The expansion of f^L contains the pairs π^i(l̄ ↦ f^L(l̄)) and π^{i+n×m}(l̄ ↦ f^L(l̄)), for any l̄ in the domain of f^L, and for any i and n. The first elements of these two pairs are identical by definition of m; the second elements are identical by (ii).

4 Translation Into a Lifted Sentence

We now present a translation of a sentence in FO(Type, Aggregate) into a sentence that allows domain compression, such that the translation is equisatisfiable with the original. The translation of a sentence ϕ is the conjunction of (i) the transformed sentence χ(ϕ) and (ii) sentences expressing regularity conditions. The transformation χ(e) of an expression e is defined recursively in Table 1. The left column shows the possible syntactical forms in the concrete sentence; the middle column shows the transformation; the third column shows the regularity constraints added to the translation. The bottom part of the table shows the regularity constraints added for each function symbol in the vocabulary to ensure that the function interpretations are regular (Proposition 1).

³Taking the convention that the lcm of a tuple of numbers containing 0 is 0.

Note that sum aggregates and quantified formulas are transformed by specialized rules (Rules 18, 26, 27) when possible (see the table footnote), and by general rules (Rules 19, 28, 29) otherwise. Generally, it is beneficial to apply equivalence-preserving transformations to sentences to obtain sentences of the form allowing application of the specialized rules. The specialized rules do not require the translation of p(t̄, s) by Rule 21, thus avoiding the regularity constraint RC(p(χ(t̄, s))). E.g., for the atom t = s, this regularity constraint would enforce mul(t) = mul(s) = 1 for each tuple (t, s) in the lifted equality relation, significantly reducing the possibility of compression.

The transformation of a sentence consists of adding 0 < Mul(x̄) filters (to cope with lifted domain elements with an empty expansion) and of multiplying the term t in an aggregate by a decompression factor: in Rule 19, it is the number Mul(x̄) of possible concrete variable assignments; in Rule 18, it is that number multiplied by the fraction of concrete assignments that make p(t̄, s) true. The regularity condition added for an atom (Rule 21) ensures the translated atom is equisatisfiable with the original atom.

Example 7 The sentence "at most 2 pigeons in each hole" is ∀h ∈ Hole : #{p ∈ Pigeon | isIn(p, h)} ≤ 2. Its transformation by Rules 18 (with ϕ = true) and 28 is:⁴

∀h ∈ Hole : 0 < mul(h) ⇒ Σ_{p∈Pigeon} (Lcm(p, h)/mul(h) if 0 < mul(p) ∧ isIn(p, h)) ≤ 2.

Theorem 1 (Equisatisfiability) An FO(Type, Aggregate) sentence is equisatisfiable with its translation, given the number of concrete elements in each type.

If I is a model of the sentence, then the L constructed by extending I with all multiplicities set to one is a model of the translated sentence: indeed, the added constraints are trivially satisfied, and the translated sentence is then equivalent to the original one. Proving the converse is long and complex.
It is proved by structural induction of two invariants over the syntactic tree of the sentence to be satisfied: (i) a transformed formula χ(ϕ) is true in L under some variable assignment [x̄ : l̄ₓ] if and only if ϕ is true in I under any variable assignment [x̄ : d̄ₓ] such that each d_{x_i} is in the expansion of l_{x_i}; (ii) similarly, if a transformed term χ(t) has value l in the lifted structure L under some variable assignment [x̄ : l̄ₓ], then the expansion of the value l contains the value of the term t in I (the expansion of L), for any variable assignment [x̄ : d̄ₓ] such that each d_{x_i} is in the expansion of l_{x_i}. This property holds only when the regularity constraints given in Table 1 hold in the lifted structure, which explains why these constraints are added to the transformed sentence. The proof is in the supplementary material (Carbonnelle et al. 2023).

⁴Recall that a cardinality aggregate is a shorthand for a sum aggregate whose term is 1.

5 Evaluation of the Method

Implementation: The goal of the evaluation is to show that there are satisfiability problems where substantial compression of the domain is possible, and that the lifted models can indeed be expanded into concrete ones. A problem is solved iteratively, starting with empty lifted domains. Given a domain, the lifted sentence is reduced to a propositional sentence, and its satisfiability is determined with a standard satisfiability solver capable of arithmetic reasoning. If the sentence is unsatisfiable with this domain, the sentence is reduced to a minimal unsatisfiable formula (Lynce and Marques-Silva 2004), and the domains of the types used in that formula are extended with one element. This process is repeated until a model of the sentence is found (it does not terminate if the original sentence is unsatisfiable for any domain size, unless one imposes an upper limit on the size of the lifted domains).

In many experiments, it was sufficient to support the special Rules 18, 26–27 for the case where the atom p is of the form t₁ = s (e.g., holeOf(p) = h). Then, the transformation of a sum aggregate per Rule 18 simplifies to:

(1/mul(χ(s))) Σ_{x̄∈T̄} (Mul(x̄) × χ(t) if 0 < Mul(x̄) ∧ χ(t₁) = χ(s) ∧ χ(ϕ))   (32)

This formula does not use the lcm function, which is not supported natively by solvers. In other experiments, we used an interpretation table for lcm.

Problems are expressed in FO(·),⁵ a Knowledge Representation language with support for types, subtypes, and aggregates.⁶ An FO(·) (lifted) sentence is translated by IDP-Z3 (Carbonnelle et al. 2022) for use with the Z3 SMT solver (de Moura and Bjørner 2008). The overhead of this translation is negligible. The source code of our examples is available on GitLab.⁷ Tests were run using Z3 v4.12.1 on an Intel Core i7-8850H CPU at 2.60GHz with 12 cores, running Ubuntu 22.04 with 16 GB RAM. We run a modified version of IDP-Z3 v0.10.8 on Python 3.10.

⁵https://fo-dot.readthedocs.io/
⁶Subtypes are subsets of types, and are declared as unary predicates in FO(·). Subtypes can be used where types are used: in the type signature of a symbol and in quantification.
⁷https://gitlab.com/pierre.carbonnelle/idp-z3-generative

Pigeonhole problem: To validate the approach, we first consider the satisfiability problem of assigning each pigeon to one pigeonhole, such that each pigeonhole holds (at most) 2 pigeons. The function holeOf : Pigeon → Hole is used to represent this relation. When there are twice as many pigeons as holes, the lifted solution has 1 lifted pigeon and 1 lifted hole, as shown in the introduction of the paper. As expected, the time needed to solve the lifted problem is almost constant when it is satisfiable. The correct multiplicities are found quickly by Z3 using a sub-solver specialized for arithmetic.
For example, with 10,000 pigeons, the lifted sentence is solved in only 0.05 sec, and the expansion of the lifted model into a concrete one takes 0.1 sec. With the same solver and the original sentence, the solution time increases exponentially (4 sec to solve the problem for 30 pigeons). We are aware that symmetry breaking can reduce the complexity but, to the best of our knowledge, solving time is at least linear in the number of pigeons for the best symmetry-breaking solvers.

We also validated our method on pigeonhole problems where the relation between pigeons and holes is represented by a binary predicate. The translated sentence uses an interpretation table for lcm. Experiments confirm equisatisfiability, but the complexity is quadratic because of the lcm table. Finding a more efficient implementation has been left for future work.

Terms:
(14) x → x
(15) c() → c()
(16) f(t̄) → f(χ(t̄))
(17) t₁ ⊕ t₂, with ⊕ ∈ {+, −, ×, ÷} → χ(t₁) ⊕ χ(t₂)
(18) Σ_{x̄∈T̄ : p(t̄,s) ∧ ϕ} t (*) → Σ_{x̄∈T̄} ((Mul(x̄) Lcm(χ(t̄), χ(s))) / (Lcm(χ(t̄)) mul(χ(s))) × χ(t) if 0 < Mul(x̄) ∧ p(χ(t̄), χ(s)) ∧ χ(ϕ))
(19) Σ_{x̄∈T̄ : ϕ} t → Σ_{x̄∈T̄} (Mul(x̄) × χ(t) if 0 < Mul(x̄) ∧ χ(ϕ))

Formulas:
(20) p() → p()
(21) p(t̄) → p(χ(t̄)), with regularity condition RC(p(χ(t̄))) (**)
(22) t₁ ∼ t₂, with ∼ ∈ {<, >, ≤, ≥} → χ(t₁) ∼ χ(t₂)
(23) ϕ₁ ⊙ ϕ₂, with ⊙ ∈ {∧, ∨, ⇒, ⇔} → χ(ϕ₁) ⊙ χ(ϕ₂)
(24) ¬ϕ → ¬χ(ϕ)
(25) T := {t₁, . . . , tₙ} → Σ_{x∈T} mul(x) = n
(26) ∀x̄ ∈ T̄ : p(t̄, s) ⇒ ϕ (*) → ∀x̄ ∈ T̄ : 0 < Mul(x̄) ∧ p(χ(t̄), χ(s)) ⇒ χ(ϕ)
(27) ∃x̄ ∈ T̄ : p(t̄, s) ∧ ϕ (*) → ∃x̄ ∈ T̄ : 0 < Mul(x̄) ∧ p(χ(t̄), χ(s)) ∧ χ(ϕ)
(28) ∀x̄ ∈ T̄ : ϕ → ∀x̄ ∈ T̄ : 0 < Mul(x̄) ⇒ χ(ϕ)
(29) ∃x̄ ∈ T̄ : ϕ → ∃x̄ ∈ T̄ : 0 < Mul(x̄) ∧ χ(ϕ)

For each function f : T̄ → T, the regularity constraints are:
(30) ∀x̄ ∈ T̄ : Mul(x̄) = Lcm(x̄)
(31) ∀x̄ ∈ T̄ : ∃n ∈ N : Mul(x̄) = n × mul(f(x̄))

(*) Rules (18, 26, 27) are applied only when vars(t̄) ⊆ {x̄} and vars(s) ∩ {x̄} = ∅; Rules (19, 28, 29) are applied otherwise.
(**) RC(p(χ(t̄))) is defined as ∀x̄ ∈ T̄_x̄ : p(χ(t̄)) ⇒ Mul(x̄) = Lcm(x̄) ∨ Mul(χ(t̄)) = Lcm(χ(t̄)), where x̄ = vars(p(χ(t̄))).

Table 1: Terms, formulas, and their translation.

Generative configuration problems: Generative configuration problems (GCP) are configuration problems in which the number of some components has to be found: the number of elements in some types is not known in advance. An iterative method is thus always required to find them. When the compressed domain is smaller than the concrete domain, the number of iterations needed to solve the lifted sentence is smaller than the number of iterations for the original sentence, leading to better performance. Hence, GCP is a good application domain for our method. We evaluate our solving method on three representative GCP discussed in the literature:
• the House Configuration and Reconfiguration problem (Friedrich et al. 2011),
• the Organized Monkey Village (Reger, Suda, and Voronkov 2016),
• the Rack problem (Feinerer, Salzer, and Sisel 2011; Comploi-Taupe, Francescutto, and Schenner 2022).
These problems are expressed using only an equality predicate and unary symbols, and most regularity constraints introduced by our method are trivially satisfied. The top half of Table 2
shows results with Z3 (with automatic translation of the sentence), the bottom half with the OR-tools solver⁸ (with manual translation). A more detailed table is provided in the supplementary material (Carbonnelle et al. 2023). Solutions to problems with higher suffixes have more components than similar problems with lower ones. The table shows near-constant-time performance on the Rack ABCD problems. The lifted methods solve each of the 20 occurrences of the ABCD Racks problem in (Comploi-Taupe, Francescutto, and Schenner 2022) in less than 5 seconds (instead of 6 minutes on average in that paper, using an ASP solver).

⁸https://google.github.io/or-tools/, used via CPMpy (Guns 2019).

                 Time (sec)            # elements
Instance         Lifted    Orig.       Lifted   Orig.
HCP 1            0.2       0.2         6        10
HCP 3            0.22      1.5         6        27
HRP 1            0.27      0.23        7        10
HRP 3            0.30      6.41        8        25
Monkey 1         0.28      184.66      6        20
Monkey 4         0.27      T           6        52
Rack 2011        0.24      8.82        5        23
Rack A5          0.46      0.86        5        19
Rack A10         0.41      37.52       5        38
Rack A1000       0.54      T           4        2250
Rack ABCD4       1.55      4.37        13       20
Rack ABCD8       2.69      T           13       40
Rack ABCD20      3.31      T           13       100
Rack A1          0.61      0.75        5        7
Rack A2          0.64      1.54        5        9
Rack A3          0.62      3.69        5        11
Rack A4          0.65      36.9        5        13
Rack A5          0.65      T           5        19

Table 2: Wall clock time in seconds to solve configuration problems, and number (#) of used domain elements in models. Lifted: the lifted sentence; Orig.: the original sentence; T: timeout after 200 sec.

These results show that our method has significant merits for solving problems with symmetries and a preponderance of unary symbols.

6 Boolean Algebra of Sets with Presburger Arithmetic

Lifted domain elements represent disjoint sets of concrete domain elements, so a model search in the lifted domain can be seen as a model search involving sets. Hence, our work is highly related to Boolean Algebra of sets with Presburger Arithmetic (BAPA), a logic that can express constraints on the cardinality of sets, of their unions, and of their intersections (Kuncak and Rinard 2007; Suter, Steiger, and Kuncak 2011; Bansal et al. 2016). Some problems from the verification of properties of software operating on complex data structures contain fragments that belong to BAPA. A sample BAPA statement is |A| > 1 ∧ A ⊆ B ∧ |B ∩ C| ≤ 2, where A, B, C are sets and |A| is the cardinality of A. The equivalent expression in FO(Type, Aggregate) is (#{d : A(d)} > 1) ∧ (∀d : A(d) ⇒ B(d)) ∧ (#{d : B(d) ∧ C(d)} ≤ 2), where A, B, C are now unary predicates over a (unique) type whose interpretation is to be found. This expression can be lifted and solved using our approach (see the "theories/BAPA" folder in our repository⁷). In general, any BAPA sentence can be converted to a concrete FO(Type, Aggregate) sentence that only uses unary predicates and that can be lifted without regularity constraints. Hence, our approach offers a simple way to solve BAPA problems using any solver capable of reasoning over the rationals. The performance advantage should be evaluated in future work.

On the other hand, the conversion of a concrete FO(Type, Aggregate) sentence to BAPA logic is challenging because FO(Type, Aggregate) is more expressive: it allows n-ary relations, functions, sum aggregates, and products of cardinalities. Thus, a BAPA solver could not be used to solve, e.g., generative configuration problems. Still, extensions of BAPA solvers to handle finite n-ary relations have been implemented in CVC4 (Meng et al. 2017).
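To make the cardinality reasoning concrete, here is a minimal sketch of my own (not the authors' implementation; it assumes the z3-solver Python package) that checks the sample BAPA statement above with one non-negative integer per disjoint region of A, B, C, the representation discussed next.

```python
from itertools import product
from z3 import Int, Solver, Sum, sat

# One integer per region of A, B, C: n[(a, b, c)] is the size of the
# region whose membership pattern in A, B, C is (a, b, c).
n = {m: Int("n_%d%d%d" % m) for m in product((0, 1), repeat=3)}

def card(pred):
    """|{x : pred(membership pattern of x)}|, summed over regions."""
    return Sum([v for m, v in n.items() if pred(m)])

s = Solver()
s.add([v >= 0 for v in n.values()])
s.add(card(lambda m: m[0]) > 1)                # |A| > 1
s.add(card(lambda m: m[0] and not m[1]) == 0)  # A subset of B
s.add(card(lambda m: m[1] and m[2]) <= 2)      # |B intersect C| <= 2
assert s.check() == sat
print(s.model())  # e.g. two elements in the A-and-B region, none elsewhere
```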
A simple approach to represent structures in BAPA is to use disjoint subsets of the concrete domain, called Venn regions, so that the cardinality of any set of interest is the sum of the cardinalities of its Venn regions. Unfortunately, the number of Venn regions grows exponentially with the number of sets of interest. Hence, various methods have been developed to reduce this growth, e.g., by creating new Venn regions lazily when required (Bansal et al. 2016). Venn regions are similar to our lifted domain elements, and the iterative method that creates new Venn regions is similar, to some extent, to our iterative method that creates new lifted elements when required. Our method cannot prove the unsatisfiability of a sentence; by contrast, efficient methods have been proposed to prove the unsatisfiability of BAPA sentences (Suter, Steiger, and Kuncak 2011).

7 Discussion

Unlike the traditional approach of adding symmetry breaking conditions to a formula to accelerate satisfiability checking, we automatically translate the formula to a form with fewer symmetries. Our results demonstrate the benefits of this approach for problems with symmetries and a preponderance of unary symbols, and justify further research in automatic translation for symmetry reduction.

Much work remains to be done in evaluating the method and determining problem areas where it is useful. Moreover, the relationship with BAPA logic is worth further exploration because, unlike our method, BAPA solvers can identify unsatisfiable sentences, whereas our method does not terminate for such problems. Efficient implementation of the lcm function is another area of research. Also worth exploring is the relevance for related areas such as model counting.

Ideally, any compression of a model of a sentence (Section 3) should be a model of the lifted sentence. It is unlikely that this is achievable, as it requires a translation that avoids all regularity constraints apart from those for functions. This requires being able to predict the fraction of concrete variable assignments (in the expansion of a lifted assignment) that make a concrete formula true, given that the translated formula is true with the lifted assignment. In particular, filter formulas (in an aggregate) having both free and quantified variables are problematic. Still, many refinements of the translation in Table 1 are feasible. We have already worked out some refinements, but they cannot be presented within the current space constraints; they will be presented in the PhD thesis of one of the authors (Carbonnelle 2024).

Acknowledgments

This research received funding from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme.

References

Bansal, K.; Reynolds, A.; Barrett, C.; and Tinelli, C. 2016. A new decision procedure for finite sets and cardinality constraints in SMT. In International Joint Conference on Automated Reasoning, 82–98. Springer.
Carbonnelle, P. 2024. Standard, Interactive and Lifted Model Expansion using FO(·). Ph.D. thesis, KU Leuven, Belgium.
Carbonnelle, P.; Schenner, G.; Bruynooghe, M.; Bogaerts, B.; and Denecker, M. 2023. Using Symmetries to Lift Satisfiability Checking. arXiv:2311.03424.
Carbonnelle, P.; Vandevelde, S.; Vennekens, J.; and Denecker, M. 2022. IDP-Z3: a reasoning engine for FO(.). CoRR, abs/2202.00343.
Comploi-Taupe, R.; Francescutto, G.; and Schenner, G. 2022.
Applying incremental answer set solving to product configuration. In A. F. et al., ed., SPLC '22: 26th ACM Int'l Systems and Software Product Line Conference, 150–155. ACM.
Crawford, J. M.; Ginsberg, M. L.; Luks, E. M.; and Roy, A. 1996. Symmetry-Breaking Predicates for Search Problems. In Aiello, L. C.; Doyle, J.; and Shapiro, S. C., eds., Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning (KR'96), Cambridge, Massachusetts, USA, November 5-8, 1996, 148–159. Morgan Kaufmann.
de Moura, L.; and Bjørner, N. 2008. Z3: An Efficient SMT Solver. In C. R. R. et al., ed., Tools and Algorithms for the Construction and Analysis of Systems, 14th Int'l Conference, volume 4963 of Lecture Notes in Computer Science, 337–340. Springer.
Devriendt, J.; Bogaerts, B.; Bruynooghe, M.; and Denecker, M. 2016. On local domain symmetry for model expansion. Theory Pract. Log. Program., 16(5-6): 636–652.
Enderton, H. B. 1972. A mathematical introduction to logic. Academic Press. ISBN 978-0-12-238450-9.
Feinerer, I.; Salzer, G.; and Sisel, T. 2011. Reducing Multiplicities in Class Diagrams. In J. W. et al., ed., MODELS 2011, volume 6981 of Lecture Notes in Computer Science, 379–393. Springer.
Felfernig, A.; Hotz, L.; Bagley, C.; and Tiihonen, J. 2014. Knowledge-based configuration: From research to business cases. Newnes.
Friedrich, G.; Ryabokon, A.; Falkner, A. A.; Haselböck, A.; Schenner, G.; and Schreiner, H. 2011. (Re)configuration using Answer Set Programming. In K. M. S. et al., ed., Proceedings of the IJCAI 2011 Workshop on Configuration, volume 755 of CEUR Workshop Proceedings. CEUR-WS.org.
Gent, I. P.; Petrie, K. E.; and Puget, J.-F. 2006. Symmetry in constraint programming. Foundations of Artificial Intelligence, 2: 329–376.
Guns, T. 2019. Increasing modeling language convenience with a universal n-dimensional array, CPpy as python-embedded example. In Proceedings of the 18th workshop on Constraint Modelling and Reformulation at CP (ModRef 2019), volume 19.
Kuncak, V.; and Rinard, M. C. 2007. Towards Efficient Satisfiability Checking for Boolean Algebra with Presburger Arithmetic. In Pfenning, F., ed., CADE-21, 2007, volume 4603 of Lecture Notes in Computer Science, 215–230. Springer.
Lynce, I.; and Marques-Silva, J. 2004. On Computing Minimum Unsatisfiable Cores. In SAT 2004 - The Seventh International Conference on Theory and Applications of Satisfiability Testing, 10-13 May 2004, Vancouver, BC, Canada, Online Proceedings.
Meng, B.; Reynolds, A.; Tinelli, C.; and Barrett, C. W. 2017. Relational Constraint Solving in SMT. In de Moura, L., ed., CADE 26, 2017, volume 10395 of Lecture Notes in Computer Science, 148–165. Springer.
Reger, G.; Suda, M.; and Voronkov, A. 2016. Finding Finite Models in Multi-sorted First-Order Logic. In N. C. et al., ed., Theory and Applications of Satisfiability Testing - SAT 2016, volume 9710 of Lecture Notes in Computer Science, 323–341. Springer.
Schenner, G.; and Taupe, R. 2017. Techniques for solving large-scale product configuration problems with ASP. In Proceedings of the 19th International Configuration Workshop, 12–19.
Suter, P.; Steiger, R.; and Kuncak, V. 2011. Sets with Cardinality Constraints in Satisfiability Modulo Theories. In R. J. et al., ed., VMCAI 2011, volume 6538 of Lecture Notes in Computer Science, 403–418. Springer.
2024
885
18,723
Robust Beamforming for Downlink Multi-Cell Systems: A Bilevel Optimization Perspective Xingdi Chen1, Yu Xiong1, Kai Yang1,2,3* 1Department of Computer Science and Technology, Tongji University, China 2Key Laboratory of Embedded System and Service Computing Ministry of Education at Tongji University 3Shanghai Research Institute for Intelligent Autonomous Systems [email protected], [email protected], [email protected] Abstract Utilization of inter-base station cooperation for information processing has shown great potential in enhancing the overall quality of communication services (QoS) in wireless communication networks. Nevertheless, such cooperations require the knowledge of channel state information (CSI) at base stations (BSs), which is assumed to be perfectly known. However, CSI errors are inevitable in practice which necessitates beamforming techniques that can achieve robust performance in the presence of channel estimation errors. Existing approaches relax the robust beamforming design problems into semidefinite programming (SDP), which can only achieve a solution that is far from being optimal. To this end, this paper views robust beamforming design problems from a bilevel optimization perspective. In particular, we focus on maximizing the worst-case weighted sum-rate (WSR) in the downlink multi-cell multi-user multiple-input single-output (MISO) system considering bounded CSI errors. We first reformulate this problem into a bilevel optimization problem and then develop an efficient algorithm based on the cutting plane method. A distributed optimization algorithm has also been developed to facilitate the parallel processing in practical settings. Numerical results are provided to confirm the effectiveness of the proposed algorithm in terms of performance and complexity, particularly in the presence of CSI uncertainties. Introduction In multi-cell multiuser wireless communication networks, users, especially cell-edge users, may have low-rate data service as a consequence of suffering from both intra-cell interference and inter-cell interference. Beamforming, one of the most promising multi-antenna techniques, harnesses the spatial dimension to effectively alleviate interference in downlink transmissions. This technique is extensively employed in the design of mobile communication systems. However, such precoding technique is contingent upon hardware complexity, often rendering it impractical for mobile terminals characterized by limited computational power and storage capacity. Consequently, multiple transmit antennas are only deployed at the base station (BS) where the issue of computing power is less problematic, while mobile units *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. are equipped with a small number of antennas. This configuration is commonly referred to as a multiple-input and multiple-output (MIMO) system. Beamforming techniques inherently necessitate channel state information (CSI) at each BS to enable precoded transmissions. Various quality of communication services (QoS) metrics are used in wireless networks, including maximum allowable mean square errors, minimum tolerated signal-tointerference-plus-noise ratios, and weighted sum rates. One widely used QoS metric is the weighted sum-rate (WSR), making it a fundamental and extensively studied problem to design linear beamformers that maximize WSR under total power constraints. 
Since the WSR maximization problem is nonconvex and NP-hard even in the single-antenna case (Luo and Zhang 2008), it is challenging to achieve the globally optimal solution, and suboptimal solutions are of great interest. Assuming perfect CSI, (Björnson, Bengtsson, and Ottersten 2014) points out that the optimal beamforming vectors for the WSR maximization problem in single-cell downlink transmission have a simple structure. Several algorithms have been proposed for single-cell downlink transmission (Liu, Zhang, and Chua 2012; Joshi et al. 2012). However, extending the WSR maximization problem to multi-cell downlink transmission poses greater challenges. An iterative beamformer design based on iterative second-order cone programming (SOCP) approximation is introduced in (Tran et al. 2012). Additionally, there are also several distributed methods for multi-cell systems (Choi et al. 2012; Weeraddana et al. 2013; Shi et al. 2011; Bogale and Vandendorpe 2012). Specifically, (Choi et al. 2012) presents a fully distributed beamforming technique relying on the high signal-to-interference-plus-noise ratio (SINR) assumption; this technique exclusively utilizes local CSI without requiring additional information exchange. (Weeraddana et al. 2013) splits the nonconvex problem into a master problem, addressed by a novel sequential convex approximation, and multiple subproblems that can be solved by the BSs in a fully asynchronous manner through the primal decomposition technique. Both methods in (Choi et al. 2012) and (Weeraddana et al. 2013) are restricted to multiple-input single-output (MISO) systems. The algorithms proposed in (Shi et al. 2011) and (Bogale and Vandendorpe 2012) are both based on iterative minimization of the mean-square error (MMSE) and can tackle the WSR maximization problem in MIMO environments.

Regrettably, perfect CSI at transmitters is unattainable due to estimation and quantization errors. The limited number of feedback bits available for feeding back CSI gives rise to quantization errors, which are the dominant source of the uncertainty in CSI (Tajer, Prasad, and Wang 2011). Additionally, the provision of up-to-date CSI at transmitters also remains questionable. Therefore, the consideration of imperfect CSI, known to be detrimental to the performance of methods assuming perfect CSI, becomes crucial. This practical constraint has led to the emergence of robust beamforming techniques, which aim to guarantee the worst-case performance of the network under CSI imperfections. Two main approaches exist for modeling the uncertainty region of CSI errors. The first approach employs a probabilistic model, treating errors as random variables with some known distribution (Weber, Sklavos, and Meurer 2006; Rong, Vorobyov, and Gershman 2006; Zhang, Palomar, and Ottersten 2008; Shenouda and Davidson 2008; Joudeh and Clerckx 2016). Most of these works aim at optimizing a utility function averaged over the entire uncertainty region. In this paper, we adhere to the second approach, wherein the CSI perturbations are confined within given bounded uncertainty sets (Vucic and Boche 2009; Tajer, Prasad, and Wang 2011; Shen et al. 2012; Shaverdian and Nakhai 2014; Zhou et al. 2020). Making no assumption on the distribution of CSI errors, this approach matches well with quantization errors.
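As a small illustration of the bounded-error model adopted here, the sketch below perturbs a channel estimate and projects the perturbation onto a Euclidean ball of radius ε, so that the error always lies in the assumed uncertainty set; the dimensions and the value of ε are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_ball(delta, eps):
    """Project an error vector onto the set {x : ||x||_2 <= eps}."""
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

N, eps = 4, 0.1  # illustrative antenna count and uncertainty radius
h_est = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
delta = 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
h_true = h_est + project_to_ball(delta, eps)  # a channel consistent with the model
print(np.linalg.norm(h_true - h_est) <= eps + 1e-12)  # True
```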
Moreover, this method also works for unbounded errors as long as the system outage probability is controlled. Incorporating CSI imperfections leads to the formulation of robust optimization problems. (Vucic and Boche 2009) minimizes the transmit power under a predetermined set of QoS constraints for the users. In the presence of bounded CSI errors, there are infinitely many constraints, since the QoS requirements must be supported for the infinite number of possible channels within the uncertainty regions. To address this issue, the authors employ semidefinite programming (SDP) along with a lemma that converts the infinite set of constraints into a finite set of constraints, making the problem computationally tractable. Both (Shen et al. 2012) and (Shaverdian and Nakhai 2014) adopt techniques like semidefinite relaxation (SDR) and the S-Lemma to reformulate the original optimization problem into a numerically tractable one; they then propose distributed algorithms based on ADMM and primal decomposition approaches, respectively. For the problem of maximizing the worst-case WSR of the network, (Tajer, Prasad, and Wang 2011) provides a lower-bound solution by introducing an additional function and then transforming the problem into a weighted sum of worst-case mean-square errors minimization problem that can be solved by SDP. However, as the system scales up, for example, with an increase in the number of cells and the number of transmit antennas at the BSs, these algorithms become impractical due to the time-consuming nature of SDP.

Bilevel optimization dates back to the literature (Von Stackelberg 1934). Recently, bilevel optimization has gained significant attention and is widely applied in various machine learning applications, including wireless communication (Sun et al. 2022), hyperparameter optimization (Liu et al. 2021; Franceschi et al. 2018), meta learning (Ji et al. 2020; Ji, Yang, and Liang 2021), and neural architecture search (Liu, Simonyan, and Yang 2018; Xue et al. 2021). Bilevel optimization is an optimization problem in which a subset of variables is constrained to be optimal for another given optimization problem. Mathematically, a general bilevel optimization problem takes the following form:

min_x F(x, y)
s.t. G(x, y) ≤ 0,
     y ∈ argmin_{y′∈Y} {f(x, y′) | g(x, y′) ≤ 0},   (1)

where F and f denote the upper-level and lower-level objective functions, respectively, x ∈ R^n is the upper-level variable, and y ∈ R^m is the lower-level variable. We refer to G and g as the upper-level constraint and the lower-level constraint, respectively. In most existing bilevel optimization works on machine learning tasks, neither the upper-level constraint nor the lower-level constraint is considered, owing to the characteristics of those tasks (Liu et al. 2021; Franceschi et al. 2018; Ji et al. 2020; Ji, Yang, and Liang 2021; Liu, Simonyan, and Yang 2018; Xue et al. 2021). To the best of our knowledge, bilevel optimization has not been applied to robust beamforming designs. This suggests a promising avenue for exploration in adapting bilevel optimization methods to address robust beamforming problems.

In this paper, we consider multi-cell multiuser MISO wireless networks (Tajer, Prasad, and Wang 2011). The main problem of interest is beamforming optimization with the goal of maximizing the WSR under per-BS power constraints in the presence of CSI imperfections. To ensure the worst-case performance, we assume CSI errors are bounded and resort to robust optimization (Yang et al. 2008, 2014).
For obtaining such beamformers, we begin by transforming the original robust optimization problem into a bilevel optimization problem, encompassing both upper-level and lower-level constraints. Secondly, we develop a BiLevel based Robust BeamForming (BLRBF) algorithm similar to the centralized method proposed in Appendix A of the reference (Jiao et al. 2023). To be specific, we treat the lower-level optimization problem as a constraint to the upper-level optimization problem and utilize cutting planes to approximate this constraint. Subsequently, inspired by the work (Bürger, Notarstefano, and Allgöwer 2014), we extend the BLRBF method to an asynchronous distributed implementation in order to approximate the feasible region faster. This distributed algorithm is referred to as BiLevel based Asynchronous Distributed Robust BeamForming (BLADRBF). Notably, our algorithm can be readily extended to MIMO systems. We prove that both BLRBF and BLADRBF are guaranteed to converge.

Contributions. Our main contributions are summarized as follows:
• We are the first to propose viewing robust beamforming design problems from a bilevel optimization perspective, unlike conventional methods that rely on SDP, which are computationally expensive and can only achieve solutions far from optimal. This fresh perspective provides new insights into solving such problems and offers a promising alternative with the potential for improved performance.
• To illustrate the application of bilevel optimization, we present a novel bilevel based formulation and develop a cutting plane based algorithm called BLRBF. This approach efficiently handles the challenging task of maximizing worst-case weighted sum-rates.
• We also propose an asynchronous distributed algorithm (BLADRBF) to facilitate parallel processing in practical settings. The asynchronism gives the algorithm high robustness against failures in the communication. Importantly, both algorithms are mathematically proven to converge.

System Model

In this section, we consider a multi-cell MISO downlink system with $M$ cells, each equipped with one BS with $N$ antennas that serves $K$ single-antenna users. The BS of the $m$th cell and the $k$th user in the $m$th cell are denoted by $B_m$ and $U_{km}$, respectively. The transmitted signal from $B_m$ is given by

$$x_m = \sum_{k=1}^{K} v_{km} s_{km}, \tag{2}$$

where $v_{km} \in \mathbb{C}^{N \times 1}$ represents the beamformer that the $m$th BS uses to transmit the signal $s_{km} \sim \mathcal{CN}(0, 1)$ to user $U_{km}$, and we assume $\mathbb{E}[|s_{km}|^2] = 1$. Then, the received signal at $U_{km}$ is given by

$$y_{km} = \underbrace{h_{kmm} v_{km} s_{km}}_{\text{the desired signal}} + \underbrace{\sum_{l \neq k} h_{kmm} v_{lm} s_{lm}}_{\text{intra-cell interference}} + \underbrace{\sum_{n \neq m} \sum_{l} h_{kmn} v_{ln} s_{ln} + n_{km}}_{\text{inter-cell interference plus noise}}, \tag{3}$$

where $h_{kmn} \in \mathbb{C}^{1 \times N}$ represents the channel from $B_n$ to $U_{km}$ and $n_{km} \sim \mathcal{CN}(0, \sigma_{km}^2)$ denotes the additive complex white Gaussian noise at $U_{km}$. Accordingly, the SINR of $U_{km}$ can be written as

$$\mathrm{SINR}_{km}(\{v_{1m}, \dots, v_{Km}\}_{m=1}^{M}) = \frac{|h_{kmm} v_{km}|^2}{\sum_{l \neq k} |h_{kmm} v_{lm}|^2 + \sum_{n \neq m} \sum_{l} |h_{kmn} v_{ln}|^2 + \sigma_{km}^2}. \tag{4}$$

It is assumed that $B_m$ knows only erroneous channel estimates $\{\tilde{h}_{kmn}\}$, i.e.,

$$h_{kmn} = \tilde{h}_{kmn} + \hat{\Delta}_{kmn}, \quad \forall m, n \in \{1, \dots, M\}, \; \forall k \in \{1, \dots, K\}, \tag{5}$$

where $\hat{\Delta}_{kmn}$ is the channel estimation error, which is unknown to the BSs. Furthermore, the BSs are supposed to know the structure of the uncertainty regions, which in this paper are bounded and defined as origin-centered hyper-spherical regions of radius $\epsilon_{kmn}$, i.e., $\|\hat{\Delta}_{kmn}\|_2 \le \epsilon_{kmn}$.
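As a sanity check on (4), the following sketch (our illustration, not the authors' code) evaluates the SINR of every user for random channels and beamformers with NumPy; the dimensions and noise power are placeholder assumptions.

```python
import numpy as np

def sinr(h, v, sigma2):
    """SINR_{km} of eq. (4).

    h: channels, shape (K, M, M, N) -- h[k, m, n] is the row vector from BS n
       to user k in cell m.
    v: beamformers, shape (M, K, N) -- v[m, k] is the beamformer of BS m for
       its user k.
    sigma2: noise power sigma_{km}^2 (a scalar here for simplicity).
    """
    K, M = h.shape[0], h.shape[1]
    # power[k, m, n, l] = |h_{kmn} v_{ln}|^2
    power = np.abs(np.einsum('kmnd,nld->kmnl', h, v)) ** 2
    out = np.empty((K, M))
    for k in range(K):
        for m in range(M):
            desired = power[k, m, m, k]
            intra = power[k, m, m].sum() - desired            # l != k, n = m
            inter = power[k, m].sum() - power[k, m, m].sum()  # n != m, all l
            out[k, m] = desired / (intra + inter + sigma2)
    return out

rng = np.random.default_rng(0)
M, N, K = 2, 2, 2
h = (rng.standard_normal((K, M, M, N))
     + 1j * rng.standard_normal((K, M, M, N))) / np.sqrt(2)
v = rng.standard_normal((M, K, N)) + 1j * rng.standard_normal((M, K, N))
print(sinr(h, v, sigma2=1.0))
```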
For notational simplicity, we denote the beamformer of $B_m$ by $\hat{V}_m = [v_{1m}, v_{2m}, \dots, v_{Km}] \in \mathbb{C}^{N \times K}$. Then, the problem of interest is to find the transmit beamformers $\{\hat{V}_m\}$ such that the worst-case WSR of the network is maximized while the power of each BS is constrained. Mathematically, this problem can be formulated as

$$\max_{\{\hat{V}_m\}} \min_{\{\hat{\Delta}_{kmn}\}} \; \sum_{m=1}^{M} \sum_{k=1}^{K} \alpha_{km} \log(1 + \mathrm{SINR}_{km}) \quad \text{s.t.} \quad \|\hat{V}_m\|_F^2 \le P_m \;\; \forall m, \tag{6}$$

where $\alpha_{km}$ is the positive weighting factor corresponding to the rate of $U_{km}$ and $P_m$ is the power budget of the BS $B_m$. The objective function of problem (6) is known as the WSR utility function. This problem focuses solely on maximizing the throughput of the network, disregarding the minimum rate requirements of individual users. The WSR utility function is well suited to scenarios where users can tolerate delays, since in the extreme case certain users may receive no resources during specific scheduling frames for the benefit of the network throughput (Rong, Vorobyov, and Gershman 2006). The incorporation of weighting factors enables us to assign different priorities to individual users, thereby allowing us to cater to diverse user needs. Additionally, these weights can be dynamically adjusted over time to ensure long-term fairness. The WSR utility function is nonconvex and involves the CSI and beamformers of all BSs. Thus, this optimization problem is quite complicated.

BLRBF and BLADRBF Methods

In this section, we propose a BiLevel based Asynchronous Distributed Robust BeamForming (BLADRBF) method to solve problem (6) in an asynchronous distributed manner. We first transform problem (6) into a bilevel optimization problem and then introduce the centralized method, i.e., BLRBF, similar to the method CPBO proposed in Appendix A of the reference (Jiao et al. 2023). CPBO deals with problems where there are no constraints at either the upper or the lower level. Unlike them, our problem has both convex upper-level constraints and convex lower-level constraints. Finally, we extend this centralized method to the asynchronous distributed implementation, i.e., BLADRBF.

BLRBF: Bilevel Based Robust Beamforming

For notational simplicity, we split $\{\hat{V}_m\}$ into real and imaginary components and then arrange these components into a single vector, i.e., $V \triangleq [\mathrm{Vec}(\mathrm{Re}(\hat{V})), \mathrm{Vec}(\mathrm{Im}(\hat{V}))] \in \mathbb{R}^{2MNK \times 1}$; $\Delta \in \mathbb{R}^{2M^2NK \times 1}$ is defined in the same way. First, we define

$$f(V, \Delta) = \sum_{m=1}^{M} \sum_{k=1}^{K} \alpha_{km} \log(1 + \mathrm{SINR}_{km}). \tag{7}$$

Then, from the perspective of bilevel optimization, problem (6) can be written as

$$\min_{V} \; -f(V, \Delta) \quad \text{s.t.} \quad \|\hat{V}_m\|_F^2 \le P_m \;\forall m, \quad \Delta = \arg\min_{\Delta'} \left\{ f(V, \Delta') \mid \|\hat{\Delta}'_{kmn}\|_2 \le \epsilon_{kmn}, \;\forall m, n, k \right\}, \tag{8}$$

where, specially, the upper and lower objective functions differ only by a negative sign. Now, we begin the process of solving this bilevel optimization problem. By defining $\phi(V) = \arg\min_{\Delta'} \{ f(V, \Delta') \mid \|\hat{\Delta}'_{kmn}\|_2 \le \epsilon_{kmn}, \;\forall m, n, k \}$ and $g(V, \Delta) = \|\Delta - \phi(V)\|_2^2$, we can reformulate problem (8) as a single-level problem

$$\min_{V, \Delta} \; -f(V, \Delta) \quad \text{s.t.} \quad \|\hat{V}_m\|_F^2 \le P_m \;\forall m, \quad g(V, \Delta) = 0. \tag{9}$$

In order to get an estimate of $\phi(V)$, we first transform the inequality lower-level constraints into equality constraints by introducing slack variables $\{s_{kmn}\}$, and then turn to the augmented Lagrangian method. Specifically, the lower-level problem can be given by

$$\arg\min_{\Delta', \{s_{kmn}\}} \; f(V, \Delta') \quad \text{s.t.} \quad \|\hat{\Delta}'_{kmn}\|^2 + s_{kmn}^2 = \epsilon_{kmn}, \;\forall m, n, k. \tag{10}$$
Considering the first-order Taylor approximation of $f(V, \Delta')$ with respect to $V$, i.e., for a given point $\tilde{V}$, $\tilde{f}(V, \Delta') = f(\tilde{V}, \Delta') + \nabla_V f(\tilde{V}, \Delta')^T (V - \tilde{V})$, the augmented Lagrangian function of the lower-level optimization problem (10) can be written as

$$f_{\mathrm{ALM}}(V, \Delta', \{s_{kmn}\}, \{\mu_{kmn}\}) = \tilde{f}(V, \Delta') + \sum_{k,m,n} \mu_{kmn} \left( \|\hat{\Delta}'_{kmn}\|^2 + s_{kmn}^2 - \epsilon_{kmn} \right) + \sum_{k,m,n} \frac{\rho}{2} \left( \|\hat{\Delta}'_{kmn}\|^2 + s_{kmn}^2 - \epsilon_{kmn} \right)^2, \tag{11}$$

where $\{\mu_{kmn}\}$ are Lagrange multipliers and $\rho > 0$ is the penalty parameter. Therefore, based on the augmented Lagrangian method, we have

$$\Delta'_{k+1} = \Delta'_k - \eta_{\Delta'} \nabla_{\Delta'} f_{\mathrm{ALM}_k}, \quad \{s_{kmn}\}_{k+1} = \{s_{kmn}\}_k - \eta_{\{s_{kmn}\}} \nabla_{\{s_{kmn}\}} f_{\mathrm{ALM}_k}, \quad \{\mu_{kmn}\}_{k+1} = \{\mu_{kmn}\}_k + \eta_{\{\mu_{kmn}\}} \nabla_{\{\mu_{kmn}\}} f_{\mathrm{ALM}_k}, \tag{12}$$

where $f_{\mathrm{ALM}_k} = f_{\mathrm{ALM}}(V, \Delta'_k, \{s_{kmn}\}_k, \{\mu_{kmn}\}_k)$, and $\eta_{\Delta'}$, $\eta_{\{s_{kmn}\}}$, $\eta_{\{\mu_{kmn}\}}$ are step sizes. If we repeat procedure (12) $K$ times, $\phi(V)$ can be approximated by

$$\phi(V) = \Delta'_0 - \sum_{k=0}^{K-1} \eta_{\Delta'} \nabla_{\Delta'} f_{\mathrm{ALM}_k}. \tag{13}$$

Let us then consider the relaxed problem of (9):

$$\min_{V, \Delta} \; -f(V, \Delta) \quad \text{s.t.} \quad \|\hat{V}_m\|_F^2 \le P_m \;\forall m, \quad g(V, \Delta) \le \varepsilon, \tag{14}$$

where $\varepsilon > 0$ is a very small constant. The feasible region of this problem is denoted by $S$.

Theorem 1. The function $g(V, \Delta)$ is convex with respect to $(V, \Delta)$ and the feasible region $S$ is convex when $K = 1$.

The proof of Theorem 1 is presented in Appendix A. We use a set of cutting planes to approximate the convex feasible region. These cutting planes form a polytope $D^{[t]}$ at the $t$-th iteration, which can be given by

$$D^{[t]} = \left\{ a_i^T V + b_i^T \Delta + \kappa_i \le 0, \; i = 1, \dots, |D^{[t]}| \right\}, \tag{15}$$

where $a_i \in \mathbb{R}^{2MNK \times 1}$, $b_i \in \mathbb{R}^{2M^2NK \times 1}$ and $\kappa_i \in \mathbb{R}$ are the parameters of the $i$-th cutting plane and $|D^{[t]}|$ denotes the number of cutting planes in $D^{[t]}$. Thus, problem (14) can be expressed as follows:

$$\min_{V, \Delta} \; -f(V, \Delta) \quad \text{s.t.} \quad a_i^T V + b_i^T \Delta + \kappa_i \le 0, \;\forall i = 1, \dots, |D^{[t]}|. \tag{16}$$

The Lagrangian function of problem (16) can be written as

$$L(V, \Delta, \{\lambda_i\}) = -f(V, \Delta) + \sum_{i=1}^{|D^{[t]}|} \lambda_i \left( a_i^\top V + b_i^\top \Delta + \kappa_i \right), \tag{17}$$

where $\{\lambda_i\}$ are the Lagrange multipliers associated with the inequality constraints. Thus, the algorithm proceeds in the $t$-th iteration as follows:

$$V^{[t+1]} = V^{[t]} - \eta_V \nabla_V L\left(V^{[t]}, \Delta^{[t]}, \{\lambda_i^{[t]}\}\right), \tag{18}$$
$$\Delta^{[t+1]} = \Delta^{[t]} - \eta_\Delta \nabla_\Delta L\left(V^{[t]}, \Delta^{[t]}, \{\lambda_i^{[t]}\}\right), \tag{19}$$
$$\lambda_i^{[t+1]} = \left[ \lambda_i^{[t]} + \eta_{\lambda_i} \nabla_{\lambda_i} L\left(V^{[t]}, \Delta^{[t]}, \{\lambda_i^{[t]}\}\right) \right]_+, \;\forall i, \tag{20}$$

where $[\cdot]_+$ denotes the projection of a value onto the nonnegative space, and $\eta_V$, $\eta_\Delta$ and $\eta_{\lambda_i}$ are step sizes. (Jorge and Stephen 2006) indicates that one of the main challenges in solving constrained optimization problems lies in determining which inequality constraints are active and which are not. By introducing active sets, we can simplify the search for the optimal solution of problem (16). Because of complementary slackness, we can decide whether a specific constraint is active through its corresponding dual variable rather than checking whether the strict inequality holds (Jiao, Yang, and Song 2022). To be specific, if $\lambda_i > 0$, the constraint $a_i^T V + b_i^T \Delta + \kappa_i \le 0$ is active, and the constraint is inactive if $\lambda_i = 0$. To reduce the number of constraints, we can remove inactive constraints during the iterations. Next, we introduce how to update the cutting planes. The cutting planes are updated every $k_{pre}$ iterations by (a) removing inactive cutting planes and (b) adding new cutting planes. Firstly, we remove inactive cutting planes as follows:

$$D^{[t+1]} = \begin{cases} \mathrm{Drop}\left(D^{[t]}, cp_i\right), & \text{if } \lambda_i^{[t+1]} = 0, \\ D^{[t]}, & \text{otherwise}, \end{cases} \tag{21}$$

where $cp_i$ represents the $i$-th cutting plane in $D^{[t]}$ and $\mathrm{Drop}(D^{[t]}, cp_i)$ means removing the $i$-th cutting plane $cp_i$ from $D^{[t]}$.
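To illustrate the inner loop (12) in isolation, here is a minimal scalar sketch (ours; the toy objective, $\rho$ and the step sizes are assumptions, not the paper's values) of the augmented Lagrangian updates on a stand-in for the lower-level problem (10).

```python
# Scalar stand-in for the lower-level problem (10):
#   minimize f(d) = (d - 2)^2  subject to  d^2 + s^2 = eps  (eps = 1 here),
# whose constrained optimum is d = 1, s = 0.  The updates mirror (12):
# gradient descent on (Delta', s) and gradient ascent on mu.

def alm_lower_level(eps=1.0, rho=10.0, eta=0.01, inner=200, outer=30):
    d, s, mu = 0.0, 1.0, 0.0
    for _ in range(outer):
        for _ in range(inner):
            c = d * d + s * s - eps                        # constraint residual
            d -= eta * (2.0 * (d - 2.0) + 2.0 * d * (mu + rho * c))
            s -= eta * (2.0 * s * (mu + rho * c))
        mu += rho * (d * d + s * s - eps)                  # multiplier ascent
    return d, s, mu

print(alm_lower_level())    # approx. (1.0, 0.0, 1.0)
```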
As $D^{[t]}$ is updated, the corresponding Lagrange multipliers are supposed to be removed. Secondly, we introduce the addition of new cutting planes. Given a query point $(V^{[t+1]}, \Delta^{[t+1]})$, we check whether this point satisfies the constraints $\|\hat{V}_m\|_F^2 \le P_m, \forall m$. If not, we are supposed to generate a new cutting plane to separate the point $(V^{[t+1]}, \Delta^{[t+1]})$ from $S$. Since the function $\|\hat{V}_m\|_F^2 - P_m$ is convex, we can generate valid cutting planes for all $m$ according to (Boyd and Vandenberghe 2007) as follows:

$$\|\hat{V}_m^{[t+1]}\|_F^2 - P_m + \begin{bmatrix} \frac{\partial \left( \|\hat{V}_m^{[t+1]}\|_F^2 - P_m \right)}{\partial V} \\ 0 \end{bmatrix}^\top \left( \begin{bmatrix} V \\ \Delta \end{bmatrix} - \begin{bmatrix} V^{[t+1]} \\ \Delta^{[t+1]} \end{bmatrix} \right) \le 0, \tag{22}$$

which is denoted by $cp_{\mathrm{new}, \hat{V}_m}^{[t+1]}$. We also need to check whether the point $(V^{[t+1]}, \Delta^{[t+1]})$ is a feasible solution of problem (14). If not, that is, if $g(V^{[t+1]}, \Delta^{[t+1]}) > \varepsilon$, a valid cutting plane is generated as follows:

$$g\left(V^{[t+1]}, \Delta^{[t+1]}\right) + \begin{bmatrix} \frac{\partial g(V^{[t+1]}, \Delta^{[t+1]})}{\partial V} \\ \frac{\partial g(V^{[t+1]}, \Delta^{[t+1]})}{\partial \Delta} \end{bmatrix}^\top \left( \begin{bmatrix} V \\ \Delta \end{bmatrix} - \begin{bmatrix} V^{[t+1]} \\ \Delta^{[t+1]} \end{bmatrix} \right) \le \varepsilon, \tag{23}$$

which is denoted by $cp_{\mathrm{new}, g}^{[t+1]}$. The corresponding Lagrange multipliers should also be added. The details of the proposed algorithm are summarized in Algorithm 1.

Algorithm 1: BLRBF: BiLevel based Robust BeamForming.
Input: $P$, $\{\tilde{h}_{kmn}, \epsilon_{kmn}\}$. Output: $V$, $\Delta$.
1: Initialization: set $t = 0$, $V^{[0]}$ randomly, $\Delta^{[0]} = 0$ and dual variables $\{\lambda_i^{[0]}\} = 0$;
2: repeat
3: Update the variables $V^{[t+1]}$, $\Delta^{[t+1]}$, $\{\lambda_i^{[t+1]}\}$ according to (18), (19) and (20);
4: if $t \bmod k_{pre} == 0$ then
5: Remove inactive cutting planes according to (21) and the corresponding dual variables;
6: Compute an estimate solution $\phi(V^{[t+1]})$ of the lower-level problem according to (13);
7: if $\|\hat{V}_m^{[t+1]}\|_F^2 > P_m$ for some $m$ then
8: Add new cutting planes according to (22) and the corresponding dual variables;
9: end if
10: if $g(V^{[t+1]}, \Delta^{[t+1]}) > \varepsilon$ then
11: Add the new cutting plane according to (23) and the corresponding dual variable;
12: end if
13: end if
14: $t \leftarrow t + 1$;
15: until convergence.

Theorem 2. (Convergence) The optimal objective value of problem (16) monotonically converges to a constant $F$ over the evolution of Algorithm 1.

The proof of Theorem 2 is presented in Appendix B.

BLADRBF

In this subsection, we extend the centralized algorithm to an asynchronous distributed implementation, which can approximate the feasible region $S$ of problem (14) more quickly. Before that, we introduce the concept of a communication graph.

Definition 1. A Communication Graph is a directed graph $G(V, E)$. The node set $V = \{1, \dots, M\}$ is the set of BSs, and the edge set $E$ represents the communication between BSs. There is an edge from node $i$ to node $j$ if BS$_i$ transmits information to BS$_j$. We denote the outgoing and incoming neighbors of node $i$ by $N_O(i)$ and $N_I(i)$, respectively. The communication graph is said to be strongly connected if for every pair of nodes $(i, j)$ there exists a directed path from $i$ to $j$.

In (Bürger, Notarstefano, and Allgöwer 2014), the authors consider that each processor $i$ has its own constraint set and the global feasible set is the intersection of all these sets. Inspired by this design, we first distribute the constraints $\|\hat{V}_m\|_F^2 - P_m \le 0, \forall m$, to all BSs. As for the constraint $g(V, \Delta) \le \varepsilon$, we utilize all BSs to generate different cutting planes in order to accelerate the approximation of the feasible region. Thus, it can approximate the feasible region better and faster.

Figure 1: Local feasible regions $S_i$, global feasible region $S$, and cutting planes $D_i$ that are generated by BS$_i$.

Mathematically, the problem solved by BS$_m$ is given as

$$\min_{V, \Delta} \; -f(V, \Delta) \quad \text{s.t.} \quad \|\hat{V}_m\|_F^2 \le P_m, \quad g(V, \Delta) \le \varepsilon, \tag{24}$$
where the feasible region is denoted by $S_m$. Taking Figure 1 as an example, the cutting planes $D_i$, $i = 1, \dots, 3$, are generated by three different BSs every $k_{pre}$ iterations in order to simultaneously approximate their own feasible regions $S_i$, and they are passed between BSs. The three cutting plane constraints combined can approximate the global feasible region $S$ more precisely. Compared with BLRBF, which generates only one cutting plane constraint every $k_{pre}$ iterations, this distributed algorithm can generate three cutting plane constraints. The details of BLADRBF are presented in Algorithm 2. The subscript $l$ represents variables and sets at BS$_l$.

Algorithm 2: BLADRBF: BiLevel based Asynchronous Distributed Robust BeamForming.
Input: $P$, $\{\tilde{h}_{kmn}, \epsilon_{kmn}\}$. Output: $V$, $\Delta$.
1: for each BS$_l$ do
2: Initialize the iteration $t = 0$, the variables $V_l^{[0]}$, $\Delta_l^{[0]}$ with different values and the dual variables $\{\lambda_{l,i}^{[0]}\} = 0$;
3: end for
4: for each BS$_l$ do
5: repeat
6: Each BS updates the variables $V_l^{[t+1]}$, $\Delta_l^{[t+1]}$ as in Algorithm 1;
7: if $t \bmod k_{pre} == 0$ then
8: It transmits its current $D_l^{[t+1]}$ to all its out-neighbors $N_O(l)$ and receives the active constraints of its in-neighbors, $Y_l^{[t+1]} = \bigcup_{j \in N_I(l)} D_j^{[t+1]}$;
9: $D_l^{[t+1]} \leftarrow D_l^{[t+1]} \cup Y_l^{[t+1]}$;
10: end if
11: $t \leftarrow t + 1$;
12: until convergence.
13: end for

Note that Algorithm 2 operates without the need for time synchronization. Each BS has the flexibility to conduct its own computations at varying speeds and can update its constraints as soon as it receives the relevant cutting planes. The asynchronous distributed method is more robust against communication failures.

Proposition 1. The linear constraints at a BS form a polyhedral approximation of the feasible region $S$.

The detailed proof of Proposition 1 is given in Appendix C.

Theorem 3. (Consistency) Let $F_i$ be the convergence value computed by BS$_i$. We have that $F_i = F_j = F, \forall i, j$.

The proof of Theorem 3 is presented in Appendix D.

Experiments

In this section, numerical simulations are carried out to illustrate the performance of the BLRBF and BLADRBF algorithms. We consider multi-cell multi-user MISO downlink systems. The specific parameters, including the number of cells $M$, the number of users $K$, the number of antennas $N$, and the transmit power budget $P$, are provided alongside the corresponding figures. We adopt a typical small-scale fading channel model, i.e., Rayleigh fading, which is widely used in previous literature (Choi et al. 2012; Zhang et al. 2022). Rayleigh fading: each channel coefficient $h_{kmn}$ is generated according to a complex standard normal distribution, i.e., $\mathrm{Re}(h_{kmn}) \sim \mathcal{CN}(0, I)/\sqrt{2}$, $\mathrm{Im}(h_{kmn}) \sim \mathcal{CN}(0, I)/\sqrt{2}$, $\forall m, n, k$.

Figure 2: Comparing the worst-case weighted sum-rates (WSR, bits/sec/Hz) versus SNR (dB) yielded by the BLRBF, SDP-based beamforming and WMMSE algorithms for M = N = K = 2.

We contrast the proposed algorithms with the SDP-based beamforming proposed in (Tajer, Prasad, and Wang 2011). Under the assumption of perfect CSI, the results are compared with the WMMSE method proposed in (Shi et al. 2011). All simulation experiments are executed on a machine equipped with a 16-core AMD Ryzen 7 5800H processor.
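To reproduce this setup, channels and bounded CSI errors could be sampled as in the following hedged sketch (our code; projecting onto the $\epsilon$-ball is one simple way to enforce $\|\hat{\Delta}_{kmn}\|_2 \le \epsilon_{kmn}$ and is our assumption, not the authors' procedure).

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_channels(K, M, N):
    # Real and imaginary parts drawn from a standard normal and scaled by
    # 1/sqrt(2), matching the Rayleigh fading model above.
    return (rng.standard_normal((K, M, M, N))
            + 1j * rng.standard_normal((K, M, M, N))) / np.sqrt(2)

def bounded_errors(shape, eps):
    # Errors Delta with ||Delta||_2 <= eps: sample a complex Gaussian and
    # shrink any vector that falls outside the eps-ball back onto it.
    d = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    norms = np.linalg.norm(d, axis=-1, keepdims=True)
    return d * np.minimum(1.0, eps / norms)

K, M, N, eps = 2, 2, 2, 0.1
h = rayleigh_channels(K, M, N)                    # true channels h_{kmn}
h_est = h - bounded_errors((K, M, M, N), eps)     # noisy estimates, cf. eq. (5)
```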
We perform an analysis of the robust WSR maximization across three systems with parameters $M = N = K = 2$; $M = K = 3$, $N = 4$; and $M = 4$, $K = 10$, $N = 64$. For each of these system configurations, we consider 10 erroneous channel realizations to illustrate the performance of the proposed algorithms. Figure 2 displays the optimized worst-case sum-rate achieved by the BLRBF, SDP-based beamforming, and WMMSE algorithms. If the CSI is assumed to be perfectly known, i.e., $\epsilon_{kmn} = 0$ for all $k, m, n$, we find that both the BLRBF and SDP-based beamforming algorithms can achieve performance comparable to the WMMSE algorithm, while the BLRBF approach slightly outperforms the SDP-based beamforming method. Next, we explore the BLRBF and SDP-based beamforming algorithms for different uncertainty regions with radii of 0.05 and 0.1. The key observation is that larger uncertainty regions lead to diminished robust weighted sum-rates. This is primarily because larger regions of uncertainty indicate more significant channel estimation errors, which consequently lead to a reduced worst-case weighted sum-rate. Moreover, as the SNR increases, the BLRBF method outperforms the SDP-based beamforming method in terms of achieving significant improvements in the robust weighted sum-rate.

Figure 3: Comparing the worst-case weighted sum-rates (WSR, bits/sec/Hz) versus SNR (dB) yielded by the BLRBF, SDP-based beamforming and WMMSE algorithms for M = K = 3, N = 4.

Continuing, we scale up the system, focusing on a configuration with $M = K = 3$ and $N = 4$. Figure 3 illustrates the relationship between the robust weighted sum-rate and the SNR, consistent with Figure 2. This is because a higher SNR means more base station power $P$, assuming constant noise. In communication systems, a greater SNR lowers noise interference, enhancing data reliability and boosting the weighted sum-rate. Additionally, we observe that the optimized robust weighted sum-rate saturates at high SNR levels, which aligns with the high-SNR analysis and Theorem 7 in (Tajer, Prasad, and Wang 2011). Despite both the BLRBF and SDP-based beamforming algorithms saturating at high SNR for $\epsilon \neq 0$, it is noteworthy that the BLRBF method achieves saturation at a higher SNR level, and its saturation value surpasses that of the SDP-based beamforming method. It is also revealed that at the same SNR, the BLRBF algorithm outperforms the SDP-based beamforming algorithm, yielding a superior optimized robust weighted sum-rate. Lastly, we delve into the analysis of a network with a more extensive setup, that is, $M = 4$, $K = 10$, $N = 64$. This setup emulates a more realistic large-scale scenario. When the matrix size is large, SDP solved through the interior point method becomes computationally expensive and even intolerable. Specifically, the computation time of the SDP-based beamforming method exceeds 4 hours under the scenario settings of $M = 4$, $K = 10$, $N = 64$. In contrast, our proposed BLRBF approach can efficiently achieve the optimized results within an acceptable time frame. In Figure 4, the worst-case weighted sum-rates versus SNRs for this setup are depicted.
This analysis demonstrates the capability of our proposed BLRBF algorithm to handle large-scale scenarios effectively, even in situations where the SDP-based beamforming method faces computational challenges. Finally, we demonstrate the disparity in terms of convergence rates between the BLRBF algorithm and the BLADRBF algorithm in Figure 5. It is evident that while both the BLRBF and BLADRBF algorithms attain similar final convergence values, the BLADRBF approach exhibits more rapid convergence.

Figure 4: The worst-case weighted sum-rates (WSR, bits/sec/Hz) versus SNR (dB) yielded by the BLRBF algorithms for M = 4, K = 10, N = 64.

Figure 5: Comparing the convergence rate (WSR versus time in seconds) of the BLRBF and BLADRBF algorithms for M = 3, K = 3, N = 4.

Conclusion

We propose to address robust beamforming problems by adopting a bilevel optimization perspective, thereby providing a fresh insight into this field. Focusing on the problem of maximizing the worst-case weighted sum-rate for multi-cell multi-user MISO wireless networks where BSs can acquire only noisy channel estimates, we develop an efficient algorithm, i.e., BLRBF, based on the cutting plane method. A distributed algorithm called BLADRBF is also proposed to facilitate parallel processing in practical settings. We prove that both algorithms are guaranteed to converge. Our algorithm can be readily extended to MIMO systems. Finally, through comprehensive numerical experiments, we demonstrate that the BLRBF method can significantly outperform the SDP-based beamforming method proposed in (Tajer, Prasad, and Wang 2011), particularly in high SNR regimes. We also confirm that the distributed algorithm BLADRBF exhibits a faster convergence rate compared to the centralized algorithm BLRBF.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 12371519 and 61771013; in part by the Fundamental Research Funds for the Central Universities of China; and in part by the Fundamental Research Funds of Shanghai Jiading District.

References

Björnson, E.; Bengtsson, M.; and Ottersten, B. 2014. Optimal Multiuser Transmit Beamforming: A Difficult Problem with a Simple Solution Structure [Lecture Notes]. IEEE Signal Processing Magazine, 31(4): 142–148.
Bogale, T. E.; and Vandendorpe, L. 2012. Weighted Sum Rate Optimization for Downlink Multiuser MIMO Coordinated Base Station Systems: Centralized and Distributed Algorithms. IEEE Transactions on Signal Processing, 60(4): 1876–1889.
Boyd, S.; and Vandenberghe, L. 2007. Localization and cutting-plane methods. From Stanford EE 364b lecture notes, 386.
Bürger, M.; Notarstefano, G.; and Allgöwer, F. 2014. A Polyhedral Approximation Framework for Convex and Robust Distributed Optimization. IEEE Transactions on Automatic Control, 59(2): 384–395.
Choi, H.-J.; Park, S.-H.; Lee, S.-R.; and Lee, I. 2012. Distributed Beamforming Techniques for Weighted Sum-Rate Maximization in MISO Interfering Broadcast Channels. IEEE Transactions on Wireless Communications, 11(4): 1314–1320.
Franceschi, L.; Frasconi, P.; Salzo, S.; Grazzi, R.; and Pontil, M. 2018. Bilevel programming for hyperparameter optimization and meta-learning. In International Conference on Machine Learning, 1568–1577. PMLR.
Ji, K.; Lee, J. D.; Liang, Y.; and Poor, H. V. 2020.
Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters. In Advances in Neural Information Processing Systems, 11490–11500.
Ji, K.; Yang, J.; and Liang, Y. 2021. Bilevel optimization: Convergence analysis and enhanced design. In International Conference on Machine Learning, 4882–4892. PMLR.
Jiao, Y.; Yang, K.; and Song, D. 2022. Distributed Distributionally Robust Optimization with Non-Convex Objectives. In Advances in Neural Information Processing Systems, 7987–7999.
Jiao, Y.; Yang, K.; Wu, T.; Song, D.; and Jian, C. 2023. Asynchronous distributed bilevel optimization. In International Conference on Learning Representations.
Jorge, N.; and Stephen, J. W. 2006. Numerical Optimization. Springer.
Joshi, S. K.; Weeraddana, P. C.; Codreanu, M.; and Latva-aho, M. 2012. Weighted Sum-Rate Maximization for MISO Downlink Cellular Networks via Branch and Bound. IEEE Transactions on Signal Processing, 60(4): 2090–2095.
Joudeh, H.; and Clerckx, B. 2016. Sum-Rate Maximization for Linearly Precoded Downlink Multiuser MISO Systems With Partial CSIT: A Rate-Splitting Approach. IEEE Transactions on Communications, 64(11): 4847–4861.
Liu, H.; Simonyan, K.; and Yang, Y. 2018. DARTS: Differentiable architecture search. In International Conference on Learning Representations.
Liu, L.; Zhang, R.; and Chua, K.-C. 2012. Achieving Global Optimality for Weighted Sum-Rate Maximization in the K-User Gaussian Interference Channel with Multiple Antennas. IEEE Transactions on Wireless Communications, 11(5): 1933–1945.
Liu, R.; Liu, X.; Yuan, X.; Zeng, S.; and Zhang, J. 2021. A value-function-based interior-point method for non-convex bi-level optimization. In International Conference on Machine Learning, 6882–6892. PMLR.
Luo, Z.-Q.; and Zhang, S. 2008. Dynamic Spectrum Management: Complexity and Duality. IEEE Journal of Selected Topics in Signal Processing, 2(1): 57–73.
Rong, Y.; Vorobyov, S.; and Gershman, A. 2006. Robust linear receivers for multiaccess space-time block-coded MIMO systems: a probabilistically constrained approach. IEEE Journal on Selected Areas in Communications, 24(8): 1560–1570.
Shaverdian, A.; and Nakhai, M. R. 2014. Robust Distributed Beamforming With Interference Coordination in Downlink Cellular Networks. IEEE Transactions on Communications, 62(7): 2411–2421.
Shen, C.; Chang, T.-H.; Wang, K.-Y.; Qiu, Z.; and Chi, C.-Y. 2012. Distributed Robust Multicell Coordinated Beamforming With Imperfect CSI: An ADMM Approach. IEEE Transactions on Signal Processing, 60(6): 2988–3003.
Shenouda, M. B.; and Davidson, T. N. 2008. On the Design of Linear Transceivers for Multiuser Systems with Channel Uncertainty. IEEE Journal on Selected Areas in Communications, 26(6): 1015–1024.
Shi, Q.; Razaviyayn, M.; Luo, Z.-Q.; and He, C. 2011. An Iteratively Weighted MMSE Approach to Distributed Sum-Utility Maximization for a MIMO Interfering Broadcast Channel. IEEE Transactions on Signal Processing, 59(9): 4331–4340.
Sun, H.; Pu, W.; Fu, X.; Chang, T.-H.; and Hong, M. 2022. Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective. IEEE Transactions on Signal Processing, 70: 1900–1917.
Tajer, A.; Prasad, N.; and Wang, X. 2011. Robust Linear Precoder Design for Multi-Cell Downlink Transmission. IEEE Transactions on Signal Processing, 59(1): 235–251.
Tran, L.-N.; Hanif, M. F.; Tolli, A.; and Juntti, M. 2012. Fast Converging Algorithm for Weighted Sum Rate Maximization in Multicell MISO Downlink. IEEE Signal Processing Letters, 19(12): 872–875.
Von Stackelberg, H. 1934. Marktform und Gleichgewicht.
Vucic, N.; and Boche, H. 2009. Robust QoS-Constrained Optimization of Downlink Multiuser MISO Systems. IEEE Transactions on Signal Processing, 57(2): 714–725.
Weber, T.; Sklavos, A.; and Meurer, M. 2006. Imperfect channel-state information in MIMO transmission. IEEE Transactions on Communications, 54(3): 543–552.
Weeraddana, P. C.; Codreanu, M.; Latva-aho, M.; and Ephremides, A. 2013. Multicell MISO Downlink Weighted Sum-Rate Maximization: A Distributed Approach. IEEE Transactions on Signal Processing, 61(3): 556–570.
Xue, C.; Wang, X.; Yan, J.; Hu, Y.; Yang, X.; and Sun, K. 2021. Rethinking bi-level optimization in neural architecture search: A Gibbs sampling perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 10551–10559.
Yang, K.; Huang, J.; Wu, Y.; Wang, X.; and Chiang, M. 2014. Distributed robust optimization (DRO), part I: Framework and example. Optimization and Engineering, 15(1): 35–67.
Yang, K.; Wu, Y.; Huang, J.; Wang, X.; and Verdu, S. 2008. Distributed Robust Optimization for Communication Networks. In IEEE INFOCOM 2008 - The 27th Conference on Computer Communications, 1157–1165.
Zhang, J.; Yuan, Y.; Zheng, G.; Krikidis, I.; and Wong, K.-K. 2022. Embedding Model-Based Fast Meta Learning for Downlink Beamforming Adaptation. IEEE Transactions on Wireless Communications, 21(1): 149–162.
Zhang, X.; Palomar, D. P.; and Ottersten, B. 2008. Statistically Robust Design of Linear MIMO Transceivers. IEEE Transactions on Signal Processing, 56(8): 3678–3689.
Zhou, G.; Pan, C.; Ren, H.; Wang, K.; Renzo, M. D.; and Nallanathan, A. 2020. Robust Beamforming Design for Intelligent Reflecting Surface Aided MISO Communication Systems. IEEE Wireless Communications Letters, 9(10): 1658–1662.
2024
886
18,724
Hardness of Random Reordered Encodings of Parity for Resolution and CDCL
Leroy Chew1, Alexis de Colnet1, Friedrich Slivovsky2, Stefan Szeider1
1 Algorithms and Complexity Group, TU Wien, Vienna, Austria
2 Department of Computer Science, University of Liverpool, Liverpool, UK
{lchew,decolnet,sz}@ac.tuwien.ac.at, [email protected]

Abstract

Parity reasoning is challenging for Conflict-Driven Clause Learning (CDCL) SAT solvers. This has been observed even for simple formulas encoding two contradictory parity constraints with different variable orders (Chew and Heule 2020). We provide an analytical explanation for their hardness by showing that they require exponential resolution refutations with high probability when the variable order is chosen at random. We obtain this result by proving that these formulas, which are known to be Tseitin formulas, have Tseitin graphs of linear treewidth with high probability. Since such Tseitin formulas require exponential resolution proofs, our result follows. We generalize this argument to a new class of formulas that capture a basic form of parity reasoning involving a sum of two random parity constraints with random orders. Even when the variable order for the sum is chosen favorably, these formulas remain hard for resolution. In contrast, we prove that they have short DRAT refutations. We show experimentally that the running time of CDCL SAT solvers on both classes of formulas grows exponentially with their treewidth.

Introduction

SAT solvers, including Conflict-Driven Clause-Learning (CDCL) solvers, can solve practical problems with millions of variables (Marques-Silva, Lynce, and Malik 2009; Fichte et al. 2023), but, on the other hand, can struggle with basic mathematical principles. The Handbook of Satisfiability (Biere et al. 2021, Section 9.6.1) lists one such example: the problem of XOR (exclusive-or) constraints, which is equivalent to the parity problem of summation modulo 2. XOR-constraints serve practical purposes, particularly around modern cryptographic and cryptanalytic problems. Provably hard XOR problems are usually constructed over complex structures such as expander graphs (Urquhart 1987; Ben-Sasson and Wigderson 2001), but much simpler problems involving only two constraints were found experimentally hard for CDCL (Chew and Heule 2020) and, up until this paper, were not matched with a corresponding lower bound in resolution. Resolution is particularly important here because the relationship with CDCL solving is two-way: CDCL runs on unsatisfiable instances can be output as resolution proofs, but also every resolution refutation can be followed completely by a CDCL algorithm (with a few non-deterministic choices) to return UNSAT (Pipatsrisawat and Darwiche 2011). Lower bounds on the length of resolution proofs of unsatisfiability have been shown for pure XOR problems structured by graphs and represented in CNF formulas called Tseitin formulas (Tseitin 1983; Urquhart 1987). Finding a complete characterization of hard Tseitin formulas for resolution is still an open problem. However, exponential lower bounds for resolution are known under a suitable condition: that their underlying graphs have high (linear) treewidth, a graph invariant that measures how close a graph is to being a tree.
The relationship between treewidth and proof length has already been extensively studied for Tseitin formulas (Ben-Sasson and Wigderson 2001; Galesi, Talebanfard, and Torán 2020; Itsykson, Riazanov, and Smirnov 2022). Here, we look at some XOR-constraint problems that are strikingly simple to define and whose hardness for resolution was observed empirically but had yet to be understood theoretically (Chew and Heule 2020). We prove that they are, in fact, families of Tseitin formulas and that linear treewidth emerges for almost all of them, thus showing asymptotic exponential lower bounds for resolution. Furthermore, our experiments suggest that this is not just theoretical and asymptotic for proof systems; treewidth indeed correlates with the solving time of CDCL solvers on these families. In the rest of this section, we present the problems and our results.

Problem 1: Reordered Parity

The standard linear CNF encoding of an XOR-constraint over n propositional variables splits the constraint into a sequence of XORs of size 3 according to an ordering of its variables (Biere et al. 2021, Section 2.2.5). The encoding uses one auxiliary variable for every k < n to store the parity of the first k variables. The simplest form of the XOR-constraint problem starts with two opposite XOR-constraints and their standard linear CNF encodings, where the n variables appear in a different order given by the permutation σ. The two CNFs say that the sum of the n variables is both odd and even, so their conjunction—denoted by rPar(n, σ)—is a contradiction that a SAT solver should be able to recognize. Chew and Heule (2020) showed that these problems can be proven false by O(n log n)-size DRAT proofs even without new variables (Buss and Thapen 2019). We show here that resolution proofs on their own are often unable to handle even these restricted examples. For some σ, such as the identity mapping, resolution proofs are short; in fact, the identity mapping gives the Dubois family in the SAT library. However, this easiness is not seen with other permutations. Chew and Heule (2020) conducted experiments that showed that CDCL solvers struggle and time out around n = 50 for uniformly selected permutations, although a theoretical lower bound was never proved. We show that as n increases, a random permutation σ yields, with high probability, a formula rPar(n, σ) whose resolution refutations require exponentially many clauses.

Theorem 1. There is a constant α > 0 such that, with probability tending to 1 as n increases, the length of a smallest resolution refutation of the unsatisfiable formula rPar(n, σ), where σ is chosen uniformly at random, is at least $2^{\alpha n}$.

A key observation here is that the rPar formulas are Tseitin formulas. The fact that the rPar formulas come from standard CNF encodings of very simple XOR-constraint problems makes them natural examples of Tseitin formulas that are more likely to occur in practice than those that appear in proofs of hardness, which are often constructed from arbitrary expander graphs (Urquhart 1987). Theorem 1 confirms the assumption of Chew and Heule (2020) that a powerful proof system such as DRAT− was necessary for the short proofs of rPar, as we now have exponential resolution lower bounds. Along with the evidence shown by our experiments, we now conclude that high treewidth is the reason for the hardness in the experiments of Chew and Heule (2020).
We can also take this as clarification that order matters in the encoding of parity constraints in general.

Problem 2: Random Parity Addition

There are several effective strategies for dealing with XOR-constraints in practice. One method that has succeeded is to employ Gaussian elimination (Han and Jiang 2012; Soos 2012) techniques to simplify the problem. Two contradictory parity constraints fall short of representing what happens in an average step of Gaussian elimination. Instead, Gaussian elimination involves many steps using the bitwise addition of two XOR-constraints to produce a new constraint. In order to study such a step as an instance for resolution, we have to write it as a contradiction. So here we modify rPar to use three XOR-constraints, with the third containing the variables in the symmetric difference of the first two constraints, and then flip some literals to create a contradiction. The input XOR-constraints are encoded in CNF formulas a and b using the standard linear encoding. We then define a CNF encoding rAddPar(a, b) of the contradiction similar to rPar(n, σ), and we show their hardness for resolution.

Theorem 2. With high probability, for any two random parity constraints over n variables encoded randomly and independently in CNF formulas a and b using the standard linear encoding, the length of a shortest resolution refutation of rAddPar(a, b) is exponential in n.

The rAddPar formulas turn out to also be Tseitin formulas, so this again provides a new intuitive family that demonstrates the hardness of Tseitin formulas—and yet again shows that order matters when encoding parity constraints. Adding Gaussian elimination to SAT solving or preprocessing presents several technical challenges. An example is verification—unsatisfiable instances in CDCL SAT solvers can be readily verified via resolution proofs and thus verified in the more powerful checking format standard DRAT (Järvisalo, Heule, and Biere 2012). It was therefore pertinent to show that Gaussian elimination techniques could also be verified efficiently in DRAT (Philipp and Rebola-Pardo 2016). The specific family rPar(n, σ) was shown to have DRAT− refutations with O(n log n) many lines using a tool from Chew and Heule (2020). Recently, a BDD-based SAT solver augmented with pseudo-Boolean constraints (Bryant, Biere, and Heule 2022) was shown to have improved the result experimentally. We can generalize Chew and Heule's upper-bound results to rAddPar(a, b).

Theorem 3. For any two random parity constraints over n variables encoded randomly and independently in CNF formulas a and b using the standard linear encoding, there are DRAT− refutations of rAddPar(a, b) with O(n log n) many lines.

O(n log n) is already a good upper bound, and it can potentially be used in verification. One of the advantages of DRAT− is that no extension variables are added.

Preliminaries

Boolean variables take values in {0, 1}. A literal is either a variable x or its negation $\bar{x}$. Clauses are disjunctions of literals, and CNF formulas are conjunctions of clauses. The negation of a clause C can be labelled $\bar{C}$ and is a CNF of clauses each containing one literal. The symbols ∨, ∧ denote disjunction and conjunction, and we use ⊕ for exclusive disjunction, that is, $x \oplus y = x + y \bmod 2$. The canonical CNF representation of a parity constraint $x_1 \oplus \dots \oplus x_k = 0$ is the CNF formula
$\mathrm{xor}(x_1, \dots, x_k)$, composed of all $2^{k-1}$ clauses of size k that contain an odd number of negative literals (equivalently, an even number of positive literals when k is odd and an odd number when k is even). For instance,

$$\mathrm{xor}(p, q, r) := (\bar{p} \lor \bar{q} \lor \bar{r}) \wedge (\bar{p} \lor q \lor r) \wedge (p \lor \bar{q} \lor r) \wedge (p \lor q \lor \bar{r}).$$

The canonical representation of $x_1 \oplus \dots \oplus x_k = 1$ is just $\mathrm{xor}(x_1, \dots, x_k)$ where we flip all literals of an arbitrary variable, for instance $\mathrm{xor}(\bar{x}_1, x_2, \dots, x_k)$.

Proofs and Refutations

Resolution. Resolution is a refutational proof system that works by adding clauses based on a single binary rule—the resolution rule (Robinson 1963). The resolution rule's new clause is a logical implication; adding it to the formula preserves not only satisfiability but also the models. A resolution proof that derives the empty clause shows that the original formula is unsatisfiable.

$$\frac{C_1 \lor x \qquad C_2 \lor \neg x}{C_1 \lor C_2} \quad \text{(Resolution)}$$

Extended Resolution. Extended resolution adds an extension rule: it creates extension clauses that introduce a new variable together with clauses that force the new variable to follow a definition. If we treat the extension variables as new, the extension rule does not change the satisfying assignments when considering only the original variables. Therefore, when we reach the empty clause, we know that the original formula must have been unsatisfiable.

Example 1. We can add the following extension clauses that state that the extension variable n is the exclusive or of x with y: $(\bar{x} \lor y \lor n)$, $(x \lor \bar{y} \lor n)$, $(x \lor y \lor \bar{n})$, $(\bar{x} \lor \bar{y} \lor \bar{n})$.

DRAT. Unit propagation is an incomplete, model-preserving, polynomial-time process.

Definition 1. A unit clause is a clause of one literal. Unit propagation takes any unit clause (a) and resolves it with every clause which has an $\bar{a}$ in it (possibly creating another unit clause). After all resolvents are found, the clause (a) is removed, and we repeat the process for another unit clause until no unit clauses remain. We also terminate if we reach the empty clause, and we write $F \vdash_1 \bot$ to denote that unit propagation of F reaches the empty clause.

While unit propagation itself is incomplete, it terminates in polynomial time. It is therefore a convenient tool for checking implication; we can use it in the concept of an asymmetric tautology, which is a clause that must be true assuming a CNF, because its negation would cause a conflict via unit propagation.

Definition 2 (Järvisalo, Heule, and Biere 2012). Let F be a CNF formula. A clause C is an asymmetric tautology (AT) w.r.t. F if $F \wedge \bar{C} \vdash_1 \bot$.

A clause being an asymmetric tautology in F is a generalization of being a resolvent of some pair of clauses in F. We also want to be able to generalize the creation of extension clauses. To do this, we first generalize extension clauses to blocked clauses. Blocked clauses are clauses that have a literal that cannot be resolved on without producing a tautology, and so they are non-threatening to the satisfiability of the formula. We generalize blocked clauses to RAT clauses, where we widen the condition from tautology to asymmetric tautology.

Definition 3 (Järvisalo, Heule, and Biere 2012). Let F be a CNF formula. A clause C is a resolution asymmetric tautology (RAT) w.r.t. F if there exists a literal $l \in C$ such that for every clause $\bar{l} \lor D \in F$ it holds that $F \wedge \bar{D} \wedge \bar{C} \vdash_1 \bot$.

DRAT is a generalized and application-friendly version of extended resolution. Each rule modifies a formula by either adding (removing) a clause while preserving satisfiability (unsatisfiability), respectively.
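As an executable reading of Definitions 1 and 2 (our sketch, not the paper's tooling), the following Python functions perform unit propagation on DIMACS-style clause sets and use it to test the asymmetric tautology condition $F \wedge \bar{C} \vdash_1 \bot$.

```python
def unit_propagate(clauses):
    """Definition 1: repeatedly pick a unit clause (a) and resolve it away.
    Clauses are sets of nonzero ints (DIMACS-style literals).
    Returns True iff propagation derives the empty clause."""
    clauses = [set(c) for c in clauses]
    while True:
        if any(len(c) == 0 for c in clauses):
            return True                       # F |-1 bottom
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return False
        (a,) = unit
        new = []
        for c in clauses:
            if a in c:
                continue                      # satisfied clauses disappear
            new.append(c - {-a})              # resolve away the literal -a
        clauses = new

def is_asymmetric_tautology(clause, cnf):
    """Definition 2: C is AT w.r.t. F iff F together with the negation of C
    reaches the empty clause by unit propagation."""
    negation = [{-lit} for lit in clause]
    return unit_propagate(list(cnf) + negation)

# (x) and (-x or y) make the clause (y) an asymmetric tautology:
print(is_asymmetric_tautology({2}, [{1}, {-1, 2}]))   # True
```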
Unlike resolution, clauses can be added (removed) with or without preserving the exact set of satisfying models of a formula. The first set of DRAT rules shows how we can add or remove clauses while preserving models by using asymmetric tautologies, when C is AT w.r.t. F:

$$\frac{F}{F \wedge C}\;\text{(ATA)} \qquad \frac{F \wedge C}{F}\;\text{(ATE)}$$

The second set of rules uses resolution asymmetric tautologies (C is RAT w.r.t. F) and does not preserve models:

$$\frac{F}{F \wedge C}\;\text{(RATA)} \qquad \frac{F \wedge C}{F}\;\text{(RATE)}$$

In all clause additions, we can add a new variable as long as it works with the side conditions of the rules. However, an excess of new variables can cause a proof checker to slow down, so there is a version of DRAT that forbids new variables, known as DRAT−.

Tseitin Formulas

A Tseitin formula is a CNF formula that represents a system of parity constraints where every variable appears in exactly two constraints. Such a formula is determined by a graph G: each edge e corresponds to a unique Boolean variable $x_e$ and each vertex v defines a constraint $\bigoplus_{e \in E(v)} x_e = c(v)$, where $E(v)$ is the set of edges incident to v in G and $c : V(G) \to \{0, 1\}$ is the charge function. The Tseitin formula $T(G, c)$ is the conjunction of the xor representations of the constraints for every $v \in V(G)$. We call G the Tseitin graph of the formula. It is often assumed that the maximum degree of all vertices in G is bounded by a constant, so that the size of $T(G, c)$ is linear in $|\mathrm{var}(T(G, c))| = |E(G)|$.

Example 2. Let G be the following graph with $V(G) = \{1, 2, 3, 4\}$. Let $c : V(G) \to \{0, 1\}$ be such that gray vertices have charge 0 and white vertices have charge 1. [Drawing of G on the vertices 1, 2, 3, 4 with the edges 12, 13, 14, 23, 34, annotated with the constraints:]

$$x_{12} \oplus x_{13} \oplus x_{14} = 0, \quad x_{12} \oplus x_{23} = 1, \quad x_{13} \oplus x_{23} \oplus x_{34} = 1, \quad x_{14} \oplus x_{34} = 0.$$

The Tseitin formula for this graph and this charge c is $T(G, c) = \mathrm{xor}(x_{12}, x_{13}, x_{14}) \wedge \mathrm{xor}(\bar{x}_{12}, x_{23}) \wedge \mathrm{xor}(\bar{x}_{13}, x_{23}, x_{34}) \wedge \mathrm{xor}(x_{14}, x_{34})$.

Tseitin formulas were introduced by Tseitin (1968, 1983) in the 1960s as hard instances for proof systems, despite an easy criterion for deciding their satisfiability (Urquhart 1987, Lemma 4.1). Urquhart (1987) later showed that when G belongs to the family of bounded-degree expander graphs (whose definition we omit), all resolution refutations of $T(G, c)$ require exponentially many clauses. This was generalized by Ben-Sasson and Wigderson, who used their width-length relations on refutation proofs to derive exponential lower bounds parameterized by the edge expansion of G (Ben-Sasson and Wigderson 2001). Beyond expansion, the key parameter for characterizing the hardness of Tseitin formulas for resolution could be the treewidth of the graph. Treewidth is a very well-known graph parameter whose definition we omit (see (Bodlaender 1998)). Intuitively, the treewidth of G, denoted by tw(G), is an integer between 0 and $|V(G)|$ that measures how close G is to a tree (trees having treewidth 1). On the one hand, it was shown (Alekhnovich and Razborov 2011) that unsatisfiable Tseitin formulas have resolution refutations of length at most $2^{O(\mathrm{tw}(G))} |E(G)|^{O(1)}$; thus a logarithmic treewidth guarantees short refutations. On the other hand, combining the width-length relation with Corollaries 8 and 16 of Galesi et al. (2020) yields the following:

Theorem 4. Let G be an n-vertex graph whose maximum degree is bounded by a constant. If tw(G) = Ω(n), then the length of a shortest resolution refutation of an unsatisfiable Tseitin formula T(G, c) is at least $2^{\Omega(n)}$.

Note that there is still a gray area for tw(G) less than linear but more than logarithmic in n.
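A concrete rendering of the $T(G, c)$ construction may help at this point. The following is a minimal sketch (ours, not the paper's generator) that emits the Tseitin formula of Example 2 as DIMACS-style clause lists; the variable numbering is an assumption.

```python
from itertools import product

def xor_clauses(lits):
    """Clauses of xor(l1, ..., lk), i.e. l1 + ... + lk = 0 (mod 2): the
    2^{k-1} sign patterns with an odd number of negations."""
    return [[s * l for s, l in zip(signs, lits)]
            for signs in product([1, -1], repeat=len(lits))
            if signs.count(-1) % 2 == 1]

def tseitin(edges, charge):
    """T(G, c): one variable per edge, one parity block per vertex.
    edges: list of pairs (u, v); charge: dict vertex -> 0/1."""
    var = {e: i + 1 for i, e in enumerate(edges)}
    cnf = []
    for v, c in charge.items():
        lits = [var[e] for e in edges if v in e]
        if c == 1:
            lits[0] = -lits[0]   # flipping one literal encodes sum = 1
        cnf += xor_clauses(lits)
    return cnf

# Example 2: the 4-vertex graph with charges c(2) = c(3) = 1.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]
cnf = tseitin(edges, {1: 0, 2: 1, 3: 1, 4: 0})
print(len(cnf))    # 12 clauses: 4 + 2 + 4 + 2
```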
Note also that Tseitin formulas are easily refutable in proof systems different from resolution, regardless of high treewidth (Itsykson et al. 2020; Bonacina, Bonet, and Levy 2023).

Parity Problems

For a constraint $x_1 \oplus \dots \oplus x_n = c$, the number of clauses in the canonical representation is exponential in n. We can use the xor notation to build larger parity constraints if we include auxiliary variables called Tseitin variables:

Definition 4. Let σ be a permutation of n elements and $X = \{ x_i \mid 1 \le i \le n \}$ be an ordered set of literals. We define

$$\mathrm{Parity}(X, T, \sigma) = \mathrm{xor}(x_{\sigma(1)}, x_{\sigma(2)}, t_1) \wedge \bigwedge_{j=1}^{n-4} \mathrm{xor}(t_j, x_{\sigma(j+2)}, t_{j+1}) \wedge \mathrm{xor}(t_{n-3}, x_{\sigma(n-1)}, x_{\sigma(n)}),$$

where $T = \{ t_i \mid i \le n-3 \}$ are Tseitin variables.

Here Parity(X, T, σ) is satisfiable if and only if the total parity of X is 0. If we want a constraint that is satisfiable if and only if the parity is 1, we again simply flip a literal. In our particular Tseitin encoding, we structure the ⊕ linearly, so that the formula for n = 5, σ = id looks like $((((x_1 \oplus x_2) \oplus x_3) \oplus x_4) \oplus x_5)$. Here the formula depth is linear; however, the structure does not affect satisfiability, as ⊕ is associative. Furthermore, the actual permutation σ does not affect satisfiability, because ⊕ is commutative.

Problem 1: Reordered Parity

In our first problem, we simply take two parity constraints that are in contradiction. We simultaneously state that the variables of X have parity 0 and parity 1. This is obviously a contradiction. However, in order to make it difficult, we use two different permutations to obscure the conflict. We define

$$\mathrm{rPar}(n, \sigma) = \mathrm{Parity}(X, S, \mathrm{id}) \wedge \mathrm{Parity}(X', T, \sigma),$$

where $X = \{ x_i \mid 1 \le i \le n \}$, $X' = \{ x_i \mid 1 \le i < n \} \cup \{\bar{x}_n\}$, σ is a permutation of n elements, and id is the identity map. S, T, and X are disjoint sets of variables. Note that rPar(n, σ) is a Tseitin formula where each xor constraint corresponds to a vertex of the underlying graph. This is because every variable appears in exactly two xor constraints. For fixed n, its Tseitin graph depends only on σ, and its vertices all have charge 0, except for the vertex corresponding to $\mathrm{xor}(t_{n-3}, x_{\sigma(n-1)}, \bar{x}_{\sigma(n)})$.

Fact 1. rPar(n, σ) is an unsatisfiable Tseitin formula.

The version where the identity map is used for σ is the most natural, and it is easy to solve. However, the random version can still arise from equivalences (which are themselves xors) obfuscating the random parity. For example, consider a system of equations containing $x_1 \oplus x_2 \oplus x_3 \oplus x_4 = 0$ and $x_5 \oplus x_6 \oplus x_7 \oplus x_8 = 1$, together with binary clauses implying $x_5 \leftrightarrow x_4$, $x_6 \leftrightarrow x_1$, $x_7 \leftrightarrow x_2$ and $x_8 \leftrightarrow x_3$. A standard CNF encoding using the natural variable ordering $x_1 < x_2 < x_3 < x_4 < x_5 < x_6 < x_7 < x_8$ (as done in the Dubois benchmark family) yields a non-trivial random parity problem after removing the binary clauses, namely: $x_1 \oplus x_2 \oplus x_3 \oplus x_4 = 0$ encoded in CNF using the ordering $x_1 < x_2 < x_3 < x_4$, and $x_4 \oplus x_1 \oplus x_2 \oplus x_3 = 1$ encoded in CNF using the ordering $x_4 < x_1 < x_2 < x_3$.

Problem 2: Random Parity Addition

Problem 1 is a simple special case. In general, solvers and preprocessors want to deal with XOR-constraints by Gaussian elimination. In Gaussian elimination, we use multiple steps involving the addition of two parity constraints to get a third parity constraint. Since the parity constraints may have a large number of input variables, we would have to use Tseitin variables even in the third constraint. We model the difficulty of an addition step by taking the three parity constraints—the two summands and the negation of the sum—in conjunction, to get a contradiction.
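To make Definition 4 and the rPar construction concrete, here is a hedged sketch (our illustration, not the authors' generator) that emits rPar(n, σ) as clause lists, reusing `xor_clauses` from the Tseitin sketch above; the variable numbering for X, S and T is an assumption.

```python
import random

def parity(lits, ts, sigma):
    # Definition 4: Parity(X, T, sigma).  lits: DIMACS literals for X (a
    # negative entry is a flipped literal); ts: the n-3 Tseitin variables;
    # sigma: a 0-based permutation of range(n).  Requires n >= 4.
    n = len(lits)
    x = [lits[sigma[i]] for i in range(n)]
    cnf = xor_clauses([x[0], x[1], ts[0]])
    for j in range(1, n - 3):
        cnf += xor_clauses([ts[j - 1], x[j + 1], ts[j]])
    cnf += xor_clauses([ts[n - 4], x[n - 2], x[n - 1]])
    return cnf

def rpar(n, sigma):
    # rPar(n, sigma): variables 1..n are X, then n-3 variables for S and
    # n-3 for T; the last literal of X is flipped in the second constraint.
    X = list(range(1, n + 1))
    S = list(range(n + 1, 2 * n - 2))
    T = list(range(2 * n - 2, 3 * n - 5))
    X_flip = X[:-1] + [-X[-1]]
    return parity(X, S, list(range(n))) + parity(X_flip, T, sigma)

n = 8
sigma = list(range(n))
random.shuffle(sigma)
print(len(rpar(n, sigma)))   # 8 * (n - 2) = 48 width-3 clauses
```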
Let $a = \mathrm{Parity}(A, S, \sigma_a)$ and $b = \mathrm{Parity}(B, T, \sigma_b)$, where A and B are subsets of $X = \{ x_i \mid 1 \le i \le n \}$ and S and T are disjoint sets of Tseitin variables. We define

$$\mathrm{rAddPar}(a, b, \sigma_c) = \mathrm{Parity}(A, S, \sigma_a) \wedge \mathrm{Parity}(B, T, \sigma_b) \wedge \mathrm{Parity}(C, U, \sigma_c),$$

where U is disjoint from S, T and X. Here C is the symmetric difference of A and B, but with one literal flipped. In a first scenario, the variable ordering $\sigma_c$ for the sum constraint is independent of those used in a and b. This is modeled by fixing $\sigma_c$ to be the identity id. For convenience, we write $\mathrm{rAddPar}(a, b) = \mathrm{rAddPar}(a, b, \mathrm{id})$. A more clever approach is to choose $\sigma_c$ favorably for $\sigma_a$ and $\sigma_b$. We will make precise what we mean by "choosing $\sigma_c$ favorably" later in the paper. Again, $\mathrm{rAddPar}(a, b, \sigma_c)$ is a Tseitin formula, since every variable appears in two xor constraints: every variable appearing in both A and B does not appear in the third constraint, and every variable in the symmetric difference of A and B appears a second time in the third constraint. The Tseitin variables are disjoint, so they also appear in exactly two xors.

Fact 2. rAddPar(a, b, σc) is an unsatisfiable Tseitin formula.

The Graph Model and Lower Bounds

In this section, we present our lower bounds on the length of resolution refutations for rPar and rAddPar when they are constructed in a random fashion. The complete proofs of some results are deferred to the full version of the paper; the current paper should contain enough proof sketches and intuition for the reader to navigate through our results.

Lower Bounds for Reordered Parity

In the following, we denote the set $\{1, \dots, n\}$ by $[n]$ and we call $S_n$ the set of permutations of $[n]$. Consider 2n vertices labeled $1, \dots, n, 1', \dots, n'$, a permutation $\sigma \in S_n$, and let $G_\sigma$ be the graph over these vertices whose edge set is $\{ (i, i+1) \mid i < n \} \cup \{ (\sigma(i)', \sigma(i+1)') \mid i < n \} \cup \{ (i, i') \mid i \le n \}$. Let $G^*_\sigma$ be the multigraph obtained by contracting the edges $(i, i')$ for all $i \in [n]$. That is, $V(G^*_\sigma) = [n]$ and,
for every edge $(i, j)$, $(i', j)$, $(i, j')$ or $(i', j')$ in $E(G_\sigma)$, we add an edge $(i, j)$ to $E(G^*_\sigma)$.

Example 3. Let n = 5 and σ(1) = 3, σ(2) = 1, σ(3) = 5, σ(4) = 4, σ(5) = 2. [Drawing of $G_\sigma$: the path 1–2–3–4–5, the path 3′–1′–5′–4′–2′, and the rungs (i, i′); $G^*_\sigma$ is the resulting multigraph on the vertices 1, …, 5.]

The maximum degree of a vertex of $G_\sigma$ (resp. $G^*_\sigma$) is 3 (resp. 4). Since $G^*_\sigma$ is a minor of $G_\sigma$ after merging the parallel edges, we have that $\mathrm{tw}(G^*_\sigma) \le \mathrm{tw}(G_\sigma)$. We also have a bound in the other direction, which may be useful when $\mathrm{tw}(G_\sigma)$ is harder to compute than $\mathrm{tw}(G^*_\sigma)$ in practice.

Lemma 1. We have that $\frac{1}{2}\mathrm{tw}(G_\sigma) \le \mathrm{tw}(G^*_\sigma) \le \mathrm{tw}(G_\sigma)$.

The Tseitin graph of rPar(n, σ) is not exactly $G_\sigma$. The two graphs would be the same if we were to slightly modify the first and last constraints of Parity(X, S, id) and Parity(X′, T, σ) by replacing $\mathrm{xor}(x_1, x_2, s_1)$ by $\mathrm{xor}(x_1, \bar{s}_0) \wedge \mathrm{xor}(s_0, x_2, s_1)$, etc.

Lemma 2. The Tseitin graph of rPar(n, σ) is obtained by contracting four edges of $G_\sigma$.

Proof sketch. Contract the edges $(1, 2)$, $(n-1, n)$, $(\sigma(1)', \sigma(2)')$ and $(\sigma(n-1)', \sigma(n)')$ of $G_\sigma$.

Since an edge contraction can only decrease the treewidth by one, it follows that the treewidth of the Tseitin graph of rPar(n, σ) is at least $\mathrm{tw}(G_\sigma) - 4$. We then show that when σ is sampled uniformly at random from $S_n$, with high probability (i.e., with probability tending to 1 as n increases) both $G_\sigma$ and rPar(n, σ) have linear treewidth.

Lemma 3. There is a constant α > 0 such that $\Pr(\mathrm{tw}(G_\sigma) < \alpha n)$ vanishes to 0 as n increases when σ is chosen uniformly at random in $S_n$.

Proof. Kim and Wormald (2001) have studied the graph distribution $H_n \oplus H_n$, where each graph over n vertices is the superposition of two independent Hamiltonian cycles over these vertices (merging parallel edges). They call $G_{4,n}$ the uniform distribution over all 4-regular graphs over n vertices. They show that any sequence of events is true asymptotically almost surely (a.a.s.) in $H_n \oplus H_n$ if and only if it is true a.a.s. in $G_{4,n}$ (Kim and Wormald 2001, Theorem 2). The treewidth of a random 4-regular graph from $G_{4,n}$ is linear in n with high probability (Chandran and Subramanian 2003). On the one hand, Chandran and Subramanian (2003) have shown that the treewidth of a d-regular graph G is at least $\left\lfloor \frac{3n}{4} \cdot \frac{d - \lambda_2(G)}{3d - 2\lambda_2(G)} \right\rfloor - 1$, where $\lambda_2(G)$ is the second largest eigenvalue of the adjacency matrix of G. On the other hand, Friedman has shown that, for any fixed ε > 0 and any d ≥ 2, $|\lambda_2(G)| \le 2\sqrt{d-1} + \varepsilon$ holds with high probability when $G \in G_{d,n}$ (Friedman 2003, Corollary 1.4). (Note that our $G_{d,n}$ is Friedman's $K_{d,n}$.) The combination of the two results yields that, with high probability when $G \in G_{4,n}$, $\mathrm{tw}(G) \ge \Omega(n)$. Thus, with high probability, a random graph $G \in H_n \oplus H_n$ has linear treewidth. Now $G^*_\sigma$ is not the superposition of two independent Hamiltonian cycles but the superposition of two independent paths. But if we close both paths before superposition, then we obtain a graph in $H_n \oplus H_n$ whose treewidth is at least $\mathrm{tw}(G^*_\sigma) - 2$. So with high probability $\mathrm{tw}(G_\sigma) \ge \mathrm{tw}(G^*_\sigma) \ge \Omega(n)$ holds.

The graph of rPar(n, σ) has degree at most 4 and linear treewidth with high probability, so by Theorem 4 we immediately have that the formula is hard for resolution.

Theorem 1. There is a constant α > 0 such that, with probability tending to 1 as n increases, the length of a shortest resolution refutation of rPar(n, σ), where σ is chosen uniformly at random in $S_n$, is at least $2^{\alpha n}$.

Lower Bounds for Random Parity Addition

The symmetric difference of two subsets A and B of $X = \{x_1, \dots, x_n\}$ is denoted by $A \triangle B := (A \cup B) \setminus (A \cap B)$. Recall that for $a = \mathrm{Parity}(A, S, \sigma_a)$ and $b = \mathrm{Parity}(B, T, \sigma_b)$, the formula $\mathrm{rAddPar}(a, b) = a \wedge b \wedge \mathrm{Parity}(C, U, \mathrm{id})$ is a Tseitin formula (where $U \cap S = U \cap T = S \cap T = \emptyset$ and C is $A \triangle B$ with one literal flipped). Let us describe rAddPar(a, b)'s Tseitin graph. We call H the graph whose vertices are split into three sets $V = \{ i \mid x_i \in A \}$, $V' = \{ i' \mid x_i \in B \}$ and $V'' = \{ i'' \mid x_i \in A \triangle B \}$. The edge set of H contains $(i, i')$ for all $x_i \in A \cap B$, and $(i, i'')$ for all $x_i \in A \cap (A \triangle B)$, and $(i', i'')$ for all $x_i \in B \cap (A \triangle B)$. The vertices in V (resp. V′ and V′′) are also connected in a path following the order $\sigma_a$ (resp. $\sigma_b$ and id). H is not exactly the Tseitin graph of rAddPar, but it is close enough and easier to analyze.

Lemma 4. For $a = \mathrm{Parity}(A, S, \sigma_a)$ and $b = \mathrm{Parity}(B, T, \sigma_b)$, the Tseitin graph of rAddPar(a, b) is obtained by contracting six edges of the graph H.

At this point it is worth giving an example of such a graph H.

Example 4. Let n = 6, A = {x1, x2, x4, x5, x6} and B = {x1, x2, x3, x5}, so A△B = {x3, x4, x6}. The constraints encoded in CNF with Parity are $x_1 \oplus x_2 \oplus x_4 \oplus x_5 \oplus x_6 = 0$, $x_1 \oplus x_2 \oplus x_3 \oplus x_5 = 0$ and $x_3 \oplus x_4 \oplus x_6 = 1$. Let $\sigma_a(1) = 4$, $\sigma_a(2) = 5$, $\sigma_a(4) = 1$, $\sigma_a(5) = 6$, $\sigma_a(6) = 2$ and $\sigma_b(1) = 1$, $\sigma_b(2) = 5$, $\sigma_b(3) = 2$, $\sigma_b(5) = 3$. [Drawing of H: the path 4–5–1–6–2 on V, the path 1′–5′–2′–3′ on V′, the path 3′′–4′′–6′′ on V′′, plus the edges between the copies of shared variables.] The Tseitin graph of rAddPar(a, b) is the graph H above after contraction of the edges (4, 5), (6, 2), (1′, 5′), (2′, 3′), (3′′, 4′′) and (4′′, 6′′).
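Readers who want to observe the treewidth growth empirically can build $G_\sigma$ directly; the following hedged sketch (our code; networkx and its min-fill-in heuristic are assumptions, not the authors' tooling) reports a treewidth upper bound for a random σ.

```python
import random
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

def G_sigma(n, sigma):
    """G_sigma: the path 1..n, the path sigma(1)'..sigma(n)', and the rungs (i, i')."""
    g = nx.Graph()
    g.add_edges_from((i, i + 1) for i in range(1, n))                      # path on [n]
    g.add_edges_from((f"{sigma[i]}'", f"{sigma[i+1]}'") for i in range(n - 1))
    g.add_edges_from((i, f"{i}'") for i in range(1, n + 1))                # rungs
    return g

n = 60
sigma = list(range(1, n + 1))
random.shuffle(sigma)
width, _decomposition = treewidth_min_fill_in(G_sigma(n, sigma))
print("treewidth upper bound:", width)   # grows roughly linearly in n w.h.p.
```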
¹ Note that our Gd,n is Friedman's Kd,n.

When the constraints a and b are constructed in a random fashion, the hardness of rAddPar(a, b) for resolution stems from the hardness of rPar. One proves this by showing that, when the parameters of a and b are sampled uniformly at random, H is a random graph that, w.h.p., admits a graph Gσ as a minor, for σ a permutation over Ω(n) elements of X. When this minor exists (which is almost always the case), it also follows a uniform distribution, and thus its treewidth is in Ω(n) by Lemma 3. This then shows that tw(H) = Ω(n) holds w.h.p., and the lower bound for resolution follows.

Theorem 2. There is a constant α > 0 such that, with probability tending to 1 as n increases, when A, B, σa and σb are chosen independently and uniformly at random, the length of a shortest resolution refutation of rAddPar(a, b) is at least 2^{αn}.

Notice here that σa, σb and id are shuffled randomly relative to each other. This is the most chaotic scenario, which is likely to contribute to its difficulty. Let us instead briefly discuss an example that favors shorter proofs. We are given a and b in their random orders but, in an addition step, we may be the ones creating the encoding for the sum constraint, and so we can choose the permutation σc favorably by ensuring that σc(i) < σc(j) if and only if either:
• xi ∈ A \ B and xj ∈ B \ A,
• xi, xj ∈ A \ B and σa(i) < σa(j), or
• xi, xj ∈ B \ A and σb(i) < σb(j).
Even in this case, the encoding rAddPar(a, b, σc) = a ∧ b ∧ Parity(C, U, σc) will be hard w.h.p. when |A ∩ B| = Ω(n), since then we can find the minor Gσ evoked above by only looking in H[V ∪ V′]. The condition |A ∩ B| = Ω(n) is fulfilled almost surely when A and B are chosen uniformly.

Sorting and Upper Bounds

Philipp and Rebola-Pardo (2016) showed that XOR-reasoning such as in Gaussian elimination can have short proofs; a BDD approach can find polynomial-size extended resolution proofs (Sinz and Biere 2006). For these particular formulas we can do even better, reducing the complexity and the number of extra variables needed.

Lemma 5 (Chew and Heule 2020). Suppose we have a CNF F and two sets of XOR clauses xor(x, y, p) and xor(p, z, q), where variable p appears nowhere in F. We can infer F ∧ xor(y, z, p) ∧ xor(p, x, q) from F ∧ xor(x, y, p) ∧ xor(p, z, q) in 32 DRAT steps without adding new variables.

This provides the building blocks for the short proofs.

Lemma 6 (Chew and Heule 2020). Given two permutations σ1 and σ2, and disjoint sets of variables X, S and T, where neither the S variables nor the T variables appear in a CNF F, F ∧ Parity(X, S, σ1) can be transformed into F ∧ Parity(X, T, σ2) in O(n log n) many DRAT steps, where |X| = n.

Sketch Proof. 1. In O(n log n) applications of Lemma 5 we can take the linear structure of the Tseitin variables and reorganize it into a balanced binary tree, using a divide-and-conquer approach. 2. In O(n log n) applications of Lemma 5 we can make any permutation of the leaf edges: swapping any two leaves takes O(log n) applications of Lemma 5, and n−1 swaps are (in the worst case) required and sufficient to put every variable in its place. 3. In O(n log n) applications of Lemma 5 we can take our balanced binary tree and return it to a linear structure.

Theorem 3. For any parity constraints a, b over n input variables X, there are DRAT⁻ refutations of rAddPar(a, b) that have O(n log n) many lines.

Sketch Proof.
Using Lemma 6 we can rearrange the random orderings into an easy case where all variables appear in order, in O(n log n) steps. The remaining refutation is a linear-sized resolution refutation.

Experiments

We ran experiments to confirm that the reordered parity and random parity addition formulas are hard to refute for CDCL solvers, and that their hardness is largely explained by the treewidth of their Tseitin graphs. This is expected given the lower bounds of Theorems 1 and 2, but these results are asymptotic and probabilistic, and it is not certain that they apply to relatively small formulas encountered in practice. The experiments described here were performed on a cluster with Intel Xeon E5649 processors at 2.53 GHz running 64-bit Linux. An 8 GB memory limit and varying time limits were enforced with RUNSOLVER (Roussel 2011). The benchmarks are available online (Chew et al. 2023) at https://doi.org/10.5281/zenodo.10391790.

Problem 1: Reordered Parity

We generated a benchmark set of rPar(n, σ) for n = 50. Experiments for increasing values of n were done by Chew and Heule (2020). To get formulas of varying treewidth, the permutations σ were constructed in several ways: (a) 5 permutations were drawn from a uniform distribution; (b) 30 permutations were obtained from a stochastic process following a Mallows distribution, whose parameter controls the likelihood of inversions (Mallows 1957); (c) 30 permutations come from a sequence of random adjacent swaps, with a varying number of swaps; (d) 15 permutations were constructed by a sequence of random adjacent swaps until an element is a set distance away from its position in the original order.

An ideal construction would have allowed us to uniformly sample graphs Gσ for a fixed n, with treewidth lying in a fixed range. But we know of no such construction that is also efficient. Hence the last three constructions listed above, whose parameters intuitively give us some control on the treewidth. The drawback is that the graphs are not sampled uniformly at random, as in the theoretical results.

The resulting Tseitin graphs have 100 vertices each, so determining their treewidth is challenging. We obtained upper and lower bounds using tools by Tamaki (2022), available at https://github.com/twalgor/tw. Within a time limit of 3600 seconds, the treewidth of only 30 graphs could be computed exactly. For another 30 graphs, no non-trivial lower bound was returned. To get lower bounds for such graphs Gσ, we determined the treewidth of the graph G∗σ obtained by contracting the edges (i, σ(i)) for i ∈ [n]. Lemma 1 shows that tw(G∗σ) ≤ tw(Gσ) ≤ 2tw(G∗σ), and since the graphs G∗σ only contain half as many vertices, matching upper and lower bounds could be computed for all except two instances (curiously, these were graphs for which the treewidths of the original graphs could be determined).

We ran the CDCL solver CaDiCaL (Biere et al. 2020) on each reordered parity formula with default settings and a timeout of 3600 seconds. CaDiCaL generates Reverse Unit Propagation (RUP) proofs (Gelder 2008; Heule, Jr., and Wetzler 2013), which can be converted to resolution proofs with a quadratic overhead (Goldberg and Novikov 2003).

[Figure 1: Treewidth (x-axis) vs. solving time in seconds (y-axis) for reordered parity (left) and random parity addition (right).]
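For concreteness, here is a minimal sketch of a generator for such benchmarks (helper names are ours; the paper's exact clause-level construction, which flips one literal rather than asserting parities 0 and 1, is equivalent). Each Parity constraint is the standard chained xor encoding with fresh Tseitin variables:

```python
import random
from itertools import product

def xor_eq(lits, rhs):
    """CNF clauses asserting that the XOR of the given literals equals rhs:
    one clause per forbidden sign pattern (2^(k-1) clauses for k literals)."""
    cls = []
    for signs in product([1, -1], repeat=len(lits)):
        if signs.count(-1) % 2 != rhs:   # odd #negations encode xor = 0, even xor = 1
            cls.append([s * l for s, l in zip(signs, lits)])
    return cls

def parity_chain(xs, rhs, fresh):
    """Chain xor constraints over xs (already in the desired order) through
    fresh Tseitin variables, asserting that the total parity equals rhs."""
    cls, acc = [], xs[0]
    for x in xs[1:]:
        s = fresh()
        cls += xor_eq([acc, x, s], 0)    # s <-> acc xor x
        acc = s
    cls.append([acc] if rhs == 1 else [-acc])
    return cls

def rpar(n, sigma):
    """rPar(n, sigma)-style formula: the same sum taken in order id and in
    order sigma with contradictory right-hand sides, hence unsatisfiable."""
    counter = [n]
    def fresh():
        counter[0] += 1
        return counter[0]
    xs = list(range(1, n + 1))
    return parity_chain(xs, 0, fresh) + parity_chain([sigma[i] for i in xs], 1, fresh)

sigma = dict(zip(range(1, 51), random.sample(range(1, 51), 50)))
print(len(rpar(50, sigma)), "clauses")   # two chains of 49 xor gates plus 2 units
```

Writing the clause list to DIMACS and running CaDiCaL on permutations of varying "sortedness" reproduces the qualitative treewidth/runtime trend described above.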
Figure 1 (left) plots the solver's running time against the best upper bounds on the treewidth for each instance. The regression line clearly shows that the running time grows exponentially with the treewidth. By contrast, CryptoMiniSat (Soos, Nohl, and Castelluccia 2009), a solver that is capable of reasoning with XORs using Gaussian elimination, was able to solve all instances within a few seconds.

Problem 2: Random Parity Additions

We generated a benchmark set consisting of 95 random parity addition formulas rAddPar(a, b, σc). To get formulas of varying treewidth, we altered the value p, which dictates the probability of each input variable being included in a, and likewise for b. We chose p in increments of 0.05 from 0.05 up to 0.95. We scaled the number of variables we drew from based on p to keep the expected number of clauses the same; in our sample, the number of clauses ended up being between 360 and 536. σc was chosen to be favorable to solving. Figure 1 (right) plots the solver's running time against the best upper bounds on the treewidth for each instance. Once again we see that running time grows exponentially with the treewidth.

Conclusion

We present both theoretical and experimental evidence that treewidth explains the hardness of reordered parity and random parity additions for CDCL/resolution. Chew and Heule (2020) left the DRAT⁻ upper bound for rPar without a matching proof-length lower bound for resolution. We have now provided that. In particular, noticing that the instances were Tseitin formulas, we were driven to study the treewidth of their underlying graphs, and we have shown that it is, with high probability, linear in the number of variables. And although the relationship between resolution refutations of Tseitin formulas and the graph's treewidth is not fully understood yet, results do exist for linear treewidth that are enough for us to prove exponential lower bounds on the length of the resolution proofs. Previous experiments showed the exponential increase in CaDiCaL proof size as the number of variables increases (Chew and Heule 2020); in this paper we show the same exponential increase (but in solving time), with the number of variables and clauses controlled and the treewidth varied instead. We generalize this further to rAddPar, which draws its motivation from Gaussian elimination. rAddPar provides yet another example of a hard Tseitin formula, and its hardness is confirmed both theoretically and experimentally. Again, treewidth is the important factor in determining its hardness. In both the rPar and the rAddPar case we can draw the conclusion that the variable order matters. Just as in rPar, we can show that we have short DRAT⁻ proofs for rAddPar. This will be useful for verification, with a hope that it may be generalized to Gaussian elimination.

In future work, it would be interesting to explore BDD techniques for dealing with XOR constraints, in a similar manner to our exploration of CDCL here. On reordered parity, EBDDRES (Sinz and Biere 2006) can perform even more poorly than CaDiCaL, but recent work (Bryant, Biere, and Heule 2022) shows that BDD solvers can perform even better than Chew and Heule's sorting tool, so the overall picture may be more complicated.

Acknowledgments

The authors acknowledge the support from the FWF (P36420, ESP 197, ESP 235) and the WWTF (ICT19-060, ICT19-065).

References

Alekhnovich, M.; and Razborov, A. A. 2011.
Satisfiability, Branch-Width and Tseitin tautologies. Comput. Complex., 20(4): 649–678. Ben-Sasson, E.; and Wigderson, A. 2001. Short proofs are narrow - resolution made simple. Journal of the ACM, 48(2): 149–169. Biere, A.; Fazekas, K.; Fleury, M.; and Heisinger, M. 2020. CaDiCaL, Kissat, Paracooba, Plingeling and Treengeling Entering the SAT Competition 2020. In Balyo, T.; Froleyks, N.; Heule, M.; Iser, M.; J¨arvisalo, M.; and Suda, M., eds., Proc. of SAT Competition 2020 – Solver and Benchmark Descriptions, volume B-2020-1 of Department of Computer Science Report Series B, 51–53. University of Helsinki. Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds. 2021. Handbook of Satisfiability - Second Edition, volume 336 of Frontiers in Artificial Intelligence and Applications. IOS Press. ISBN 978-1-64368-160-3. Bodlaender, H. L. 1998. A Partial k-Arboretum of Graphs with Bounded Treewidth. Theor. Comput. Sci., 209(1-2): 1– 45. Bonacina, I.; Bonet, M. L.; and Levy, J. 2023. Polynomial Calculus for MaxSAT. In Mahajan, M.; and Slivovsky, F., eds., 26th International Conference on Theory and Applications of Satisfiability Testing, SAT 2023, July 4-8, 2023, Alghero, Italy, volume 271 of LIPIcs, 5:1–5:17. Schloss Dagstuhl - Leibniz-Zentrum f¨ur Informatik. Bryant, R. E.; Biere, A.; and Heule, M. J. H. 2022. Clausal Proofs for Pseudo-Boolean Reasoning. In Fisman, D.; and Rosu, G., eds., Tools and Algorithms for the Construction and Analysis of Systems, 443–461. Cham: Springer International Publishing. ISBN 978-3-030-99524-9. Buss, S.; and Thapen, N. 2019. DRAT proofs, propagation redundancy, and extended resolution. In International Conference on Theory and Applications of Satisfiability Testing, 71–89. Springer. Chandran, L. S.; and Subramanian, C. R. 2003. A spectral lower bound for the treewidth of a graph and its consequences. Inf. Process. Lett., 87(4): 195–200. Chew, L.; de Colnet, A.; Slivovsky, F.; and Szeider, S. 2023. Dataset of Random Reordered Encodings of Parity Problems. Chew, L.; and Heule, M. J. H. 2020. Sorting Parity Encodings by Reusing Variables. In Pulina, L.; and Seidl, M., eds., Theory and Applications of Satisfiability Testing - SAT 2020 - 23rd International Conference, Alghero, Italy, July 3-10, 2020, Proceedings, volume 12178 of Lecture Notes in Computer Science, 1–10. Springer. Fichte, J. K.; Berre, D. L.; Hecher, M.; and Szeider, S. 2023. The Silent (R)evolution of SAT. Communications of the ACM, 66(6): 64–72. Friedman, J. 2003. A proof of Alon’s second eigenvalue conjecture. In Larmore, L. L.; and Goemans, M. X., eds., Proceedings of the 35th Annual ACM Symposium on Theory of Computing, June 9-11, 2003, San Diego, CA, USA, 720– 724. ACM. Galesi, N.; Talebanfard, N.; and Tor´an, J. 2020. CopsRobber Games and the Resolution of Tseitin Formulas. ACM Trans. Comput. Theory, 12(2): 9:1–9:22. Gelder, A. V. 2008. Verifying RUP Proofs of Propositional Unsatisfiability. In International Symposium on Artificial Intelligence and Mathematics, ISAIM 2008, Fort Lauderdale, Florida, USA, January 2-4, 2008. Goldberg, E. I.; and Novikov, Y. 2003. Verification of Proofs of Unsatisfiability for CNF Formulas. In 2003 Design, Automation and Test in Europe Conference and Exposition (DATE 2003), 3-7 March 2003, Munich, Germany, 10886– 10891. IEEE Computer Society. Han, C.-S.; and Jiang, J.-H. R. 2012. When Boolean Satisfiability Meets Gaussian Elimination in a Simplex Way. In Madhusudan, P.; and Seshia, S. A., eds., Computer Aided Verification, 410–426. 
Berlin, Heidelberg: Springer Berlin Heidelberg. Heule, M.; Jr., W. A. H.; and Wetzler, N. 2013. Trimming while checking clausal proofs. In Formal Methods in Computer-Aided Design, FMCAD 2013, Portland, OR, USA, October 20-23, 2013, 181–188. IEEE. Itsykson, D.; Knop, A.; Romashchenko, A. E.; and Sokolov, D. 2020. On OBDD-based Algorithms and Proof Systems that Dynamically Change the order of Variables. J. Symb. Log., 85(2): 632–670. Itsykson, D.; Riazanov, A.; and Smirnov, P. 2022. Tight Bounds for Tseitin Formulas. In Meel, K. S.; and Strichman, O., eds., 25th International Conference on Theory and Applications of Satisfiability Testing, SAT 2022, August 25, 2022, Haifa, Israel, volume 236 of LIPIcs, 6:1–6:21. Schloss Dagstuhl - Leibniz-Zentrum f¨ur Informatik. J¨arvisalo, M.; Heule, M. J. H.; and Biere, A. 2012. Inprocessing Rules. In Gramlich, B.; Miller, D.; and Sattler, U., eds., Automated Reasoning, 355–370. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-31365-3. Kim, J. H.; and Wormald, N. C. 2001. Random Matchings Which Induce Hamilton Cycles and Hamiltonian Decompositions of Random Regular Graphs. J. Comb. Theory, Ser. B, 81(1): 20–44. Mallows, C. L. 1957. Non-null ranking models. I. Biometrika, 44(1/2): 114–130. Marques-Silva, J. P.; Lynce, I.; and Malik, S. 2009. ConflictDriven Clause Learning SAT Solvers. In Handbook of Satisfiability. IOS Press. Philipp, T.; and Rebola-Pardo, A. 2016. DRAT Proofs for XOR Reasoning. In Michael, L.; and Kakas, A., eds., Logics in Artificial Intelligence, 415–429. Cham: Springer International Publishing. Pipatsrisawat, K.; and Darwiche, A. 2011. On the power of clause-learning SAT solvers as resolution engines. Artificial Intelligence, 175(2): 512 – 525. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7985 Robinson, J. A. 1963. Theorem-Proving on the Computer. Journal of the ACM, 10(2): 163–174. Roussel, O. 2011. Controlling a solver execution with the runsolver tool. Journal on Satisfiability, Boolean Modeling and Computation, 7(4): 139–144. Sinz, C.; and Biere, A. 2006. Extended Resolution Proofs for Conjoining BDDs. In Grigoriev, D.; Harrison, J.; and Hirsch, E. A., eds., Computer Science – Theory and Applications, 600–611. Berlin, Heidelberg: Springer Berlin Heidelberg. Soos, M. 2012. Enhanced Gaussian Elimination in DPLLbased SAT Solvers. In Berre, D. L., ed., POS-10. Pragmatics of SAT, volume 8 of EPiC Series in Computing, 2–14. EasyChair. Soos, M.; Nohl, K.; and Castelluccia, C. 2009. Extending SAT Solvers to Cryptographic Problems. In Kullmann, O., ed., Theory and Applications of Satisfiability Testing - SAT 2009, 244–257. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-02777-2. Tamaki, H. 2022. Heuristic Computation of Exact Treewidth. In Schulz, C.; and Uc¸ar, B., eds., 20th International Symposium on Experimental Algorithms, SEA 2022, July 25-27, 2022, Heidelberg, Germany, volume 233 of LIPIcs, 17:1–17:16. Schloss Dagstuhl - Leibniz-Zentrum f¨ur Informatik. Tseitin, G. 1968. On the Complexity of Derivation in Propositional Calculus. Studies in Constructive Mathematics and Mathematical Logic, Part 2: 115–125. Tseitin, G. S. 1983. On the Complexity of Derivation in Propositional Calculus, 466–483. Springer Berlin Heidelberg. ISBN 978-3-642-81955-1. Urquhart, A. 1987. Hard Examples for Resolution. J. ACM, 34(1): 209–219. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7986
2024
887
18,725
Percentile Risk-Constrained Budget Pacing for Guaranteed Display Advertising in Online Optimization

Liang Dai1, Kejie Lyu1, Chengcheng Zhang1, Guangming Zhao2, Zhonglin Zu1, Liang Wang2, Bo Zheng2
1Alibaba Group, Hangzhou, China 2Alibaba Group, Beijing, China
{dailiang.dl, lvkejie.lkj, xuelun.zcc, lambert.zgm, zhonglin.zuzl, liangbo.wl, bozheng}@alibaba-inc.com

Abstract

Guaranteed display (GD) advertising is a critical component of advertising since it provides publishers with stable revenue and enables advertisers to target specific audiences with guaranteed impressions. However, smooth pacing control for online ad delivery presents a challenge due to significant budget disparities, user arrival distribution drift, and dynamic change between supply and demand. This paper presents robust risk-constrained pacing (RCPacing) that utilizes Lagrangian dual multipliers to fine-tune probabilistic throttling through monotonic mapping functions within the percentile space of the impression performance distribution. RCPacing combines distribution drift resilience and compatibility with the guaranteed allocation mechanism, enabling us to provide near-optimal online services. We also show that RCPacing achieves O(√T) dynamic regret, where T is the length of the horizon. RCPacing's effectiveness is validated through offline evaluations and online A/B testing conducted on the Taobao brand advertising platform.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

According to a report by the Internet Advertising Bureau, online display advertising generated a remarkable revenue of $63.5 billion in 2022, demonstrating a substantial year-over-year increase of 12.0% (IAB 2023). Ad exposures in display markets are sold through both guaranteed and non-guaranteed (like real-time bidding or RTB) selling channels. Within the guaranteed display (GD) selling channel, an advertiser (demand side) and publisher (supply side) negotiate a fixed price (cost-per-mille or CPM) for ad placement, including details such as when, where, and how the ad campaigns will be displayed. These contractual arrangements guarantee the delivery of a specified number of impressions that meet specific targeting criteria during a specified period.

In addition to contractual agreements, advertisers usually expect their ad campaigns to be delivered smoothly and steadily during the purchased period for various reasons, including making campaign performance as good as possible, reaching a wider audience, increasing the display ratio of the target audience, and maintaining stable online viewership for live streaming events. However, smooth and robust pacing control for hundreds or thousands of GD advertisements on a brand advertising platform that deals with billions of daily requests is a challenging task. To summarize, the main challenges are as follows:
• Significant differences among campaigns: the guaranteed daily impressions range from thousands to millions, and the targeted audience sizes also vary greatly. Moreover, different campaigns have different optimization goals, such as click-through rate (CTR) or conversion rate (CVR).
• Drastic changes in traffic environment: these changes include significant fluctuations in overall traffic, dynamic shifts in the distribution of user arrival over time, and the impact of other campaigns going online or offline.
The existing smooth pacing techniques have primarily focused on RTB ads (Nuara et al. 2022; Liu et al.
2020), which is incompatible with GD allocation. Although some research has considered the smoothness or representativeness in online optimal allocation of GD ads, it is often not optimized and evaluated as a separate key metric. In this paper, we consider smooth pacing for GD ads from the perspective of a publisher. Our contributions can be summarized as follows: • We introduce a novel framework called RCPacing, which employs Lagrangian dual multipliers to adjust probabilistic throttling based on monotonic functions within the percentile space, allowing us to effectively manage risk and ensure optimal ad delivery performance. • We also show that RCPacing attains regret of order O( √ T) when the length of the horizon T and the initial number of resources are scaled proportionally. • As there exists a tradeoff between smooth and optimal allocation in online matching problems, RCPacing offers flexible control over this balance. • We implement RCPacing in our online display advertising system and conduct extensive online/offline experimental evaluations. The results demonstrate that RCPacing is highly effective in improving both the performance and smoothness of online delivery for GD campaigns. Related Work In the past few years, the allocation of GD advertising has received significant attention from researchers (Wu et al. 2021; Wang et al. 2022). It is typically modeled as an online The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7987 matching problem, intending to achieve the maximum match between impressions and contracts(Chen et al. 2012). While the primary objective is to provide each advertiser with a predetermined number of display opportunities, it is also necessary to consider the smoothness of budget consumption. Some researchers include a fixed representative term in objectives (Fang et al. 2019; Dai et al. 2023; Bharadwaj et al. 2012), which aims to minimize the deviation between the allocation probability and its corresponding supply-demand ratio for each contract. However, the representative term is fixed without consideration of dynamic adjustment. Another research direction is to achieve budget pacing through feedback control, which can be further categorized into bid modification (Mehta et al. 2007; Zhou et al. 2021) and probabilistic throttling (Agarwal et al. 2014; Xu et al. 2015; Lee, Jalali, and Dasdan 2013). Bid modification influences the budget spending of an ad by adjusting its bidding price. Mehta et al. (Mehta et al. 2007) modify the bid by multiplying it with a value that reflects the proportion of unused budget, and the impression would be allocated to the ad with the highest modified bid. Balseiro et al. (Balseiro, Lu, and Mirrokni 2020) and Zhou et al. (Zhou et al. 2021) utilize a dual multiplier to form a virtual bid, which is consistently updated based on the variance between the actual and expected budget consumption. These methods adhere to the same principle of decreasing an ad’s bid when the budget is being spent too rapidly. However, both the dramatic change in the bid win-rate curve and bid landscape make it challenging to control the budget through bid modification. On the other hand, probabilistic throttling methods decouple spending control from bid calculation, they directly impact the participating likelihood based on its budget consumption speed. Agarwal et al. (Agarwal et al. 
2014) set a global pass-through rate (PTR), which is decreased when the budget consumption speed exceeds the expectation, and increased when the consumption speed falls below. Although this method demonstrates good budget control capability, it heavily relies on the accuracy of traffic forecasting. To further consider performance optimization while achieving budget control, Xu et al. (Xu et al. 2015) group requests with similar response rates (e.g. CTR) together and share a PTR among them. When the PTRs need to be adjusted, the average response rate of each group determines the priority of that group. While effective in budget control, relying solely on PTR regulation is insufficient to ensure guaranteed display for GD allocation. Preliminaries Problem Formulation We formulate the GD allocation problem as the following optimization problem: max x(t)∈X T −1 X t=0 ft(x(t)) = T −1 X t=0 v(t)⊤x(t) s.t. T −1 X t=0 x(t) ≤B (1) where x(t) ∈X ⊆RM is the one-hot decision vector at time t ∈[1, T], M is the total number of campaigns and the impression arrived at time t would be allocated to the jth campaign if the j-th component of x(t) is 1, v(t) ∈RM denotes the impression quality between the impression and the campaigns, ft x(t) = v(t)⊤x(t) ∈R is the revenue obtained at time t, B ∈RM is the positive budget vector which represents the campaign budgets. Following Balseiro et al (Balseiro, Lu, and Mirrokni 2020)., we define the offline dual problem as min α≥0 D (α) = n X i=1 pif ∗ i (α) + α⊤ρ = n X i=1 pi max x∈X n v(i)⊤x −α⊤x o + α⊤ρ (2) where f ∗ i (α) := maxx∈X  fi (x) −α⊤x is the conjugate function of fi (x) (restricted in X ), pi is the probability that the i-th impression has a quality vector of v(i) , n is the total number of impressions, ρ = B/T is the average budget for each time period, α is the dual variable, and the j-th element of α (denoted as αj ) reflects the additional revenue generated by allowing one unit of resources to be added to the j-th campaign’s budget. Dual Mirror Descent Algorithm Our method is built upon the Dual Mirror Descent (DMD) algorithm (Balseiro, Lu, and Mirrokni 2020), which addresses the general online allocation problem with budget constraint. At time t, DMD filters out campaigns that have exhausted their budget and assigns the request to the campaign that offers the highest premium among the remaining campaigns (equation 3). The dual variable is then updated according to the online mirror descent (equation 6). More details about DMD is given in Algorithm 1. Motivation Assumptions In this paper, we adopt the following assumptions: • The small bids assumption. Each impression has only one slot available for displaying ads, which is significantly lower than the demand for campaigns and the supply of publishers (Mehta 2012). • The known IID assumption. The known Independent and Identically Distributed (IID) assumption implies that impressions arrive online according to a known probability distribution with repetition (Huang and Shu 2021), which is a realistic assumption in our problem. Motivation Upon receiving a user’s online request, the ad engine retrieves GD campaigns with recall rate (RR) that meet the targeting criteria and employs real-time prediction models such as deep neural networks (DNN) to estimate performance scores for each campaign (Zhou et al. 2019; Gao et al. 2021). 
The decision-maker then determines whether The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7988 Algorithm 1: Dual Mirror Descent Algorithm Require: Time period T, remaining resources B(0) = Tρ, reference function h (·) : RM →R, and step-size η. 1: α(0) = 0 2: for t = 0 to T −1 do 3: Receive v(t) ∼P 4: Make the decision ˜x(t) and update the remaining resources B(t+1), where ˜x(t) j =      1, if j = arg max B(t) j ≥1 n v(t) j −α(t) j o 0, otherwise. (3) B(t+1) = B(t) −x(t) (4) 5: Obtain a stochastic sub-gradient of D α(t) ˜g(t) := −˜x(t) + ρ (5) 6: Update the dual variable by mirror descent α(t+1) = arg min α≥0 D ˜g(t), α E + 1 η Vh  α, α(t) where Vh  α, α(t) = h (α) −h  α(t) − D ∇h (α) , α −α(t)E (6) 7: end for and which campaign to display based on the scores. The detailed processing flow is illustrated in figure1. The campaign j passes the pacing control module with the pass-through rate (PTR) before the calculation of the “price premium”. It will only win out if it has the highest positive price compared to all other campaigns. The positive ratio of price and the win-out ratio are referred to as the participation ratio (PR) and win rate (WR). Without loss of generality, we use CTR as performance score in the following paragraph. The cost for campaign j can be denoted as: E [Costj] = E [RRj] X PTRjPRjWRj (7) It is worth noting that the primary risk GD campaigns is over-spend of the budget because it is irreversible once the budget is over-spent. Apart from PTR, the delivery of GD campaign is primarily determined by PR and WR. Let’s consider two GD campaigns Ad1 and Ad2, with identical impression budgets, performance distributions, and similar competitive environments, but with different supply amounts (Ad1 > Ad2). Premium vij - αj Pacing Rate (PTR) Performance Prediction Participation Rate (PR) DNN models filter non-positive Win Rate (WR) filter non-top1 return probabilistic throttling Recall Rate (RR) request retrieve Figure 1: Online process for GD campaign j. 0 2 4 Density 1.0% of PR Beta(2, 8), x=0.54 10.0% of PR Beta(2, 8), x=0.37 0 2 4 Density 4.2% of PR shift left by 0.1 Beta(2, 8), x=0.44 25.8% of PR shift left by 0.1 Beta(2, 8), x=0.27 0.0 0.2 0.4 0.6 0 2 4 Density 3.8% of PR Beta(2, 6), x=0.54 0.0 0.2 0.4 0.6 20.4% of PR Beta(2, 6), x=0.37 Figure 2: Different changes of PR when adjusting for dual variables or distribution drift with the same magnitude. When the online allocation reaches a stable state, the dual variable of Ad1 is located at a higher percentile than Ad2 in performance distribution. • Risk analysis under stable conditions: A higher percentile indicates a greater potential available traffic for Ad1, which makes it more vulnerable to over-spending. Moreover, Ad1 is more challenging to initialize dual variables because a higher percentile implies higher uncertainty especially before the start of delivery. • Risk analysis under dynamic conditions: Higher percentile results in a smaller bid price, making the campaign more susceptible to over-acceleration if other campaigns suddenly go offline. Moreover, as shown in the figure 2, the PR of Ad1 generates greater fluctuations if the dual variable shifts the same distance, and is more sensitive to distribution drift in user arrival or switching of online prediction models, which can be deduced by the proof of Theorem 2 and Theorem 3 in appendix1. 
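Before moving to RCPacing itself, a compact Python sketch of the baseline DMD update (Algorithm 1 above) helps fix notation. We use the Euclidean reference function h(α) = α², for which, as the paper notes later in eq. (27), the mirror step of eq. (6) reduces to a projected subgradient step; variable names are ours.

```python
import numpy as np

def dmd(V, B, eta):
    """Dual Mirror Descent sketch. V: T x M impression-quality matrix; B: budgets."""
    T, M = V.shape
    alpha = np.zeros(M)                          # dual variables, alpha >= 0
    remaining = B.astype(float)
    rho = B / T                                  # average per-period budget
    X = np.zeros((T, M), dtype=int)
    for t in range(T):
        bid = V[t] - alpha
        bid[remaining < 1] = -np.inf             # exclude exhausted campaigns (eq. 3)
        j = int(np.argmax(bid))
        x = np.zeros(M)
        if np.isfinite(bid[j]):                  # some campaign still has budget
            x[j], X[t, j] = 1, 1
            remaining[j] -= 1                    # eq. (4)
        g = -x + rho                             # stochastic subgradient (eq. 5)
        alpha = np.maximum(0.0, alpha - eta * g) # mirror step for h(a) = a^2 (eq. 6)
    return X

rng = np.random.default_rng(0)
X = dmd(rng.random((1000, 5)), np.full(5, 150), eta=0.05)
print(X.sum(axis=0))                             # per-campaign impressions <= budgets
```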
Based on the above risk analysis, RCPacing is designed to adjust the dual variables in the dual percentile space, while constraining dual variables within the low-risk region through the pacing module using probabilistic throttling method. Risk-Constrained Pacing Algorithm The factor dependency of RCPacing is illustrated in figure 3. The dual variables and PTRs are adjusted in the dual percentile space of performance distributions. These two factors jointly determine the final win-out of each request. Although RCPacing adjusts the dual variables in percentile space rather than dual space, the Theorem 1 in appendix shows that it attains regret of order O( √ T) when the length of the horizon T and the initial number of resources are scaled proportionally. Parametric Percentile Transformation Forward Transformation RCPacing converts the CTR into the percentile space, which is called forward trans1https://arxiv.org/abs/2312.06174. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7989 Request Ads Impression Performance Bid Pacing Rate Win Out feedback control Budget Spending Percentile Space  Dual Variables Figure 3: Factor dependency graph of RCPacing. formation, to assess the non-smooth risk and standardize the range of dual variables. Specifically, the CTR is first subjected to statistical Box-Cox transformation to achieve a normal shape, after which it is converted into the percentile space using the normal cumulative distribution function Φ(x). The parameter λ∗ j of campaign j can be estimated from global or campaign’s historical logs using the maximum-likelihood method (Sakia 1992): λ∗ j = argmax λj MLE (λj, vij) (8) And Box-Cox transformation can be denoted as: vboxc ij = BoxCox(λ∗ j, vij) =    v λ∗ j ij −1 λ∗ j if λ∗ j ̸= 0 ln (vij) if λ∗ j = 0 (9) The mean µj and standard deviation σj can be estimated: µj = E(vboxc ij ), σj = q E  (vboxc ij −µj)2 (10) To improve the robustness of drifts in the user arrival distribution, RCPacing skews the transformation towards the middle percentile region by a factor ϵ: ¯vij = Φ BoxCox(λ∗ j, vij) −µj σj + ϵσj  , where ϵ ≥0 (11) Backward Transformation RCPacing periodically updates the dual variables in the percentile space through feedback and then performs a backward transformation of the percentile variables ¯αj into the original dual space αj for online service. It guarantees that RCPacing approaches the optimal solution in the original space rather than percentile space. Here is the backward process: αj = BoxCox−1 λ∗ j, µj + Φ−1(¯αj) ∗(σj + ϵσj)  (12) Pacing Rate Factor Decoupling The pacing rate serves multiple functions in RCPacing, including constraining the percentile of dual variables within the safety region and addressing unexpected environmental changes and cold-start problem. RCPacing decouples the pacing rate into different factors to achieve optimal performance, for the campaign j retrieved in request i: PTRij = PTRbase j · fp (¯αj) · fv (¯αj, ¯vij) (13) 2 1 0 1 2 0 1 2 Density Beta(2,6) Box-Cox Standard Normal Percentile Space 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 Density = 0.25 = 0.5 = 1.0 Figure 4: The transformation process from beta distribution to percentile uniform distribution and the different skewness of the distribution under different ϵ. where PTRbase j is the basic statistical PTR, f(·) and fv(·) are the fine-tune factors. 
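A minimal sketch of the forward (eqs. 8–11) and backward (eq. 12) transformations just described, assuming scipy and numpy (class and method names are ours):

```python
import numpy as np
from scipy import stats
from scipy.special import ndtr, ndtri, inv_boxcox  # normal CDF, its inverse, Box-Cox inverse

class PercentileTransform:
    def fit(self, ctrs, eps=0.1):
        z, self.lam = stats.boxcox(ctrs)         # MLE of lambda* (eq. 8) applied via eq. (9)
        self.mu, self.sigma = z.mean(), z.std()  # eq. (10)
        self.eps = eps                           # skew factor epsilon
        return self

    def forward(self, v):                        # eq. (11): CTR -> percentile space
        z = stats.boxcox(v, lmbda=self.lam)
        return ndtr((z - self.mu) / (self.sigma + self.eps * self.sigma))

    def backward(self, p):                       # eq. (12): percentile -> dual space
        z = self.mu + ndtri(p) * (self.sigma + self.eps * self.sigma)
        return inv_boxcox(z, self.lam)

ctrs = np.random.default_rng(1).beta(2, 8, size=10_000)  # synthetic CTR samples
tf = PercentileTransform().fit(ctrs)
p = tf.forward(ctrs[:3])
print(p, tf.backward(p))                         # the round trip recovers the CTRs
```

The skew factor ε > 0 enters only through the widened scale σ + εσ, which pushes percentiles toward the middle region and makes the mapping less sensitive to drift in the distribution tails.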
Given a safe upper bound of percentile threshold Pub (such as 90%), the expected PTR can be calculated based on its targeted audience TAj without considering the competition from other campaigns: PTRexp j = Bj (1.0 −Pub)TAj (14) The initial value of ¯αj can be expressed as: ¯α(0) j = ( Pub, if PTRexp j ≤1 1 −(1 −Pub)PTRexp j , otherwise. (15) Given the global hyper-parameter WRglb (such as 0.2), the basic PTR considering the competition of WR can be expressed as: PTRbase j = min  1.0, PTRexp j /WRglb (16) During the dynamic update in RCPacing, PTRj should be gradually increased to enhance traffic supply if ¯αj < Pub. Conversely, it should be quickly decayed to reduce the nonsmooth risk. It is illustrated in equation 17 and figure 5: fp (¯αj) = ( 50(Pub−¯αj)/Pub, if ¯αj ≤Pub 0.2(Pub−¯αj)/(Pub−1), otherwise. (17) Taking inspiration from smart pacing, RCPacing assigns a higher PTR to traffic with higher performance scores. Instead of employing discrete layered pacing, RCPacing utilizes linear functions to achieve non-uniform pacing: fv (¯αj, ¯vij) = 10(¯vij −¯αj) + 1 (18) Emergence Control and Cold Start Problem Despite RCPacing’s adaptive adjustment of the PTR, it cannot completely mitigate the risks of non-smooth delivery caused by unpredictable factors, such as sharp increases in The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7990 0.0 0.2 0.4 0.6 0.8 1.0 Dual variable in percentile space 0 5 fp safe upper bound 0.0 0.2 0.4 0.6 0.8 1.0 CTR in percentile space 0.0 2.5 5.0 fv = 0.5 = 0.7 = 0.9 Figure 5: Functions of fp and fv in percentile space. user traffic, significant distribution changes due to the switch of online real-time prediction models, offsets caused by updates of Box-Cox parameters, and modifications of budgets. Additionally, due to the absence of historical logs, there is also a risk of non-smoothness during the cold start phase. To address the these risks, RCPacing incorporates an emergent PTR intervention module (ePTR) that can be activated in emergency situations. The final PTR can be denoted as: PTRij = min{1, PTRij} × ePTRj (19) The motivation behind ePTR is to limit the consumption speed within a certain range when a campaign is overaccelerated while maintaining the gradient direction of dual variables. The ratio of the actual cost to the expected cost can represent the spending speed of campaign j during period t: spd(t) j = Cost(t) j eCost(t) j (20) RCPacing uses proportional control instead of gradient methods to quickly control the risks. Given a safe upper ratio 2.0, the update of ePTR is: ePTR(t+1) j = min{1, ePTR(t) j ∗min{2, 2 spd(t) j }} (21) An initial trial rate is usually set for each campaign at the start of delivery to reduce the risks of the cold start problem. Adaptive Gradient Clipping Stable online iterative update of dual variables is also a critical factor for smooth delivery. However, choosing inappropriate learning rates can result in significant fluctuations and may have a cascading effect on the overall competitive environment. A simple and direct method is to restrict the change range into ˆα by gradient clipping (Chen, Wu, and Hong 2020). Given the updated dual variables ˜α(t+1) j , gradient clipping can be denoted as: ¯α(t+1) j = max n ¯α(t) j −ˆα, min n ˜α(t+1) j , ¯α(t) j + ˆα oo (22) Suppose that spd(t) j < 1.0 , which indicates that the campaign’s spending is lower than expected. The feedback control method will decrease the value of α(t) j to α(t+1) j , leading to an increase in the bid price. 
Assuming that the competition remains the same, which indicates that WR(t+1) ij ≥ WR(t) ij , if v(t+1) ij = v(t) ij . Suppose the expected spending speed in the next period is equal to 1, it can be deduced that: 1.0 = Cost(t+1) j eCost(t+1) j = Cost(t+1) j eCost(t) j = spd(t) j Cost(t+1) j Cost(t) j = spd(t) j E h RR(t+1) j i P PTR(t+1) j PR(t+1) j WR(t+1) j E h RR(t) j i P PTR(t) j PR(t) j WR(t) j ≥spd(t) j X PTR(t+1) j PR(t+1) j / X PTR(t) j PR(t) j = spd(t) j E h PTR(t+1) j PR(t+1) j i /E h PTR(t) j PR(t) j i (23) Without consideration the effect of ePTR, PTR and PR are determined and have a monotonic decreasing relationship with ¯α. We can calculate the expectation using the importance sampling method in uniform percentile space: ψj(¯α(t) j ) = E h PTR(t) j PR(t) j i = Z 1 0 PTR(t) j (¯α(t) j , x) · PRj(¯α(t) j , x)dx, x ∼uniform(0, 1) (24) The lower bound of α(t+1) j can be represented as: ¯α(t+1) j ≥ψ−1 j  ψj(¯α(t) j )/spd(t) j  = ψ−1 j (25) where y = ψ−1 j (¯αj, x) can be approximated through an iterative process by solving the equation ψj(¯αj, y) = x based on the bisection method illustrated in figure 6. To include spd(t) j ≥1.0, ¯α(t+1) j should satisfy the following conditions: ¯α(t+1) j =    max n ˜α(t+1) j , ¯α(t) j −ˆα, ψ−1 j o , if ˜g(t) j ≥0 min n ˜α(t+1) j , ¯α(t) j + ˆα, ψ−1 j o , otherwise. (26) 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 CTR in percentile space 0.00 0.25 0.50 0.75 1.00 PTR = 0.75 = 0.8 Figure 6: The areas of the color section represent the value of ψj(¯α(t) j ) under different variables α(t) j . The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7991 Algorithm 2: RCPacing Require: Budget of the campaigns B, safe upper bound Pub, global win rate WRglb, skew factor ϵ, step size η, static gradient clipping ˆα, total time period T 1: Budget exhausted campaign set G = ∅ 2: Calculate P T Rbase and ¯α(0) with eq. 14 ∼16 3: for t = 0 to T −1 do 4: Estimate λ∗, µ, and σ from historical logs with eq. 8 ∼10 5: Obtain α(t) from ¯α(t) by backward transformation in eq. 12 6: Receive v(t) from online requests 7: Obtain ¯v(t) from v(t) by forward transformation in eq. 11 8: Calculate P T R(t) with eq. 13 and eq. 17 ∼19 9: bid(t) = v(t) −α(t) 10: Element-wise randomly set bid(t) ij = 0 with probability 1 −PTR(t) ij and set bid(t) ij = 0 if j ∈G 11: j∗= arg max n bid(t)o 12: Make the decision ˜x(t), where ˜x(t) ij = ( 1, if bid(t) ij > 0 and j = j∗ i 0, otherwise. (29) 13: B = B −P i ˜x(t) 14: Add budget exhausted campaign to G 15: Calculate ˜α(t+1) with eq. 28 16: Update ¯α(t+1) by clipping ˜α(t+1) with eq. 26 17: Update eP T R(t+1) with eq. 21 18: end for Bregman Divergence Selection Algorithm 1 presents the basic decision process based on Bregman divergence with respect to a given convex reference function. It is obvious that if we use the squared loss function and the dual update becomes: h(α) = α2 ⇒˜α(t+1) j = ¯α(t) j −η˜g(t) j , ∀j (27) However, due to the higher fluctuation of PR in the high percentile region with the same shift, the variation magnitude of dual variables should be smaller to minimize nonsmooth risk. It means that as ¯αj approaches 1.0, the η should become smaller. We propose a modified Itakura-Saito divergence (Banerjee et al. 2005) to achieve this objective: h(α) = −ln(1.5 −α) ⇒ ˜α(t+1) j = ¯α(t) j − (1.5 −¯α(t) j )2 1 −η˜g(t) j (1.5 −¯α(t) j ) η˜g(t) j , ∀j where η˜g(t) j (1.5 −¯α(t) j ) < 1 (28) The overall processing details of RCPacing are described in Algorithm 2. 
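To illustrate eq. (28), here is a minimal sketch of the dual step under the modified Itakura-Saito reference function h(α) = −ln(1.5 − α), combined with the static clip of eq. (22); the ψ-based adaptive bound of eq. (26) is omitted for brevity, and all names are ours:

```python
import numpy as np

def dual_step(alpha_bar, g, eta=0.2, clip=0.05):
    """One update of the percentile-space duals alpha_bar (in [0, 1]) given
    subgradients g, following eq. (28) with the static clip of eq. (22)."""
    u = eta * g * (1.5 - alpha_bar)
    u = np.minimum(u, 0.99)                      # keep the step well-defined (u < 1)
    step = (1.5 - alpha_bar) ** 2 * eta * g / (1.0 - u)
    new = alpha_bar - step
    new = np.clip(new, alpha_bar - clip, alpha_bar + clip)  # static clipping (eq. 22)
    return np.clip(new, 0.0, 1.0)                # stay inside percentile space

a = np.array([0.50, 0.90, 0.98])
g = np.array([0.30, 0.30, 0.30])
print(dual_step(a, g))  # the same gradient moves high-percentile duals less
```

This matches the design goal stated above: as ᾱ approaches 1.0, the factor (1.5 − ᾱ)² shrinks the effective step, reducing the fluctuation risk in the high-percentile region.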
  parameter   value   description
  ϵ           0.1     skew factor
  η           0.2     step size
  α̂           0.05    static gradient clipping
  Pub         90%     safe percentile upper bound
  WRglb       15%     global win rate
Table 1. Optimal values for the important hyper-parameters.

Experimental Results

This section begins with an introduction to the evaluation metrics and the baseline methods, and compares RCPacing to the baselines through offline and online experiments.

Evaluation Metrics
• Delivery rate is defined as the ratio of allocated impressions to the total budgets of the advertisers:
  delivery rate = (Σ_t Σ_j x̃_j^(t)) / (Σ_j B_j)   (30)
• Unsmoothness index (UI) measures the deviation between the actual and expected budget consumption:
  unsmoothness = (1/M) Σ_{j=1}^{M} sqrt( (1/T) Σ_{t=0}^{T−1} (x̃_j^(t) − ρ_j)² )   (31)
• Average CTR reflects the quality of impressions and is calculated as the ratio of clicks to the total impressions:
  CTR_avg = (Σ_t Σ_j v_j^(t) x̃_j^(t)) / (Σ_t Σ_j x̃_j^(t))   (32)

Baseline Methods
We compare RCPacing with the following four methods: 1) DMD (Balseiro, Lu, and Mirrokni 2020) is a Lagrangian dual-based online allocation framework that maximizes revenue while adhering to resource constraints by adjusting virtual bids. 2) Smart Pacing (Xu et al. 2015) is a control-based method proposed to achieve smooth delivery and optimal performance by probabilistic throttling. 3) AUAF (Cheng et al. 2022) is a dual-based method that optimizes delivery rate and impression quality with a fixed smoothness term; the dual variables are updated by a feedback control algorithm to ensure fairness. 4) PDOA (Zhou et al. 2021) solves online matching in dynamic environments with experts and a meta-algorithm; it achieves smoothness by bid modification.

Offline Evaluation

Datasets: We construct a large-scale industrial dataset by collecting real-world ad-serving data from our display advertising system, which consists of 600K impressions and 300 GD ads. (The dataset and the code for all methods are available at https://github.com/danifree/RCPacing.) The impressions are evenly distributed across 50 time periods. The CTR values predicted by a DNN are reserved to measure the impression quality.

Implementation Details: Table 1 provides a summary of the optimal values for the important hyper-parameters.

Evaluation Results: In order to exclude the influence of accidental factors, we randomly scale the budget of GD ads by a factor ranging from 0.8 to 1.2, and calculate the mean and standard deviation across 50 rounds. As shown in Table 2, Smart Pacing achieves the highest average CTR, but its low delivery rate is inappropriate for GD allocation, which results in publishers being penalized for unsatisfied demand. RCPacing demonstrates a significant reduction in UI, with a 59.4% and 50.8% improvement compared to PDOA and AUAF, respectively. Furthermore, it delivers superior CTR performance, achieving a 23.1% and 45.1% increase compared to PDOA and AUAF.

  Method     Unsmoothness   Delivery Rate (%)   CTR (%)
  DMD        15.71 ± 1.46   100.0 ± 0.0         5.39 ± 0.02
  Smart      10.52 ± 1.04   95.9 ± 0.9          7.88 ± 0.37
  AUAF       12.95 ± 1.29   100.0 ± 0.0         5.14 ± 0.01
  PDOA       15.70 ± 2.27   100.0 ± 0.0         6.06 ± 0.27
  RCPacing   6.37 ± 0.72    99.8 ± 0.1          7.46 ± 0.44
Table 2. Offline evaluation results.

[Figure 7: The ablative analysis — CTR and unsmoothness versus the safety percentile upper bound (75%–95%), for Euclidean and Itakura-Saito divergences.]
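As a companion to eqs. (30)–(32) above, a short sketch of the three metrics (our own names; X is the T×M matrix of per-period allocation counts, V the predicted CTRs, B the budgets):

```python
import numpy as np

def evaluate(X, V, B):
    """Delivery rate (eq. 30), unsmoothness index (eq. 31), average CTR (eq. 32)."""
    T, M = X.shape
    rho = B / T                                        # expected per-period consumption
    delivery_rate = X.sum() / B.sum()                  # eq. (30)
    unsmoothness = np.sqrt(((X - rho) ** 2).mean(axis=0)).mean()  # eq. (31)
    ctr_avg = (V * X).sum() / X.sum()                  # eq. (32)
    return delivery_rate, unsmoothness, ctr_avg

rng = np.random.default_rng(2)
X = rng.integers(0, 5, size=(50, 300))                 # 50 periods, 300 campaigns
V = rng.beta(2, 8, size=X.shape)                       # predicted CTRs per (period, ad)
B = X.sum(axis=0) + rng.integers(0, 3, size=300)       # budgets at least the spend
print(evaluate(X, V, B))
```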
Ablative Analysis

We focus on UI and CTR since the delivery rates of the variants are very close to 100%.
• The impact of percentile upper bound: A higher safety percentile upper bound (Pub) allows advertisers to filter low-quality impressions more effectively, but it also raises the risk of fluctuations. As demonstrated in Figure 7, RCPacing has higher CTR when using a larger Pub, but there is a 4.14% increase in unsmoothness when Pub is changed from 90% to 95% (Itakura-Saito divergence).
• The impact of different divergence: As mentioned earlier, a modified Itakura-Saito divergence helps alleviate the issue of high fluctuations in the high percentile range. Figure 7 illustrates that the proposed Itakura-Saito divergence provides better UI especially when Pub is high (e.g., a 3.74% improvement in smoothness when Pub equals 90%), while the average CTR is comparable to that of the Euclidean divergence. Additional ablative analysis can be found in the appendix.

Online Evaluation

Implementation Details: In order to evaluate the performance of RCPacing in an online environment, we conduct A/B testing on our Taobao brand advertising platform for a continuous period of two weeks. Since the delivery rate of Smart Pacing is too low for GD allocation, we only compare our method with DMD, AUAF, and PDOA.

[Figure 8: The online evaluation results — daily CTR (left) and unsmoothness (right) over 14 days for DMD, AUAF, PDOA, and RCPacing.]

Evaluation Results: As the delivery rates of all methods exceed 99.5%, we concentrate on the other two metrics. As shown in Figure 8, RCPacing outperforms all the baselines; for example, compared with PDOA, our method achieves a 35.3% and 23.4% improvement in UI and CTR, respectively.

Conclusion

GD contracts are a crucial source of revenue for large publishers. This paper presents a robust percentile risk-constrained pacing framework designed from the perspective of a publisher. RCPacing achieves smooth and optimal allocation for GD campaigns by leveraging its compatibility with the guaranteed allocation mechanism. Our analysis presents the relationship between non-smooth risks and the percentile of dual variables, and RCPacing is designed to constrain dual variables within the low-risk region. Adaptive gradient clipping and modified Bregman divergence techniques are also employed to achieve a more stable update of dual variables. We also illustrate the trade-off and flexible control over smooth and optimal allocation in online matching. Our experimental evaluations on real-world A/B testing demonstrate that RCPacing outperforms other compared methods, and it has been widely deployed in the Taobao display advertising system.

References

Agarwal, D.; Ghosh, S.; Wei, K.; and You, S. 2014. Budget pacing for targeted online advertisements at linkedin. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1613–1619.
Balseiro, S.; Lu, H.; and Mirrokni, V. 2020. Dual mirror descent for online allocation problems. In International Conference on Machine Learning, 613–628. PMLR.
Banerjee, A.; Merugu, S.; Dhillon, I. S.; Ghosh, J.; and Lafferty, J. 2005. Clustering with Bregman divergences. Journal of machine learning research, 6(10).
Bharadwaj, V.; Chen, P.; Ma, W.; Nagarajan, C.; Tomlin, J.; Vassilvitskii, S.; Vee, E.; and Yang, J. 2012.
Shale: an efficient algorithm for allocation of guaranteed display advertising. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, 1195– 1203. Chen, P.; Ma, W.; Mandalapu, S.; Nagarjan, C.; Shanmugasundaram, J.; Vassilvitskii, S.; Vee, E.; Yu, M.; and Zien, J. 2012. Ad serving using a compact allocation plan. In Proceedings of the 13th ACM Conference on Electronic Commerce, 319–336. Chen, X.; Wu, S. Z.; and Hong, M. 2020. Understanding gradient clipping in private SGD: A geometric perspective. Advances in Neural Information Processing Systems, 33: 13773–13782. Cheng, X.; Liu, C.; Dai, L.; Zhang, P.; Fang, Z.; and Zu, Z. 2022. An Adaptive Unified Allocation Framework for Guaranteed Display Advertising. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 132–140. Dai, L.; Zu, Z.; Wu, H.; Wang, L.; and Zheng, B. 2023. Fairness-aware Guaranteed Display Advertising Allocation under Traffic Cost Constraint. In Proceedings of the ACM Web Conference 2023, 3572–3580. Fang, Z.; Li, Y.; Liu, C.; Zhu, W.; Zheng, Y.; and Zhou, W. 2019. Large-scale personalized delivery for guaranteed display advertising with real-time pacing. In 2019 IEEE International Conference on Data Mining (ICDM), 190–199. IEEE. Gao, C.; Lei, W.; He, X.; de Rijke, M.; and Chua, T.-S. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open, 2: 100–126. Huang, Z.; and Shu, X. 2021. Online stochastic matching, poisson arrivals, and the natural linear program. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, 682–693. IAB. 2023. Internet Advertising Revenue Report. https://www.iab.com/wp-content/uploads/2023/04/ IAB PwC Internet Advertising Revenue Report 2022.pdf. Accessed: 2023-04. Lee, K. C.; Jalali, A.; and Dasdan, A. 2013. Real Time Bid Optimization with Smooth Budget Delivery in Online Advertising. arXiv e-prints. Liu, M.; Yue, W.; Qiu, L.; and Li, J. 2020. An effective budget management framework for real-time bidding in online advertising. IEEE Access, 8: 131107–131118. Mehta, A. 2012. Online Matching and Ad Allocation. Foundations and trends in theoretical computer science, (8-4). Mehta, A.; Saberi, A.; Vazirani, U.; and Vazirani, V. 2007. Adwords and generalized online matching. Journal of the ACM (JACM), 54(5): 22–es. Nuara, A.; Trov`o, F.; Gatti, N.; and Restelli, M. 2022. Online joint bid/daily budget optimization of internet advertising campaigns. Artificial Intelligence, 305: 103663. Sakia, R. M. 1992. The Box-Cox transformation technique: a review. Journal of the Royal Statistical Society Series D: The Statistician, 41(2): 169–178. Wang, X.; Tan, B.; Guo, Y.; Yang, T.; Huang, D.; Xu, L.; Freris, N. M.; Zhou, H.; and Li, X.-Y. 2022. CONFLUX: A Request-level Fusion Framework for Impression Allocation via Cascade Distillation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4070–4078. Wu, D.; Chen, C.; Chen, X.; Pan, J.; Yang, X.; Tan, Q.; Xu, J.; and Lee, K.-C. 2021. Impression Allocation and Policy Search in Display Advertising. In 2021 IEEE International Conference on Data Mining (ICDM), 749–756. IEEE. Xu, J.; Lee, K.-c.; Li, W.; Qi, H.; and Lu, Q. 2015. Smart pacing for effective online ad campaign optimization. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, 2217–2226. Zhou, G.; Mou, N.; Fan, Y.; Pi, Q.; Bian, W.; Zhou, C.; Zhu, X.; and Gai, K. 2019. 
Deep interest evolution network for click-through rate prediction. In Proceedings of the AAAI conference on artificial intelligence, volume 33, 5941–5948. Zhou, Y.-H.; Hu, P.; Liang, C.; Xu, H.; Huzhang, G.; Feng, Y.; Da, Q.; Wang, X.; and Zeng, A.-X. 2021. A Primal-Dual Online Algorithm for Online Matching Problem in Dynamic Environments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 11160–11167. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7994
2024
888
18,726
Unifying Decision and Function Queries in Stochastic Boolean Satisfiability Yu-Wei Fan1, Jie-Hong R. Jiang1,2 1Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, Taiwan 2Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan {r11943096, jhjiang}@ntu.edu.tw Abstract Stochastic Boolean satisfiability (SSAT) is a natural formalism for optimization under uncertainty. Its decision version implicitly imposes a final threshold quantification on an SSAT formula. However, the single threshold quantification restricts the expressive power of SSAT. In this work, we enrich SSAT with an additional threshold quantifier, resulting in a new formalism SSAT(Θ). The increased expressiveness allows SSAT(Θ), which remains in the PSPACE complexity class, to subsume and encode the languages in the counting hierarchy. An SSAT(Θ) solver, ClauSSat(Θ), is developed. Experiments show the applicability of the solver in uniquely solving complex SSAT(Θ) instances of parameter synthesis and SSAT extension. Introduction Stochastic Boolean satisfiability (SSAT) is a logical formalism, enabling natural characterization for optimizing decisions under uncertainty (Papadimitriou 1985). Due to its powerful expressiveness, recent endeavors have been made in both its efficient solving and potential applications. There are recent developments of specialized solvers (Lee, Wang, and Jiang 2017, 2018) designed for specific fragments of SSAT, and general solvers (Majercik and Boots 2005; Chen, Huang, and Jiang 2021; Wang et al. 2022; Fan and Jiang 2023) that place no restrictions on the SSAT formula. Regarding applications, SSAT has been used in encoding problems, such as contingent planning (Majercik and Littman 2003), partially observable Markov decision processes (POMDPs) (Salmon and Poupart 2020), the fairness analysis of machine learning models (Ghosh, Basu, and Meel 2021), and probabilistic graphical models (Hsieh and Jiang 2022). Despite its expressiveness and broad applications, certain limitations are inherent to SSAT. For instance, SSAT implicitly imposes a linear ordering upon the dependency sets of existential variables according to the prefix. In (Lee and Jiang 2021), dependency stochastic Boolean satisfiability (DSSAT) (Lee and Jiang 2021) is formulated to allow explicit representation of dependency sets of existential variables. In this work, we tackle the limitations of SSAT from another aspect. Specifically, the decision version of SSAT Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. enforces a single outermost threshold quantification on an SSAT formula, limiting its expressive power. Inspired by the counting quantifier in the Counting Hierarchy (Wagner 1986), we introduce a new formalism SSAT(Θ), which enriches SSAT with the threshold quantifier. We prove that SSAT(Θ) is in the PSPACE complexity class. Remarkably, while remaining in the same complexity class, the new formalism subsumes both SSAT and the languages in the Counting Hierarchy. Therefore, SSAT(Θ) can be powerful in encoding problems not succinctly expressible before. To demonstrate its practical applications, we develop an SSAT(Θ) solver ClauSSat(Θ), based on the state-of-theart SSAT solver ClauSSat (Chen, Huang, and Jiang 2021). We further provide SSAT(Θ) encodings for probabilistic model checking and its extension specified in bounded probabilistic computation tree logic (BPCTL). 
To evaluate the SSAT(Θ) solver, experiments were conducted to study its applicability in solving application benchmarks and to investigate the effect of the threshold quantifier on solving efficiency. We note that since SSAT(Θ) subsumes the languages in the Counting Hierarchy, ClauSSat(Θ) is the first solver to tackle general counting formulas, while previous work has primarily concentrated on certain restricted fragments, such as counting formulas with only one (Chou et al. 2016) or two (Oztok, Choi, and Darwiche 2016) counting quantifiers.

The rest of this paper is organized as follows. After the preliminary background is first provided, we introduce the new formalism SSAT(Θ) and discuss its properties and complexity results. Then the extension of ClauSSat for SSAT(Θ) solving is to be elaborated. The encodings of the model-checking problem specified in BPCTL and the parameter synthesis problems are further detailed. We then investigate the efficiency of the solver on application benchmarks and the effect of threshold quantifiers on general SSAT(Θ) instances. Finally, we conclude this work and outline future work.

Preliminaries

In the sequel, the symbols "⊤" and "⊥" represent the Boolean values TRUE and FALSE, respectively. Boolean connectives "¬," "∨," "∧," "→," and "↔" are associated with their conventional meanings. For simplicity, a conjunction ∧ may be omitted in a Boolean formula. A literal, associated with a variable v, is either the variable itself v or the negation ¬v. A clause is a disjunction of literals. A Boolean formula is in the conjunctive normal form (CNF) if it is a conjunction of clauses. For a CNF formula ϕ, we use vars(ϕ) to denote the set of variables that appear in ϕ. A Boolean function, represented by a Boolean formula f over variables V, is a mapping f : B^|V| → B. An assignment σ over the variables V is a mapping σ : V → B. The induced formula over an assignment σ, denoted as f|σ, is obtained by replacing each variable v with its assigned Boolean value σ(v). The interval notation [l..u], for l, u ∈ Z+ ∪ {0} and l < u, represents the set of integers from l to u. We use the symbols ▷ and ▷◁ to denote one of the predicates in the set {>, ≥} and {>, ≥, <, ≤}, respectively.

Quantified Boolean and Counting Formulas

A quantified Boolean formula (QBF) in the prenex CNF form is expressed as

Q.ϕ ,   (1)

where Q = Q1, . . . , Qn, for Qi ∈ {∃vi, ∀vi}, and ϕ is a quantifier-free CNF formula. QBF satisfiability corresponds well to the Polynomial Hierarchy (PH) (Stockmeyer 1976). Specifically, the language of QBFs with k quantifier alternations and Q1 = ∃ (resp. ∀), denoted QBFk, is complete in the complexity class Σ^P_{k+1} (resp. Π^P_{k+1}). When k is unbounded, QBF is PSPACE-complete.

The Counting Hierarchy (CH) (Wagner 1986) for counting problems is a complexity analogue to PH for decision problems. Similar to QBFs, which characterize PH well, there are counting formulas (CFs), which characterize CH well. A counting formula in the prenex form can be expressed as

C V1, . . . , C Vn.ϕ ,   (2)

where C is the counting quantifier, Vi is a set of variables, and ϕ is a quantifier-free CNF formula. The quantification C V asks whether at least half of the assignments over V satisfy the formula in its scope. For a singleton V = {v}, C V is equivalent to ∃v. Quantifier ∀v can also be expressed by C by proper formula negation or rewriting.
E.g., ∀v.ϕ can be rewritten as ¬C v.¬ϕ, or C V.ϕ ∧ v′ for V = {v, v′}, where v′ is a fresh auxiliary variable. Therefore, a counting formula only requires the quantifier C, without ∃ and ∀. The language of counting formulas with k levels of C quantifiers, denoted CF_k, is complete in the complexity class C^P_k. When k is unbounded, CF is PSPACE-complete.

Stochastic Boolean Satisfiability

An SSAT formula in the prenex form can be expressed by Eq. (1), but with the prefix Q = Q1, . . . , Qn for Qi ∈ {∃vi, R^{pi} vi} and the matrix ϕ being a quantifier-free CNF formula. In the quantifier Qi, variable vi is either existential-quantified, i.e., ∃vi, or random-quantified, i.e., R^{pi} vi, denoting vi = ⊤ (resp. ⊥) with probability pi (resp. 1 − pi). The semantics of an SSAT formula Φ = Q1, . . . , Qn.ϕ = Q1.Φ′ is interpreted as its satisfying probability, computed recursively by the following rules:

• Pr[⊤] = 1,
• Pr[⊥] = 0,
• Pr[∃v.Φ′] = max{Pr[Φ′|v], Pr[Φ′|¬v]},
• Pr[R^p v.Φ′] = p · Pr[Φ′|v] + (1 − p) · Pr[Φ′|¬v].

Given an SSAT formula Φ, the decision version of SSAT, a PSPACE-complete problem, is to determine whether Pr[Φ] is greater than a threshold probability, whereas the optimization (function) version is to return the probability Pr[Φ]. Note that the existential quantifier ∃ in SSAT differs from that in QBFs and counting formulas in its function sense, searching for an assignment maximizing the satisfying probability. More precisely, it is a "maximization quantifier." Nevertheless, we abuse the notation as its meaning should be clear from the context. We remark that SSAT can incorporate the universal quantifier ∀ serving as the "minimization quantifier" (Littman, Majercik, and Pitassi 2001). Because ∀ quantification can be achieved through ∃ quantification and negation, e.g., Pr[¬∃X.ϕ] = Pr[∀X.¬ϕ], we omit the universal quantifier in our discussion for simplicity.

Threshold Quantifier and SSAT

Before delving into the formal definition of SSAT(Θ), we motivate SSAT(Θ) by showing the limitations of SSAT. Consider the decision problem of the SSAT formula Q.ϕ with a threshold probability p, asserting

Pr[Q.ϕ] ≥ p, (3)

which can be expressed as

Θ^{≥p}, Q.ϕ (4)

by representing Pr[Φ] ≥ p as a threshold quantifier Θ^{≥p} over Φ = Q.ϕ. Hence, the SSAT decision problem equivalently imposes a single threshold quantification on the entire formula. It restricts both the number and the position of the threshold quantifier. The new formulation SSAT(Θ) relaxes such restrictions, allowing an arbitrary number of threshold quantifiers to be inserted at arbitrary positions in the prefix. The language SSAT(Θ), i.e., SSAT augmented with the threshold quantifier, is defined as follows.

Definition 1 (SSAT(Θ) Syntax). The syntax of SSAT(Θ) is the same as that of SSAT except that Qi ∈ {∃vi, R^{pi} vi, Θ^{▷pi}}, with the additional threshold quantifier Θ^{▷pi}. We omit to specify the variables within the scope of the threshold quantifier Qi = Θ^{▷pi} by implicitly assuming its inclusion of all the variables involved in Qi+1, . . . , Qn.

Definition 2 (SSAT(Θ) Semantics). Given an SSAT(Θ) formula Φ = Q1, . . . , Qn.ϕ = Q1.Φ′, its satisfying probability is defined the same as that of SSAT except for the following additional rule for the Boolean interpretation of the threshold quantifier:

• Θ^{▷p}.Φ′ = ⊤ if Pr[Φ′] ▷ p, and ⊥ otherwise.

Example 1. Consider the SSAT(Θ) formula

R^{0.4} x1, Θ^{≥0.4}, ∃y1, R^{0.3} x2. (¬y1)(x1 ∨ y1 ∨ x2).

We first consider the branch x1 = ⊤.
Since a threshold quantifier Θ^{≥0.4} is encountered, we check whether the satisfying probability of the induced formula Φ′ = ∃y1, R^{0.3} x2.(¬y1) is greater than or equal to 0.4. It can be checked that Pr[Φ′] = 1. Therefore, the threshold quantifier returns ⊤ under x1 = ⊤. Similarly, one can check that under x1 = ⊥, the satisfying probability of the formula Φ′′ = ∃y1, R^{0.3} x2.(¬y1)(y1 ∨ x2) is 0.3. Therefore, the threshold quantifier returns ⊥ under x1 = ⊥. Finally, the satisfying probability of the entire formula is the weighted sum of the probabilities of the two branches of x1, which is (0 · 0.6 + 1 · 0.4) = 0.4.

We note that SSAT(Θ) may incorporate the universal quantifier and threshold quantifiers with predicates in {<, ≤}. However, as we can rewrite them by negation along with the existential quantifier and threshold quantifiers with predicates in {>, ≥}, we omit them in our discussion. E.g., the formula Θ^{<p}, ∃x, R^{p′} y.ϕ is equivalent to ¬Θ^{≥p}, ∃x, R^{p′} y.ϕ = Θ^{≥1−p}, ∀x, R^{p′} y.¬ϕ.

An SSAT(Θ) formula is called closed if all the variables are quantified, and opened otherwise. Also, an SSAT(Θ) formula Φ = Q.ϕ is called pure counting if its prefix Q consists of only threshold and random quantifiers and has Q1 being a threshold quantifier. We also call such a prefix Q pure counting. Note that when the random quantifiers in a pure-counting SSAT(Θ) formula are of probability 0.5, Φ is equivalently a counting formula. E.g., the counting formula C X1, . . . , C Xn.ϕ is equivalent to the SSAT(Θ) formula Θ^{≥0.5}, R^{0.5} X1, . . . , Θ^{≥0.5}, R^{0.5} Xn.ϕ. When the random quantifiers in a pure-counting SSAT(Θ) formula are of arbitrary probabilities expressed in binary fractional numbers, we can still derive its equivalent counting formula, exploiting the normalization technique that transforms SSAT formulas to have only probability 0.5 (Wang et al. 2022). We will show that the BPCTL model-checking problem can be naturally encoded with pure-counting SSAT(Θ) formulas.

Note that an opened and pure-counting SSAT(Θ) formula Φ with a set of free variables X represents a Boolean function f(X) such that f|σX = ⊤ if and only if Φ|σX = ⊤, where σX is an assignment over X. Unless otherwise stated, we assume that an SSAT(Θ) formula is closed in the sequel. Note also that, unlike SSAT, SSAT(Θ) requires no distinction between the decision and function versions, as the threshold-quantifier extension unifies the decision and function specification.

Properties and Normal Form

We study some properties of the threshold quantifier and exploit them for a normal form conversion.

Definition 3 (Normal Form of SSAT(Θ)). A prenex SSAT(Θ) formula Φ = Q1, . . . , Qn.ϕ is in a normal form if the following two conditions hold:
1. There are no consecutive threshold quantifiers, i.e., Qi and Qi+1, for i ∈ [1..n − 1], cannot both be threshold quantifiers.
2. A threshold quantifier Qi, for i ∈ [1..n − 1], cannot be followed by an existential quantifier Qi+1.

The normal form can be enforced for any prenex SSAT(Θ) formula due to Lemmas 1 and 2 stated below. Given two threshold quantifiers Θ^{▷p1} and Θ^{▷p2}, we say quantifier Θ^{▷p1} dominates Θ^{▷p2} if the implication

Θ^{▷p2}.Φ → Θ^{▷p1}.Φ (5)

holds for any SSAT(Θ) formula Φ. The following lemmas are immediate.

Lemma 1. Let Θ^{▷p1} dominate Θ^{▷p2}. Then the following equalities hold.

Θ^{▷p1}, Θ^{▷p2}.Φ = Θ^{▷p2}, Θ^{▷p1}.Φ = Θ^{▷p1}.Φ (6)

Lemma 2. By treating ⊤ (resp. ⊥) as probability value 1 (resp. 0), and vice versa, the following equality holds.

Θ^{▷p}, ∃v.Φ = ∃v, Θ^{▷p}.Φ (7)
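To make the semantics of Definitions 1 and 2 concrete, the following is a minimal Python reference evaluator; the tuple-based formula representation and all names are our own illustration, not the authors' implementation. Run on Example 1, it reproduces Pr[Φ] = 0.4.

from typing import Callable, Dict, List, Tuple

# Prefix entries: ("E", var) | ("R", var, p) | ("T", pred, p), pred in {">", ">="}.
# The matrix is a function from a complete assignment to True/False.
def evaluate(prefix: List[Tuple], matrix: Callable[[Dict[str, bool]], bool],
             sigma: Dict[str, bool] = None) -> float:
    sigma = sigma or {}
    if not prefix:                      # base case: matrix under a full assignment
        return 1.0 if matrix(sigma) else 0.0
    head, rest = prefix[0], prefix[1:]
    if head[0] == "T":                  # threshold: map probability to {0, 1}
        _, pred, p = head
        q = evaluate(rest, matrix, sigma)
        return 1.0 if (q > p if pred == ">" else q >= p) else 0.0
    if head[0] == "E":                  # existential: maximize over both branches
        _, v = head
        return max(evaluate(rest, matrix, {**sigma, v: b}) for b in (True, False))
    _, v, p = head                      # random: weighted sum of both branches
    return (p * evaluate(rest, matrix, {**sigma, v: True})
            + (1 - p) * evaluate(rest, matrix, {**sigma, v: False}))

# Example 1: R^0.4 x1, Theta^{>=0.4}, E y1, R^0.3 x2. (~y1)(x1 | y1 | x2)
prefix = [("R", "x1", 0.4), ("T", ">=", 0.4), ("E", "y1"), ("R", "x2", 0.3)]
matrix = lambda s: (not s["y1"]) and (s["x1"] or s["y1"] or s["x2"])
print(evaluate(prefix, matrix))  # 0.4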
Computation Complexity

Just as each level in PH forms a complete class, each level in CH forms a complete class, which can be characterized by adding the majority quantifier to QBF. E.g., E-MAJSAT (Littman, Goldsmith, and Mundhenk 1998) is ∨C-complete (Wagner 1986), the same complexity class as NP^PP. As pure-counting SSAT(Θ) formulas subsume counting formulas, SSAT(Θ) can succinctly encode any problem in the counting hierarchy. On the other hand, while CH consists of only decision problems, SSAT(Θ) allows the encoding of function or optimization problems. Hence, SSAT(Θ) is strictly more expressive. In the following, we give the computational complexity of the decision version of SSAT(Θ).

Theorem 1. The decision problem of SSAT(Θ) is PSPACE-complete.

Proof. We shall prove this by showing that the decision problem of SSAT(Θ) is in PSPACE and is PSPACE-hard. Given an SSAT(Θ) formula Φ and a threshold probability p, the decision problem of SSAT(Θ) is to determine whether Pr[Φ] is greater than p. To prove it to be in PSPACE, it suffices to show that the algorithm for computing Pr[Φ] requires polynomial space. Consider the algorithm to evaluate an SSAT(Θ) formula as shown in Algorithm 1. Let S(i) be the space needed for the ith recursive call, which equals the space needed within the ith call plus the space needed for the (i + 1)st recursive calls. Then the total space needed for the whole formula is S(1). Suppose the number of clauses is m and the number of bits for storing a probability is b. For the base case in Line 3, we have to evaluate the truth value of ϕ over an assignment, so the space needed S(n) is O(mn + b), where O(mn) is the space needed for the induced formula and O(b) is the space required for the probability. Now consider the ith recursive call. For the case of the threshold quantifier in Line 5, the space S(i) = S(i + 1) + O(b). For the cases of the existential and random quantifiers from Line 6 to Line 12, we reuse the space required for computing p0 when computing p1. Therefore, the space S(i) = S(i + 1) + O(mn + b). It turns out that the total space required S(1) is O(mn² + bn). The PSPACE-hardness of SSAT(Θ) is evident from the fact that it subsumes SSAT, which is PSPACE-hard.

Algorithm 1: SSAT(Θ) Evaluation
1: procedure EVALUATE(Φ = Q1, . . . , Qn.ϕ)
2:   if ϕ = ⊤ or ϕ = ⊥ then
3:     return Pr[ϕ]
4:   if Q1 is Θ^{▷p} then
5:     return EVALUATE(Q2, . . . , Qn.ϕ) ▷ p
6:   v ← the outermost variable
7:   p0 ← EVALUATE(Q2, . . . , Qn.ϕ|¬v)
8:   p1 ← EVALUATE(Q2, . . . , Qn.ϕ|v)
9:   if Q1 is ∃v then
10:    return max{p0, p1}
11:  if Q1 is R^p v then
12:    return p · p1 + (1 − p) · p0

An SSAT(Θ) Solver

A naive reference procedure for SSAT(Θ) evaluation is shown in Algorithm 1. Although it requires only polynomial space, it may not be effective in run-time efficiency. Therefore, we need a more advanced algorithm to alleviate the intrinsic hardness of SSAT(Θ). As SSAT(Θ) generalizes SSAT, existing SSAT solvers could be extended for SSAT(Θ). In this work, we consider the state-of-the-art general SSAT solvers SharpSSAT (Fan and Jiang 2023) and ClauSSat (Chen, Huang, and Jiang 2021) for such an extension. For the case of SharpSSAT, a direct extension is unfortunately not possible, because the component decomposition property of SSAT (Salmon and Poupart 2020) does not hold for SSAT(Θ).

Lemma 3. Given an SSAT(Θ) formula Φ = Q.ϕ with matrix ϕ = ϕ1 ∧ . . . ∧ ϕk, where vars(ϕi) ∩ vars(ϕj) = ∅ for i ≠ j, let Φi = Q.ϕi.
Then the equality Pr[Φ] = ∏_{i=1}^{k} Pr[Φi] does not hold in general.

Proof. As a counterexample, consider the SSAT(Θ) formula Φ = R^{0.5} v1, v2, Θ^{≥0.5}, R^{0.5} v3, v4. (v1 ↔ v3) ∧ (v2 ↔ v4). Its matrix can be decomposed into ϕ1 ∧ ϕ2, where ϕ1 = (v1 ↔ v3) and ϕ2 = (v2 ↔ v4) have disjoint support variables. However, Pr[Φ] = 0 ≠ Pr[Φ1] · Pr[Φ2] = 0.5 · 0.5 = 0.25.

For the case of ClauSSat, its extension to SSAT(Θ) solving is possible, as discussed below. ClauSSat solves an SSAT formula by partitioning the literals of a clause in the matrix into several groups with respect to the quantification levels according to the prefix. For the extension to SSAT(Θ), given an SSAT(Θ) formula Φ, if the outermost quantifier is a threshold quantifier Θ^{▷p}, we omit it and solve the remaining formula Φ′. (Otherwise, we solve Φ directly.) Once Pr[Φ′] is computed, we simply check whether Pr[Φ′] ▷ p and return the corresponding truth or falsity. Moreover, we modify the definition of the quantification level as follows. Consider the formula Q1, . . . , Qn.ϕ. For a random- or existential-quantified variable at quantification Qi, let k be the number of encountered alternations of one of the forms ∃-R, R-∃, and Θ-R when traversing from Q1 to Qi. Then the quantification level of that variable is defined as k + 1. E.g., for the formula R x1, ∃x2, R x3, Θ, R x4.ϕ, the quantification levels of x1, x2, x3, and x4 are 1, 2, 3, and 4, respectively. The literals of a clause in the matrix are then partitioned into groups with respect to the newly defined quantification levels. The formulas are then solved recursively on the quantification levels in the same way as SSAT, except that for a random quantifier Qi with an outer threshold quantifier Qi−1 = Θ^{▷p}, the probability is mapped to a Boolean value according to Θ^{▷p}.

To effectively prune the search space, ClauSSat incorporates several pruning techniques for the alternations ∃-R and R-∃. To handle these techniques, we follow the same implementation as ClauSSat when encountering the ∃-R and R-∃ alternations. For the ∃-Θ-R alternation, as stated in Lemma 2, since the probability is preserved under the reordering between the threshold and existential quantifiers, the techniques for ∃-R are applicable to ∃-Θ-R. For the R-Θ-R alternation, we disable all the pruning techniques proposed in ClauSSat.

Encoding Application Problems

This section first gives some background on discrete-time Markov chains (DTMCs) and bounded probabilistic computation tree logic (BPCTL). We then show how to encode the model-checking problem with pure-counting formulas. Further, we introduce the parameter synthesis problem and demonstrate its SSAT(Θ) encoding.

DTMCs and BPCTL

Discrete-Time Markov Chains. A discrete-time Markov chain can be viewed as a state transition system where each transition takes place with a transition probability. It is a common model used to describe the behavior of a probabilistic system.

Definition 4 (Discrete-Time Markov Chain). A discrete-time Markov chain is a tuple M = (S, s0, P, AP, L), where
• S is a finite set of states, and s0 ∈ S is an initial state,
• P is a probabilistic matrix characterizing the transition probabilities, where P : S × S → [0, 1] and Σ_{s′∈S} P(s, s′) = 1 for each s ∈ S,
• AP is a set of atomic propositions, and
• L : S → 2^{AP} is a labelling function for states.

A k-path π in a DTMC is a finite sequence of states (s1, · · · , sk) of length k. We use π(i) to denote the state si. The set of all the k-paths starting with a state s is denoted as Path_k(s). For simplicity, we use path to refer to a k-path in the sequel. The probability of a path π, denoted as Pr[π], is ∏_{i=1}^{k−1} P(π(i), π(i + 1)).
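A minimal Python sketch of these definitions on a toy two-state chain (the chain and all names are illustrative, not one of the benchmark models); the final summation anticipates the P-operator semantics of Definition 6 below.

import itertools

# A toy DTMC over states {0, 1}: P[s][s'] is the transition probability.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def path_prob(path):
    """Pr[pi] = prod_{i=1}^{k-1} P(pi(i), pi(i+1))."""
    prob = 1.0
    for s, s_next in zip(path, path[1:]):
        prob *= P[s][s_next]
    return prob

# Sum of the probabilities of all k-paths from state 0 that visit state 1,
# i.e., the quantity bounded by the P-operator of an F^{<=k} property.
k = 3
total = sum(path_prob((0,) + rest)
            for rest in itertools.product(range(2), repeat=k - 1)
            if 1 in (0,) + rest)
print(total)  # 0.1 + 0.9*0.1 = 0.19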
Bounded Probabilistic Computation Tree Logic. Given a probabilistic system and a temporal property, the problem of probabilistic model checking is to check whether the system satisfies the given property. In this work, the considered system description is the DTMC, and the concerned properties are specified in the bounded probabilistic computation tree logic (BPCTL), which is a bounded-step fragment of the probabilistic computation tree logic. The syntax of BPCTL is defined as follows.

Definition 5 (BPCTL Syntax). The state formula, denoted as φ, and the path formula, denoted as ψ, are the two main components of a BPCTL formula, with the following definition:

φ := ⊤ | a | ¬φ | φ1 ∧ φ2 | P_{▷◁p}[ψ]
ψ := F^{≤k}φ | φ1 U^{≤k} φ2 | Xφ

where a is an atomic proposition, and F, U, X, and P are the future, until, next, and probability operators, respectively (Ciesinski and Größer 2004). A BPCTL formula is a state formula.

The semantics of BPCTL can then be defined with respect to a DTMC as follows.

Definition 6 (BPCTL Semantics). Let the DTMC M = (S, s0, P, AP, L). The satisfaction relation is defined by:

M, s ⊨ ⊤ for all s ∈ S,
M, s ⊨ a if a ∈ L(s),
M, s ⊨ ¬φ if M, s ⊭ φ,
M, s ⊨ φ1 ∧ φ2 if M, s ⊨ φ1 and M, s ⊨ φ2,
M, s ⊨ P_{▷◁p}[ψ] if Σ_{π∈Path(s), M,π⊨ψ} Pr[π] ▷◁ p,
M, π ⊨ F^{≤k}φ if ∃i ∈ [1..k] : M, π(i) ⊨ φ,
M, π ⊨ φ1 U^{≤k} φ2 if ∃i ∈ [1..k] : M, π(i) ⊨ φ2 and M, π(j) ⊨ φ1 for all j ∈ [1..i − 1], and
M, π ⊨ Xφ if M, π(2) ⊨ φ.

With Definition 6, we say the DTMC M satisfies φ for M, s0 ⊨ φ, and say state s satisfies φ for M, s ⊨ φ. In the sequel, we refer to the model-checking problem of a DTMC specified in BPCTL as model checking. We note that when the outermost operator in φ is the P-operator, the state-of-the-art probabilistic model checkers, e.g., Prism (Kwiatkowska, Norman, and Parker 2002) and Storm (Dehnert et al. 2017), provide the option P=? to return a probability value rather than a Boolean value. We allow such an extension in the following SSAT(Θ) encoding of model checking.

BPCTL Model Checking for DTMC

The main idea is to map the P-operator in BPCTL to the threshold quantifier. In consequence, a BPCTL formula nested with the P-operator corresponds to a pure-counting SSAT(Θ) formula. Before delving into the details, we first transform φ into a form where only P_{>p} and P_{≥p} are allowed for the P-operator. This can be done by replacing each P_{<p} and P_{≤p} with ¬P_{≥p} and ¬P_{>p}, respectively. In the following, we assume that a DTMC M = (S, s0, P, AP, L) and a BPCTL formula φ of the mentioned form are given. Also, given an atomic proposition a and any state s, let F_a be the Boolean function such that F_a(s) = ⊤ if and only if a ∈ L(s).

BPCTL Encoding. For readability, we assume the state space S = [0..|S| − 1] and let n = ⌊log2 |S|⌋ + 1. We refer the state variables to a vector of n Boolean variables X = (x1, · · · , xn), which takes integer values in S. We introduce one new random-quantified variable for each transition (s, s′), denoted as x_{r(s,s′)}, where x_{r(s,s′)} = ⊤ with probability P(s, s′). We let the set of the allocated random-quantified variables be X_r = {x_{r(s,s′)} | s, s′ ∈ S, P(s, s′) > 0}. To derive an equivalent SSAT(Θ) formula, we recursively define the opened pure-counting formulas for both the state formula and the path formula.
In order for that, we need a probabilistic transition relation T(Xs, Xs′) over the current state variables Xs and the next state variables Xs′. Intuitively, given a current state s and a next state s′, we want the satisfying probability Pr[T(s, s′)] = P(s, s′). The following is the considered probabilistic transition relation T(Xs, Xs′):

R X_r. ⋀_{s,s′∈S} ((Xs = s ∧ Xs′ = s′) → x_{r(s,s′)}) ∧ ⋁_{s,s′∈S, P(s,s′)>0} (Xs = s ∧ Xs′ = s′) (8)

With Eq. (8), we can expand the time-frame for a k-path through proper variable renaming as follows:

T^{k−1}(X_{s1}, · · · , X_{sk}) = ⋀_{i=1}^{k−1} T(X_{si}, X_{si+1}), (9)

where the state variables X_{si} correspond to the ith state in the path. Given a path (s1, · · · , sk), Pr[T^{k−1}(s1, · · · , sk)] equals ∏_{i=1}^{k−1} P(si, si+1).

We use C_Ψ to represent the opened pure-counting formula of Ψ, either a state formula or a path formula. Recall that we have already transformed the given BPCTL formula to use only P_{▷p} for the P-operator. For the rules in Definition 5, we derive their corresponding opened pure-counting formulas:

C_⊤(Xs) = ⊤,
C_a(Xs) = F_a(Xs),
C_{¬φ}(Xs) = ¬C_φ(Xs),
C_{φ1∧φ2}(Xs) = C_{φ1}(Xs) ∧ C_{φ2}(Xs),
C_{P▷p[ψ]}(Xs) = Θ^{▷ p/2^t} C_ψ(Xs),
C_{F≤kφ}(Xs) = R^{0.5} X_{s2}, . . . , R^{0.5} X_{sk+1}. T^k(Xs, X_{s2}, · · · , X_{sk+1}) ∧ (C_φ(Xs) ∨ ⋁_{i=2}^{k+1} C_φ(X_{si})),
C_{φ1U≤kφ2}(Xs) = R^{0.5} X_{s2}, . . . , R^{0.5} X_{sk+1}. T^k(Xs, X_{s2}, · · · , X_{sk+1}) ∧ (C_{φ2}(Xs) ∨ (C_{φ1}(Xs) ∧ ⋁_{i=2}^{k+1} (C_{φ2}(X_{si}) ∧ ⋀_{j=2}^{i−1} C_{φ1}(X_{sj})))),
C_{Xφ}(Xs) = R^{0.5} X_{s′}. T(Xs, X_{s′}) ∧ C_φ(X_{s′}). (10)

Intuitively, for a state formula φ and a state s, C_φ(s) = ⊤ if and only if s satisfies φ. For a path formula ψ, Pr[C_ψ(s)] gives the sum of the probabilities of the paths in Path_k(s) satisfying ψ. In the construction for the P-operator, the factor 1/2^t, with t being the number of the extra quantified state variables in the construction of the outermost opened pure-counting formula, is a scaling factor for the original threshold p, since we use probability 0.5 for each state variable in the construction. On the other hand, in the constructions involving multiple opened pure-counting formulas, namely C_{φ1∧φ2}, C_{F≤kφ}, and C_{φ1U≤kφ2}, the quantified variables for the transition relations have to be duplicated among the formulas so that their quantified variables are disjoint.

The entire working flow is as follows. We first derive the opened pure-counting formula C_φ(Xs) of the BPCTL formula φ and then transform it into an equivalent SSAT(Θ) formula in the prenex form Θ, R, . . . , Θ, R.ϕ with the rules stated in the following three propositions.

Proposition 1 (Conjunction). Let Φ1 = Θ^{▷p1}, R Y, Q1.ϕ1 and Φ2 = Θ^{▷p2}, Q2.ϕ2 be two pure-counting formulas, possibly with a common set of free variables X. Also, let Q1 be pure counting. If all the variables in Y are not quantified in Φ2, then Φ1 ∧ Φ2 ↔ Θ^{▷p1}, R Y.((Q1.ϕ1) ∧ (Θ^{▷p2}, Q2.ϕ2)).

Proof. We show that under any assignment over X, the left-hand side (LHS) Φ1 ∧ Φ2 is equivalent to the right-hand side (RHS) Θ^{▷p1}, R Y.((Q1.ϕ1) ∧ (Θ^{▷p2}, Q2.ϕ2)). Let Φ′ and Φ′′ be the LHS formula Φ1|σX ∧ Φ2|σX and the RHS formula Θ^{▷p1}, R Y.((Q1.ϕ1|σX) ∧ (Θ^{▷p2}, Q2.ϕ2|σX)), respectively, induced under an assignment σX. To show Φ′ → Φ′′: we have Φ′ = ⊤ if and only if

Θ^{▷p1}, R Y, Q1.ϕ1|σX = ⊤ (11)

and

Θ^{▷p2}, Q2.ϕ2|σX = ⊤. (12)

We have Φ′′ = Θ^{▷p1}, R Y.Q1.ϕ1|σX by Eq. (12), and thus Φ′′ = ⊤ by Eq. (11). To show Φ′ ← Φ′′: we have Φ′′ = ⊤ only if Eq. (12) holds. Otherwise, Φ′′ = Θ^{▷p1}, R Y.((Q1.ϕ1|σX) ∧ ⊥) = ⊥, leading to a contradiction. It follows that Φ′′ = Θ^{▷p1}, R Y.Q1.ϕ1|σX. Under the assumption Φ′′ = ⊤, Eq. (11) must hold.
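Before the remaining composition rules (Propositions 2 and 3 below), here is a small sketch that enumerates the constraints of Eq. (8) for the toy two-state chain used earlier; the variable and predicate spellings are illustrative only.

# A sketch of the probabilistic transition relation of Eq. (8) for the
# two-state DTMC used earlier; names are illustrative, not solver syntax.
P = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}

# One random variable x_r(s,s') per nonzero transition, with Pr[x = T] = P(s, s').
random_vars = {t: (f"xr_{t[0]}{t[1]}", p) for t, p in P.items() if p > 0}

# Implications (X_s = s & X_s' = s') -> x_r(s,s'), plus the big disjunction
# restricting (X_s, X_s') to transitions with positive probability.
implications = [f"(Xs={s} & Xs'={t}) -> {random_vars[(s, t)][0]}"
                for (s, t) in random_vars]
support = " | ".join(f"(Xs={s} & Xs'={t})" for (s, t) in random_vars)

for c in implications:
    print(c)
print(support)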
Proposition 2 (Disjunction). Let Φ1 = Θ^{▷p1}, R Y, Q1.ϕ1 and Φ2 = Θ^{▷p2}, Q2.ϕ2 be two pure-counting formulas, possibly with a common set of free variables X. Also, let Q1 be pure counting. If all the variables in Y are not quantified in Φ2, then Φ1 ∨ Φ2 ↔ Θ^{▷p1}, R Y.((Q1.ϕ1) ∨ (Θ^{▷p2}, Q2.ϕ2)).

Proof. The proof is similar to that of Proposition 1, with "∧" replaced by "∨".

Proposition 3 (Negation). Let Φ = Θ^{▷p}, R Y, Q.ϕ be a pure-counting formula, possibly with free variables X. Also, let Q be pure counting. Then ¬Φ ↔ Θ^{▷(1−p)}, R Y, ¬Q.ϕ.

Proof. We prove the case when there is only one variable y in Y; the proof generalizes to a set Y of variables. Without loss of generality, we assume y = ⊤ with probability p′. We show that under any assignment σX, the LHS ¬Φ is equivalent to the RHS Θ^{▷(1−p)}, R^{p′} y, ¬Q.ϕ. We only consider the case with the outermost threshold quantifier being Θ^{>p}, as the case of Θ^{≥p} is similar. Let ϕ′ = ϕ|σX, Φ′ = ¬Φ|σX, and Φ′′ = Θ^{>(1−p)}, R^{p′} y, ¬Q.ϕ′, induced under the assignment σX. We have

Φ′ = ¬(Pr[R^{p′} y, Q.ϕ′] > p)
   = Pr[R^{p′} y, Q.ϕ′] ≤ p
   = (Pr[Q.ϕ′|y] · p′ + Pr[Q.ϕ′|¬y] · (1 − p′)) ≤ p
   = ((1 − Pr[Q.ϕ′|y]) · p′ + (1 − Pr[Q.ϕ′|¬y]) · (1 − p′)) > 1 − p
   = (Pr[¬Q.ϕ′|y] · p′ + Pr[¬Q.ϕ′|¬y] · (1 − p′)) > 1 − p
   = Θ^{>(1−p)}, R^{p′} y.¬Q.ϕ′ = Φ′′.

Finally, the resulting SSAT(Θ) formula is derived by assigning the state variables Xs to the initial state s0 and converting the matrix into a CNF formula by the Tseitin transformation:

Q, ∃X_D.ϕ′, (13)

where ϕ′ is the CNF formula, X_D is the set of extra definition variables introduced by the Tseitin transformation, and Q is pure counting. Recall that state-of-the-art probabilistic model checkers allow P=? when the outermost operator is the P-operator. To handle this case, we can simply remove the outermost threshold quantifier of Q, yielding, say, Q′ in Eq. (13). The resulting formula is Q′, ∃X_D.ϕ′.

Parameter Synthesis with SSAT(Θ)

In most applications, DTMCs are parameterized by several parameters, and the probability of a given BPCTL property depends on these parameters. Take, for example, the Crowds (Shmatikov 2004) protocol for message transmission, which provides a probabilistic guarantee of the anonymity of the sender. The protocol assumes there are certain numbers of good people and bad people, where the good people cannot communicate with each other and the bad people cooperate with each other. To provide the probabilistic anonymity guarantee, the message is passed to a randomly chosen bad person with probability p and to a randomly chosen good person with probability 1 − p. The only method by which the bad people can acquire information about the real sender is to observe the identity of the one who passed the message to them. One important parameter is the number of good people, called the crowd size, which affects the probabilistic bound of anonymity: the greater the crowd size, the better the anonymity Crowds can provide. Under such a DTMC, we ask the parameter synthesis problem: given a DTMC and a BPCTL property, how do we determine the size of the DTMC, in terms of the parameters, so that the property is maximized?

Note that as long as we have the transition relation of the parameterized DTMC, the parameter synthesis problem can be naturally encoded using SSAT(Θ). We first construct the transition relation of each instantiated DTMC with a certain parameter value.
Suppose the set of variables for the parameter is X_q, which can take values from a finite range set R. Suppose the maximum value in R is r_max and the minimum value in R is r_min. We introduce ⌊log2 (r_max − r_min + 1)⌋ + 1 variables for X_q. We use T_q to denote the transition relation of the DTMC with the parameter value being q ∈ R. Then the parametric transition relation is constructed as follows:

T(X_q, Xs, Xs′) = ⋀_{q∈R} ((X_q = q) → T_q). (14)

We follow the same BPCTL encoding procedure as in the previous subsection, except that we use the parametric transition relation in Eq. (14) instead. Suppose we have the SSAT(Θ) formula in the form of Eq. (13). Then the parameter synthesis problem can be encoded as

∃X_q, R X, Q, ∃X_D. ϕ′ ∧ ⋁_{q∈R} (X_q = q), (15)

where Q is pure counting and X are some outermost state variables. Finally, we remark that both the BPCTL model-checking and the parameter synthesis problems cannot be naturally encoded as regular SSAT, since they involve multiple threshold operations.

Experimental Results

We implemented our SSAT(Θ) solver, named ClauSSat(Θ),¹ in the C++ language by extending ClauSSat (Chen, Huang, and Jiang 2021). We note that we also implemented Algorithm 1 as a baseline for comparison. However, since the baseline version solved none of the instances in our experiments, its results are not included in our discussion. The experiments were conducted on a Linux machine with a 2.2 GHz Intel Xeon CPU and 128 GB RAM. Two benchmark sets were used for evaluation. The first set includes instances from the case study of parameter synthesis on the DTMC of the Crowds (Shmatikov 2004) protocol.² The second set includes instances converted from SSAT formulas. A 1000-second time limit was imposed on solving each instance.

¹Available at https://github.com/NTU-ALComLab/ClauSSatTheta.
²We note that although BPCTL model checking for DTMCs can be encoded in SSAT(Θ), the converted instances are often too large to be solved. For this problem, there are dedicated model checkers, such as Prism (Kwiatkowska, Norman, and Parker 2002), Storm (Dehnert et al. 2017), and Epmc (Hahn et al. 2014), for more direct and effective solving. Hence, we focused on evaluating instances with complex queries that cannot be handled by existing model checkers.

Evaluation on Instances of Parameter Synthesis

To create the benchmark instances of parameter synthesis, we adopted the Crowds (Shmatikov 2004) protocol.

Range   k   #Cls    #Vars   Run Time (s)
[2..4]  1    6112    2203      9.13
        2    6763    2428      0.07
        3    8697    3113     83.78
        4    9318    3337    554.75
        5   10093    3626        TO
[2..6]  1    6923    2489     25.00
        2    9961    3541    277.54
        3   10561    3761    394.02
        4   11625    4139    658.14
        5   11786    4207        TO

Table 1: Results on instances of parameter synthesis.

A DTMC model was first specified in the prism model format (Kwiatkowska, Norman, and Parker 2002), then converted to a transition relation in a bit-vector form of the QF-BV SMT format, and then further bit-blasted using the SMT solver Boolector (Niemetz, Preiner, and Biere 2014).³ Given the BPCTL property to be checked and the transition relation, the SSAT(Θ) formula was created based on the proposed encoding method. Recall that Crowds is a protocol for providing anonymity of the actual sender. The protocol assumes that there is only one actual sender among t potential senders, for t being the crowd size. We define the "safe_i" property as follows:

safe_i = P_{>0.5}[F^{≤m}(observe_i < 1)], (16)

where observe_i is an integer state variable and (observe_i < 1) asserts that the actual sender i is not observed.
It asserts that the probability that the adversary does not observe the actual sender i within the future m steps is greater than 0.5. We say that the system is in a safe state for sender i if the system state satisfies property safe_i.⁴ For the BPCTL property to be checked, we extend the safe_i property to

k-step safe_i = P=?[F^{≤k}(safe_i)]. (17)

Thereby, with Eq. (15), the problem is to search for a crowd-size value such that the k-step safe_i property is maximized starting from the initial state of the DTMC. In the experiment, we checked "safe_0" under m = 5. All the created formulas share the same prefix form ∃-R-Θ-R-∃.

The results are shown in Table 1, where the columns "Range," "k," and "Run Time (s)" report the range of the parameter values, the k of the k-step safe property, and the time spent on solving each case, respectively, and the numbers of clauses and variables of the SSAT(Θ) formula are reported in the columns "#Cls" and "#Vars," respectively. The cases that failed to be solved within the time limit are denoted by "TO." Unsurprisingly, the running time increases drastically as k increases, since the transition relation and the variables have to be duplicated, as mentioned in the previous section. When comparing the solving times of the two range configurations, we notice that the formulas failed to be solved within the time limit when k = 5 for both configurations. We observed that for the [2..4] configuration, the run time seems to increase more acutely with k than for the [2..6] configuration. As two (resp. three) existential variables are allocated for X_q (introduced in the Parameter Synthesis subsection) for the range [2..4] (resp. [2..6]), it would be interesting to investigate the effect of the number of these outermost existential variables on the solving efficiency for the parameter synthesis benchmarks. Finally, we remark that although the current SSAT(Θ) solver can solve most of the generated instances, the step size k and the range size of the parameter values are relatively small, and the efficiency is sensitive to increases in the range size. To alleviate the high computational complexity, it may be crucial to develop specialized SSAT(Θ) solvers and preprocessing techniques.

³We disabled the -vs option in Boolector, as it may perform aggressive SAT-based encoding that might not be sound in SSAT(Θ). The instances were only simplified using Boolean constraint propagation.
⁴The safe_i property can be encoded as an ∃-R-∃ quantified SSAT formula.
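To make the shape of the final query concrete, here is a minimal sketch of the prefix of Eq. (15) for the k-step safe_0 query, written as plain Python data; the constructor tags are our own illustration, not ClauSSat(Θ) input syntax.

# Prefix of Eq. (15) for the k-step safe_0 query, as plain data (illustrative).
# It follows the reported prefix shape "exists-R-Theta-R-exists".
prefix = [
    ("exists", "Xq"),              # crowd-size parameter variables, Eq. (15)
    ("random", 0.5, "X"),          # outermost state variables of the unrolled F<=k
    ("threshold", ">", "p/2**t"),  # from the inner P_{>0.5} of safe_i, Eq. (16),
                                   # with the 1/2**t scaling factor of Eq. (10)
    ("random", 0.5, "X_inner"),    # state variables of the inner F<=m unrolling
    ("exists", "XD"),              # Tseitin definition variables, Eq. (13)
]
matrix = "phi' AND (Xq = 2 OR Xq = 3 OR Xq = 4)"  # range restriction for R = [2..4]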
Evaluation on Instances of SSAT Extension

To study the impact of the threshold quantifier on SSAT instances, we assess the solver's efficiency by evaluating SSAT(Θ) instances generated from existing SSAT benchmarks, taken from (Chen, Huang, and Jiang 2021).⁵ The conversion was done by randomly selecting a random quantification block R V, bi-partitioning the variables V into V1 and V2, and inserting a randomly generated threshold quantifier Θ^{▷p} so that the quantification becomes R V1, Θ^{▷p}, R V2. The bi-partition of the variables V is also determined randomly. Specifically, we selected four families of benchmarks where all the cases in each family are solved by ClauSSat within the time limit. To examine the effect of the threshold quantifier, we compare the performance on the SSAT(Θ) instances with that on the original SSAT instances.

⁵Available at https://github.com/NTU-ALComLab/ClauSSat.

Family     Instance     #R   #Cls    #Vars   SSAT (s)   SSAT(Θ) (s)
tlc        depth-7       2    7491    2809     0.56        0.47
           depth-8       2    8430    3160     0.73        0.61
           depth-9       2    9360    3511     0.92        0.78
gttt       2_2_0010      9    2958    1133    10.05       26.85
           2_2_001020    9    2814    1085   144.16      310.16
           2_2_000111    9    3214    1165   145.36      298.59
Robot      8            39   30444    8892    41.97       61.14
           9            45   34078    9880    50.32       34.35
           10           50   37712   10868    63.78      286.01
stracomp   25.9         25    2278     779     0.45      309.74
           30.9         30    2754     934     0.38          TO
           35.9         35    3227    1089     0.37          TO

Table 2: Results of evaluation on general SSAT(Θ) instances converted from SSAT instances.

The results are shown in Table 2, where three representative instances per family are listed due to the space limit. The column "#R" reports the original number of random-quantified variables in the selected quantification block, and the column "SSAT" (resp. "SSAT(Θ)") reports the time spent on solving the SSAT (resp. SSAT(Θ)) instance. Comparing the performance on the SSAT and SSAT(Θ) instances, we observed that in most cases within the families, inserting threshold quantifiers deteriorates the performance, as expected: the threshold quantifier increases the number of quantification levels, and the current implementation is not equipped with efficient pruning techniques for the alternation R-Θ-R. Also observe that the number of random-quantified variables may play a role in affecting the run time, due to the increased complexity introduced by the threshold quantifier. This effect is especially significant in the family stracomp, where all the listed SSAT instances can be solved in less than one second, while their SSAT(Θ) counterparts fail to be solved within the time limit as the number of random-quantified variables reaches or exceeds 30. However, there is an exception in instance 9 of the Robot family, whose SSAT(Θ) instance, in contrast, takes less time to be solved. Some other factors contributing to the hardness of computation remain to be further investigated.

Conclusions and Future Work

This work presents SSAT(Θ), which unifies the decision and function queries of SSAT by augmenting SSAT with the threshold quantifier. SSAT(Θ) subsumes the counting formulas and can encode problems in the Polynomial and Counting Hierarchies. For a practical case study, we encode BPCTL model checking and the parameter synthesis problem of DTMCs into SSAT(Θ). Experiments demonstrate the feasibility of extending ClauSSat for solving SSAT(Θ) instances. For future work, we plan an extension to DSSAT (Lee and Jiang 2021).

Acknowledgements

This work was supported in part by the National Science and Technology Council of Taiwan under Grant NSTC 111-2923-E-002-013-MY3.

References

Chen, P.-W.; Huang, Y.-C.; and Jiang, J.-H. R. 2021. A Sharp Leap from Quantified Boolean Formula to Stochastic Boolean Satisfiability Solving. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 3697–3706. Chou, Y.-M.; Chen, Y.-C.; Wang, C.-Y.; and Huang, C.-Y. 2016. MajorSat: A SAT Solver to Majority Logic. In Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC), 480–485. Ciesinski, F.; and Größer, M. 2004.
On probabilistic computation tree logic, volume 2925. Springer. Dehnert, C.; Junges, S.; Katoen, J.-P.; and Volk, M. 2017. A Storm Is Coming: A Modern Probabilistic Model Checker. In Proceedings of the International Conference on Computer Aided Verification (CAV), 592–600. Fan, Y.-W.; and Jiang, J.-H. R. 2023. SharpSSAT: A Witness-Generating Stochastic Boolean Satisfiability Solver. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 3949–3958. Ghosh, B.; Basu, D.; and Meel, K. S. 2021. Justicia: A Stochastic SAT Approach to Formally Verify Fairness. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 7554–7563. Hahn, E. M.; Li, Y.; Schewe, S.; Turrini, A.; and Zhang, L. 2014. iscasMc: A Web-Based Probabilistic Model Checker. In Proceedings of the International Symposium of Formal Methods (FM), 312–317. Hsieh, C.-H.; and Jiang, J.-H. R. 2022. Encoding Probabilistic Graphical Models into Stochastic Boolean Satisfiability. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 1834–1842. Kwiatkowska, M.; Norman, G.; and Parker, D. 2002. PRISM: Probabilistic symbolic model checker. In Proceedings of the International Conference on Modelling Techniques and Tools for Computer Performance Evaluation, 200–204. Lee, N.-Z.; and Jiang, J.-H. R. 2021. Dependency Stochastic Boolean Satisfiability: A Logical Formalism for NEXPTIME Decision Problems with Uncertainty. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 3877–3885. Lee, N.-Z.; Wang, Y.-S.; and Jiang, J.-H. R. 2017. Solving Stochastic Boolean Satisfiability under Random-Exist Quantification. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 688–694. Lee, N.-Z.; Wang, Y.-S.; and Jiang, J.-H. R. 2018. Solving Exist-Random Quantified Stochastic Boolean Satisfiability via Clause Selection. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 1339– 1345. Littman, M. L.; Goldsmith, J.; and Mundhenk, M. 1998. The Computational Complexity of Probabilistic Planning. Journal of Artificial Intelligence Research (JAIR), 9(1): 1–36. Littman, M. L.; Majercik, S. M.; and Pitassi, T. 2001. Stochastic Boolean Satisfiability. Journal of Automated Reasoning (JAR), 27(3): 251–296. Majercik, S. M.; and Boots, B. 2005. DC-SSAT: A Divideand-Conquer Approach to Solving Stochastic Satisfiability Problems Efficiently. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 416–422. Majercik, S. M.; and Littman, M. L. 2003. Contingent Planning under Uncertainty via Stochastic Satisfiability. Artificial Intelligence (AI), 147(1-2): 119–162. Niemetz, A.; Preiner, M.; and Biere, A. 2014. Boolector 2.0. Journal on Satisfiability, Boolean Modeling and Computation (JSAT), 9(1): 53–58. Oztok, U.; Choi, A.; and Darwiche, A. 2016. Solving PPPPComplete Problems Using Knowledge Compilation. In Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR), 94–103. Papadimitriou, C. H. 1985. Games Against Nature. Journal of Computer and System Sciences (JSCC), 31(2): 288–301. Salmon, R.; and Poupart, P. 2020. On the Relationship Between Satisfiability and Markov Decision Processes. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 1105–1115. Shmatikov, V. 2004. Probabilistic Model Checking of an Anonymity System. Journal of Computer Security (JCS), 12(3-4): 355–377. Stockmeyer, L. J. 1976. 
The Polynomial-Time Hierarchy. Theoretical Computer Science (TCS), 3(1): 1–22. Wagner, K. W. 1986. The Complexity of Combinatorial Problems with Succinct Input Representation. Acta Informatica, 23(3): 325–356. Wang, H.-R.; Tu, K.-H.; Jiang, J.-H. R.; and Scholl, C. 2022. Quantifier Elimination in Stochastic Boolean Satisfiability. In Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), 23:1–23:17.
DiffSED: Sound Event Detection with Denoising Diffusion

Swapnil Bhosale1*, Sauradip Nag1*, Diptesh Kanojia1, Jiankang Deng2, Xiatian Zhu1
1University of Surrey, UK
2Imperial College London, UK
[email protected]*, [email protected]*

Abstract

Sound Event Detection (SED) aims to predict the temporal boundaries of all the events of interest and their class labels, given an unconstrained audio sample. Taking either the split-and-classify (i.e., frame-level) strategy or the more principled event-level modeling approach, all existing methods consider the SED problem from the discriminative learning perspective. In this work, we reformulate the SED problem by taking a generative learning perspective. Specifically, we aim to generate sound temporal boundaries from noisy proposals in a denoising diffusion process, conditioned on a target audio sample. During training, our model learns to reverse the noising process by converting noisy latent queries to their ground-truth versions in the elegant Transformer decoder framework. Doing so enables the model to generate accurate event boundaries from even noisy queries during inference. Extensive experiments on the Urban-SED and EPIC-Sounds datasets demonstrate that our model significantly outperforms existing alternatives, with 40+% faster convergence in training. Code: https://github.com/Surrey-UPLab/DiffSED.

Introduction

Sound event detection (SED) aims to temporally localize sound events of interest (i.e., their start and end times) and recognize their class labels in a long audio stream (Mesaros et al. 2021). As a fundamental audio signal processing task, it has become the cornerstone of many related recognition scenarios, such as audio captioning (Xu et al. 2021; Bhosale, Chakraborty, and Kopparapu 2023; Xie et al. 2023) and acoustic scene understanding (Igarashi et al. 2022; Bear, Nolasco, and Benetos 2019).

In the literature, all existing SED methods can be grouped into two categories, namely frame-level and event-level approaches. Frame-level approaches classify each audio frame/segment into event classes and then aggregate the consecutive frame-level predictions to identify sound event boundaries or endpoints (Miyazaki et al. 2020a; Lin et al. 2019). They are often heavily manually designed with plenty of heuristics and data-specific parameter optimization, hence less scalable and reliable across different audio data. Event-level approaches, on the other hand, directly model the temporal boundaries of sound events, taking into account the correlation between frames, thereby eliminating the mundane post-processing step, and are more generalizable (Ye et al. 2021). In both approaches, existing methods rely on proposal prediction by regressing the start and end times of each event, i.e., they are discriminative learning based.

Figure 1: Architectural comparison: (a) Conventional discriminative DETR-based Sound Event Detector Transformer (SEDT) (Ye et al. 2021) incorporates a single decoding step with clean queries. (b) Our diffusion-infused generative DETR-based Sound Event Detector (DiffSED) conducts multi-step decoding/denoising over noised queries.

Recently, generative learning models such as diffusion models (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020) have emerged strongly in computer vision. Conceptually, we draw an analogy between the SED problem and image-based object detection (Duan et al. 2019; Chen et al. 2019).
We consider that the latest generative learning based object detection approach (Chen et al. 2022b) represents a new direction for designing detection models in general. Although conceptually similar to object detection, the SED problem still presents unique challenges and complexity due to the presence of temporal dynamics. Besides, there are several limitations with the detection diffusion formulation in (Chen et al. 2022b). First, a two-stage pipeline (e.g., R-CNN (Chao et al. 2018)) is adopted, giving rise to localization-error propagation from proposal generation to proposal classification (Nag et al. 2022). Second, as each event proposal is processed individually, their intrinsic relationship modeling is overlooked, potentially hurting the learning efficacy. To address these issues, we present two different designs: (a) Adopting the one-stage detection pipeline (Tian et al. 2019; Wang et al. 2020), which has already shown excellent performance with a relatively simpler design, in particular DETR (Carion et al. 2020). Even within the SED literature, this simpler pipeline has been shown to achieve higher accuracy than frame-level models on a variety of sound event detection datasets, due to its better temporal resolution as well as its ability to learn long-range dependencies between sound events (Ye et al. 2021). (b) A unique challenge with SED is larger boundary ambiguity compared to object detection. This is because temporal audio events are continuous in time without clear start and end points (e.g., non-zero momentum), and the transition between consecutive events is often stochastic. Further, human perception of event boundaries is instinctive and subjective. For the above reasons, we reckon that diffusion-based models could be a great fit for sound event detection.

Nonetheless, it is non-trivial to integrate denoising diffusion with existing sound event detection models, for several reasons. (1) Whilst efficient at processing high-dimensional data simultaneously, diffusion models (Dhariwal and Nichol 2021; Li et al. 2022) have typically been shown to work with continuous input data, whereas event boundaries in SED are discrete. (2) Denoising diffusion and SED both suffer from low efficiency, and their combination would be even worse. Neither problem has been investigated systematically thus far.

To address the aforementioned challenges, a novel conditioned event diffusion method, abbreviated as DiffSED, is proposed for efficiently tackling the SED task. In the forward diffusion process, Gaussian noises are added to the event latents iteratively. In the reverse denoising process, the noisy latents are passed as queries to a denoiser (e.g., DETR (Carion et al. 2020)) for denoising the event latents so that the desired event proposals can be obtained, conditioned on the observation of an input audio stream. The usage of noisy latents allows our model to bypass the need for continuous input, as the denoising diffusion process takes place in the designated latent space. During inference, the model can take as input noisy latents composed of noises sampled from a Gaussian distribution and learned components, and it outputs the event proposals of a given audio stream (i.e., the condition).
The proposed noise-to-queries strategy for denoising diffusion has several appealing properties: (i) evolutionary enhancement of queries during inference, wherein each denoising step can be interpreted as a unique distribution of noise, thus adding stochasticity to solve the boundary ambiguity problem; (ii) integrating denoising diffusion with this noisy-latent decoder design solves the typical slow-convergence limitation.

We summarize the contributions of this work. (a) We reformulate sound event detection (SED) as a generative denoising process (see Fig. 1) in an elegant transformer decoder framework. To the best of our knowledge, this is the first study to apply the diffusion model to the SED task. (b) The proposed generative adaptation uses a noise-to-queries strategy with several appealing properties, such as evolutionary enhancement of queries and faster convergence. (c) Our comprehensive experiments on the URBAN-SED (Salamon, Jacoby, and Bello 2014) and EPIC-Sounds (Huh et al. 2023) datasets validate the significant performance advantage of our DiffSED over existing alternatives.

Related Work

Sound Event Detection

The existing SED literature can be divided into two categories, namely frame-level approaches and event-level approaches. In frame-level approaches (Lim, Park, and Han 2017; Turpault et al. 2019; Miyazaki et al. 2020a), the input audio signal is first divided into short, fixed-length segments, and the sound events within each segment are classified independently. Despite strong performance and good intuition, this split-and-classify strategy requires plenty of heuristic designs and unscalable parameter settings (e.g., segment duration), as well as time-consuming post-processing (e.g., aggregating frame-level predictions). To overcome these limitations, event-level approaches (Ye et al. 2021) present a more principled and scalable solution with end-to-end learning frameworks, inspired by the model designs in the object detection (Carion et al. 2020; Zhu et al. 2020; Zhang et al. 2022) and video action recognition domains (Tan et al. 2021; Shi et al. 2022). Whilst understudied, this strategy has been shown to be more efficient and robust for longer and more complex (overlapping) events, such as those in music and human speech, as well as for short and frequently occurring events, such as those in urban soundscapes or environmental monitoring. Our DiffSED belongs to this category, further pushing this forefront of performance.

Deep learning techniques have achieved excellent performance in SED. For instance, convolutional neural networks (CNNs) have been widely investigated for audio event classification (Cakır et al. 2017; Kumar, Khadkevich, and Fügen 2018) owing to their ability to efficiently capture and analyze local patterns within the acoustic waveform. Additionally, recurrent neural networks (RNNs) have been used for temporal modeling of audio signals owing to their propensity to capture long-term temporal dependencies in sequential data, an innate property of audio signals. Interestingly, apart from hybrid approaches (Li et al. 2020; Koh et al. 2021) that utilize CNNs to extract features from the audio signal, which are then fed into an RNN to model temporal dependencies, transformer-based architectures (Wakayama and Saito 2022; Chen et al. 2022a) have recently been shown to be equally promising, particularly by leveraging self-attention mechanisms to model temporal relationships in audio signals and capture complex patterns over time.
Commonly, all the prior methods consider the SED problem as discriminative learning. In contrast, we treat this problem, for the first time, from the unique perspective of generative learning. In particular, we generate the sound event bounds and predict the class labels from noise latents, conditioned on the input audio sample.

Figure 2: Overview of our proposed DiffSED. (Top) In the forward diffusion process, Gaussian noises are added to the event latents iteratively to obtain noisy latents X_T. (Bottom) In the reverse denoising process, an audio mel-spectrogram is passed as the condition along with random noisy latents sampled from the Gaussian distribution. The noisy latents are passed as the query to the denoiser for denoising the event latents in an iterative fashion to obtain event proposals.

Diffusion-Based Models for Audio Tasks

As a new class of deep generative models, diffusion models have been gaining popularity in different fields. Beginning with a sample from a random distribution, the diffusion model is optimized to gradually learn a denoising schedule to obtain a noise-free target. This paradigm has yielded remarkable results in audio processing tasks ranging from audio generation (Leng et al. 2022; Huang et al. 2022) and audio enhancement (Lemercier et al. 2022) to audio separation (Lutati, Nachmani, and Wolf 2023). To the best of our knowledge, this is the first work that exploits a diffusion model for the SED task.

Methodology

Problem Definition

Sound event detection (SED) involves both classification and temporal localization given an audio sequence. In this task, the audio sequence is usually represented as a 2-dimensional feature, such as a mel-spectrogram. We want a model to output the onset and offset times of all target events and the corresponding event labels (Wakayama and Saito 2022). To train the model, we collect a labeled audio sequence set D_train = {A_i, ψ_i}. Each audio A_i ∈ R^{T×F} (where T × F represents the spectro-temporal dimension) is labeled with a temporal annotation ψ_i = {(Ψ_j, ξ_j, y_j)}_{j=1}^{M_i}, where Ψ_j/ξ_j represents the onset/offset of an event and y_j denotes the acoustic event class label.

Preliminaries on Diffusion Models

Diffusion models are a class of generative models that use the diffusion process to model complex probability distributions (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020). In a diffusion model, the forward process generates samples by iteratively applying a diffusion equation to a starting noise vector. The forward process can be represented by the following equation:

z_t = √(1 − β_t) · z_{t−1} + √(β_t) · x_t, (1)

where z_t is the diffusion state at time t, x_t is the input at time t, and β_t is the diffusion coefficient at time t. The noise scale is controlled by β_t, which adopts a monotonically decreasing cosine schedule (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020) over the time steps t.
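For concreteness, here is a minimal NumPy sketch of this forward corruption in its standard closed form, x_t = √(ᾱ_t)·x_0 + √(1 − ᾱ_t)·ε with ᾱ_t the cumulative product of (1 − β_t), which is also the form used by alpha_cumprod in Algorithm 1 below; the exact cosine-schedule constants are our assumption, not the authors' reported hyperparameters.

import numpy as np

def cosine_alpha_cumprod(T, s=0.008):
    """Cumulative product of (1 - beta_t) under a cosine schedule (assumed shape)."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f[1:] / f[0]   # alpha_bar_t, monotonically decreasing in t

def q_sample(x0, t, alpha_bar, rng):
    """Corrupt clean latents x0 directly to diffusion step t in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
alpha_bar = cosine_alpha_cumprod(T=1000)
x0 = rng.standard_normal((8, 256))      # e.g., N=8 event queries of dimension D=256
xt = q_sample(x0, t=500, alpha_bar=alpha_bar, rng=rng)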
Denoising in diffusion models is the process of generating a clean representation from a noisy observation by reversing the diffusion process. In other words, the goal is to obtain an estimate of the original representation from the final diffusion state. The denoising can be performed using the reverse diffusion process, represented by the following equation:

z_{T−t} = (z_{T−t+1} − √(β_{T−t}) · x_{T−t}) / √(1 − β_{T−t}), (2)

where z_T is the final diffusion state, x_t is the noisy input at time t, and β_{T−t} is the diffusion coefficient at time T − t. The denoising process starts from the final diffusion state z_T and iteratively applies the reverse diffusion equation to obtain an estimate of the original representation x_0 = z_1:

x_t = (z_{t+1} − √(β_t) · x_{t−1}) / √(1 − β_t), (3)

where ∀t ∈ [1, T − 1], x_t is the estimate of the original representation at time t. The denoising process can be improved by adding regularization or constraints to the estimate of the original representation.

DiffSED: Architecture Design

Diffusion-Based SED Formulation. In this work, we formulate the SED task in a conditional denoising diffusion framework. In our setting, the data samples are a set of learnable event query embeddings z_0 = b, where b ∈ R^{N×D} denotes N event query embeddings of dimension D. In our implementation, the event queries are retrieved from a simple lookup table that stores the embeddings of a fixed dictionary of size N (initialized from N(0, 1)). A neural network f_θ(z_t, t, A) is trained to predict z_0 from noisy proposals z_t, conditioned on the corresponding audio A. The audio category ŷ is predicted subsequently. See Algorithm 1 for more details.

Algorithm 1: Training

def train_loss(audio, event_queries):
    """
    audio: [B, T, F]
    event_queries: [B, N, D]
    B: batch_size; N: number of event queries
    """
    # Encode audio features
    audio_feats = audio_encoder(audio)
    # Signal scaling
    event_queries = (event_queries * 2 - 1) * scale
    # Corrupt event_queries
    t = randint(0, T)  # time step
    eps = normal(mean=0, std=1)  # noise: [B, N, D]
    event_queries_crpt = (sqrt(alpha_cumprod(t)) * event_queries
                          + sqrt(1 - alpha_cumprod(t)) * eps)
    # Predict bounding boxes
    pb_pred = detection_decoder(event_queries_crpt, audio_feats, t)
    # Set prediction loss
    loss = set_prediction_loss(pb_pred, gt_boxes)
    return loss

Since the diffusion model generates a data sample iteratively, it needs to run the model f_θ multiple times at inference. It would be computationally intractable to directly apply f_θ to the raw audio at every iterative step. For efficiency, we propose to separate the whole model into two parts, an audio encoder and a detection decoder, where the former runs only once to extract a feature representation of the input audio A_i, and the latter takes this feature as a condition to progressively refine the noisy proposals z_t (see Fig. 2).

Audio Encoder. The audio encoder takes as input the pre-extracted audio mel-spectrograms and extracts high-level features for the following detection decoder. In general, any audio encoder can be used; we follow (Ye et al. 2021). More specifically, the raw audio is first encoded using a CNN-based encoder backbone (i.e., ResNet-50) to obtain the audio feature A_f ∈ R^{T′×F′}. This is followed by a multi-layered temporal transformer (Vaswani et al. 2017) τ that performs global attention across the time dimension to obtain the global feature as:

C_a = τ(A_f), (4)

where the query, key, and value of the transformer are all set to A_f. We also append positional encoding to A_f before passing it into the transformer.
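A compact PyTorch sketch of this encoder path is given below; the feature dimension, layer counts, and the frequency pooling are our assumptions, as the paper specifies only the ResNet-50 backbone and the temporal transformer of Eq. (4).

import torch
import torch.nn as nn
import torchvision

class AudioEncoder(nn.Module):
    def __init__(self, d_model=256, n_layers=3, n_heads=8):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Single-channel input for mel-spectrograms (assumption).
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, n_layers)

    def forward(self, mel):                    # mel: [B, 1, T, F]
        f = self.proj(self.cnn(mel))           # [B, D, T', F']
        f = f.mean(dim=3).transpose(1, 2)      # pool frequency -> [B, T', D]
        # (positional encoding would be added here before the transformer)
        return self.temporal(f)                # C_a = tau(A_f), Eq. (4)

enc = AudioEncoder()
ca = enc(torch.randn(2, 1, 128, 64))
print(ca.shape)   # torch.Size([2, 4, 256])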
Detection Decoder. Similar to SEDT (Ye et al. 2021), we use a transformer decoder (Vaswani et al. 2017) (denoted by f_θ) for detection. Functionally, in our formulation it serves as a denoiser. In traditional DETR (Lin et al. 2021), the queries are learnable continuous embeddings with random initialization. In DiffSED, however, we exploit the queries as the denoising targets. As opposed to adding noises to object boundaries (Chen et al. 2022b), we inject the Gaussian noise into the randomly initialized latent queries. This is similar to the concept of event queries (Rombach et al. 2022). To detect multiple events occurring simultaneously, we sample N such noisy event queries to form Q ∈ R^{N×D}, which is subsequently passed on to the detection decoder for denoising. Taking Q as input, the decoder predicts N outputs:

F_d = f_θ(Q; C_a) ∈ R^{N×D}, (5)

where C_a is the encoded audio feature and F_d is the final embedding. F_d is finally decoded using two parallel heads, namely (1) an event classification head and (2) an event localization head. The first estimates the probability of a particular event within the event proposal. The second estimates the onset and offset of the event in the raw audio.

Model Training. During training, we first construct the diffusion process that corrupts the event latents to noisy latents and then train the model to reverse this noising process. We add Gaussian noises to the learnable queries. The noise scale is controlled by β_t (Eq. (1)), which adopts a monotonically decreasing cosine schedule at different timesteps t, following (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020). The decoder uses the noisy event queries (corresponding to t) and the global feature C_a as the condition (see Fig. 1(b)) to generate the denoised event queries (corresponding to t − 1) repeatedly until an approximation of Q is obtained. The output from the last denoising step (corresponding to each input event query) is projected into sigmoidal onset and offset timestamps and an event probability distribution using separate feedforward projection layers. We observe that SED favors a relatively higher signal scaling value than object detection (Chen et al. 2022b) (see Table 5). The event-based objective is defined as a combination of a binary classification loss for event onset and offset prediction and a cross-entropy loss for event class prediction. We compute the Hungarian assignment between the ground-truth boxes and the outputs of the model, and we supervise the model training using each pair of matched ground truth/prediction (event class and temporal boundary).

Model Inference. At inference, the noisy event queries are randomly sampled from a Gaussian distribution. Starting from these noisy latents, the model progressively refines the predictions. At each sampling step, the random or estimated latents from the last sampling step are sent into the detection decoder to predict the event category and the event onsets/offsets. After obtaining the event proposals of the current step, DDIM (Song, Meng, and Ermon 2021) is adopted to estimate the proposals for the next step. DiffSED has a simple event proposal generation pipeline without post-processing (e.g., non-maximum suppression).
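A sketch of this sampling loop under the usual deterministic DDIM update; the decoder interface, step count, and renoising details are assumptions, not the released implementation.

import torch

@torch.no_grad()
def infer(decoder, ca, alpha_bar, num_steps=4, n_queries=30, d=256):
    """Iteratively refine Gaussian queries into event proposals (DDIM-style).
    alpha_bar: [T] tensor of cumulative products of (1 - beta_t)."""
    q_t = torch.randn(1, n_queries, d)                 # noisy latent queries
    steps = torch.linspace(999, 0, num_steps).long()
    for t, t_prev in zip(steps, steps[1:]):
        # decoder is assumed to return clean-latent estimates plus event outputs
        q0_hat, boxes, logits = decoder(q_t, ca, t)
        a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
        eps = (q_t - a_t.sqrt() * q0_hat) / (1 - a_t).sqrt()
        q_t = a_prev.sqrt() * q0_hat + (1 - a_prev).sqrt() * eps  # DDIM step
    return boxes, logits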
Key Insights

One Model, Multiple Trade-Offs Once trained, DiffSED works under a varying number of event queries and sampling steps at inference. Each sampling step involves estimating event queries from the last sampling step and sending them back into the detection decoder, which eventually predicts the event classes and event boundaries at step t_0, i.e., fully denoised. In general, better accuracy can be obtained using more queries and fewer steps (see Table 3 and Table 4). We discuss the multistep decoding experiments in detail in our ablation study. Ultimately, a single trained DiffSED can meet a number of different trade-off needs between speed and accuracy.

Faster Convergence DETR-style detection models generally suffer from slow convergence (Liu et al. 2022) due to inconsistent matching of event queries to the event proposals. Concretely, for the same audio, an event query is often matched with different event boundaries in different epochs, making the optimization oscillatory and difficult. In DiffSED, each query is designed as a proposal proxy: a noised event query that can be regarded as a good event proposal because it stays close to the corresponding ground-truth boundary. Our query denoising task thus has a definite optimization objective, namely the ground-truth proposal. We validate that query-denoising-based DiffSED converges faster than SEDT (see Fig. 3), whilst achieving superior performance (Table 1).

Experiments

Datasets We present our results on two datasets, URBAN-SED (Salamon, Jacoby, and Bello 2014) and EPIC-Sounds (Huh et al. 2023). URBAN-SED is a publicly available dataset for SED in urban environments. It is accompanied by detailed annotations, including onset and offset times for each sound event, along with accurate human-generated annotations. The EPIC-Sounds dataset consists of more than 36,000 audio recordings of various lengths, totaling over 500 hours of audio. The recordings were made in a variety of indoor and outdoor environments, including office spaces, public places, and natural environments. They cover a wide range of sound classes, including human speech, animal sounds, environmental sounds, and music.

Figure 3: Convergence rates for SEDT and DiffSED on the URBAN-SED dataset. The dotted lines represent the training epoch at which the best-performing checkpoint (the one with the best audio-tagging F1 score on the validation set) arrived. DiffSED trains faster (>40%) and achieves a better optimum than SEDT.

Evaluation Metrics To evaluate the model's performance on the URBAN-SED dataset, we measure F1-score, precision, and recall for both event-level and segment-level settings on the test split. For the EPIC-Sounds dataset, we report the top-1 and top-5 accuracy, as well as mean average precision (mAP), mean area under the ROC curve (mAUC), and mean per-class accuracy (mCA) on the validation split, following the protocol of (Huh et al. 2023).

Implementation Details

Training Schedule We use a pre-trained encoder backbone ResNet-50 for feature extraction, for fair comparison with previous methods (Ye et al. 2021). Our model is trained for 400 epochs, re-initializing the weights from the best checkpoint every 100 epochs, using the Adam optimizer with an initial learning rate of 10^−4 and a decay of 10^−2. The batch size is set to 64 for URBAN-SED and 128 for EPIC-Sounds. All models are trained with two NVIDIA A5500 GPUs.
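For illustration, one common way to realize the monotonically decreasing cosine schedule mentioned above, and to derive the alpha_cumprod(t) used for corruption in Algorithm 1, is sketched below; the constants are widely used defaults, assumed here rather than taken from the paper.

    import math
    import torch

    def cosine_alphas_cum(T=1000, s=0.008):
        # cumulative product of (1 - beta_t) under a cosine schedule;
        # monotonically decreasing in t, so noise grows with the timestep
        t = torch.arange(T + 1) / T
        f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
        return (f / f[0]).clamp(1e-5, 1.0)

    alphas_cum = cosine_alphas_cum()
    # forward corruption at a random step, as in Algorithm 1:
    # z_t = sqrt(alphas_cum[t]) * z_0 + sqrt(1 - alphas_cum[t]) * eps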
Testing Schedule At the inference stage, the detection decoder iteratively refines the predictions from Gaussian random latent queries. For efficiency, by default, we denoise for a single time-step (T_0 ← T_1000).

    Model                                Event-based [%]         Segment-based [%]       Audio tagging [%]
                                         F1      P       R       F1      P       R       F1
    CRNN-CWin (Miyazaki et al. 2020b)    36.75   −       −       65.74   −       −       74.19
    Ctrans-CWin (Miyazaki et al. 2020b)  34.36   −       −       64.73   −       −       74.05
    SEDT (Ye et al. 2021)                37.27   43.32   33.21   65.21   74.82   58.46   74.37
    DiffSED (Ours)                       43.89   48.46   37.82   69.24   77.49   62.05   77.87

Table 1: Results on URBAN-SED (test set)

    Model                      Top-1   Top-5   mCA     mAP     mAUC
    ASF (Kazakos et al. 2021)  53.47   84.56   20.22   0.235   0.879
    SSAST (Gong et al. 2022)   53.75   84.54   20.11   0.254   0.873
    DiffSED (Ours)             56.85   87.45   20.75   0.277   0.861

Table 2: Results on EPIC-Sounds (validation set)

Algorithm 2: Noise corruption

    def add_noise(gt_boxes, event_queries):
        """
        gt_boxes:      [B, *, 2]
        event_queries: [B, N, D]
        # B: batch size, N: number of event queries
        """
        if corrupt_bounding_boxes:  # DiffSED-BB
            # Pad (repeat) bounding boxes
            pb = Pad(gt_boxes, N)                    # [B, N, 2]
            # Signal scaling
            pb = (pb * 2 - 1) * scale
            # Corrupt bounding boxes
            t = randint(0, T)                        # time step
            eps = normal(mean=0, std=1)              # noise: [B, N, 2]
            pb_crpt = sqrt(alpha_cumprod(t)) * pb \
                      + sqrt(1 - alpha_cumprod(t)) * eps
            event_queries_crpt = Project(pb_crpt)    # [B, N, 2] -> [B, N, D]
        else:  # DiffSED
            # Signal scaling
            event_queries = (event_queries * 2 - 1) * scale
            # Corrupt event_queries
            t = randint(0, T)                        # time step
            eps = normal(mean=0, std=1)              # noise: [B, N, D]
            event_queries_crpt = sqrt(alpha_cumprod(t)) * event_queries \
                                 + sqrt(1 - alpha_cumprod(t)) * eps
        return event_queries_crpt

Main Results

Results on URBAN-SED We compare our model with previous end-to-end approaches under the supervised learning setting. The primary contribution of our work lies in proposing a diffusion-infused transformer decoder that provides a more robust representation of grounded event boundaries in the encoded acoustic features. From Table 1, we draw the following conclusions: (1) The diffusion-based decoder of DiffSED performs significantly better than all the other methods for both event-level and segment-level metrics, with a 6.62% and 4.03% absolute improvement, respectively. (2) Additionally, our model outperforms existing approaches in terms of audio-tagging results, with a 3.5% absolute improvement. This validates our model formulation in exploiting the SED problem as generative learning in the denoising diffusion framework.

Results on EPIC-Sounds We use the publicly available pre-trained backbones ASF (Kazakos et al. 2021) and SSAST (Gong et al. 2022) as competing models. We observe from Table 2 that: (1) DiffSED consistently outperforms both alternatives, with 3.1% and 2.89% improvements in the top-1 and top-5 accuracy, respectively; (2) our model performs competitively in the mAUC score.

Ablation Study We conduct ablation experiments on URBAN-SED to study DiffSED in detail. All experiments use the pre-trained ResNet-50 backbone features for training and inference unless specified otherwise.

Denoising Strategy Due to the inherent query-based design of the detection decoder, we discuss and compare two denoising strategies: (1) corrupting the event latents in the continuous space and passing them as queries (referred to as DiffSED, our choice);
and (2) corrupting discrete event proposals (i.e., ground-truth bounding boxes) and projecting them as queries (denoted DiffSED-BB, detailed in Algorithm 2). Additionally, we corrupt the label queries using random shuffling as the noise in the forward diffusion step. To evaluate the effect of the denoising strategy experimentally, we test both variants using different numbers of event proposals. As Table 3 shows, both variants achieve their best audio-tagging performance when using 30 event proposals as input to the decoder. Also, the overall scores in both event-level and segment-level metrics are lower for DiffSED-BB than for DiffSED. We hypothesize this is caused by some adversarial effect in projecting the ground-truth bounding box (2-dimensional) to the latent event query.

                #Queries   Event-F1 [%]   Segment-F1 [%]   AT [%]
    DiffSED-BB  10         31.43          58.85            68.87
                20         35.32          60.53            68.84
                30         37.29          60.91            69.61
                40         31.95          58.79            68.41
                50         31.81          57.89            68.31
    DiffSED     10         40.78          68.41            77.22
                20         41.42          68.73            76.54
                30         41.3           68.21            77.46
                40         38.65          67.21            75.21
                50         36.28          64.22            72.77

Table 3: Effect of the number of queries on the performance for the URBAN-SED test set. (AT: audio tagging performance)

Multistep Decoding We tabulate the results for varying numbers of denoising steps for both DiffSED and DiffSED-BB in Table 4. We observe a steady improvement in the event-level and segment-level F1 scores as we increase the number of denoising steps from 1 to 5, followed by a gradual decrease at 10 decoding steps. However, the best audio-tagging performance is achieved with single-step decoding. We hypothesize this is primarily because the event boundaries have short-range temporal dependencies that might not benefit significantly from multistep denoising. The noise addition mainly affects each time step independently and does not accumulate over multiple steps, and hence does not yield substantial improvements. Denoising over multiple timesteps requires more compute while providing only a marginal gain, and is thus not worthwhile.

                #steps   Event-F1 [%]   Segment-F1 [%]   AT [%]
    DiffSED-BB  1        39.78          64.74            72.92
                5        38.27          65.72            71.88
                10       38.3           64.82            72.17
    DiffSED     1        43.89          69.24            77.87
                5        44.35          70.75            77.07
                10       43.50          69.05            77.36

Table 4: Effect of the number of denoising steps used at inference on the performance for the URBAN-SED test set. (AT: audio tagging performance)

Signal Scaling The signal scaling factor controls the signal-to-noise ratio (SNR) of the diffusion process. We study its influence in Table 5. The results demonstrate that a scaling factor of 0.4 achieves the highest audio-tagging performance as well as the best values on all other metrics for DiffSED, whereas for DiffSED-BB the best audio-tagging performance is obtained at a scaling factor of 0.2, while the best event-level and segment-level F1 scores occur at a scaling factor of 0.4. This suggests a relationship between the optimal scaling and the denoising strategy.

                Noise scale   Event-F1 [%]   Segment-F1 [%]   AT [%]
    DiffSED-BB  0.1           32.61          32.45            73.49
                0.2           35.91          35.73            75.73
                0.3           37.29          60.91            69.61
                0.4           39.78          64.74            72.92
                0.5           33.14          61.79            71.12
    DiffSED     0.1           37.61          54.63            72.2
                0.2           39.65          58.17            73.89
                0.3           41.3           68.21            77.46
                0.4           43.89          69.24            77.87
                0.5           39.23          59.25            72.78

Table 5: Effect of scaling the noise factor on the performance for the URBAN-SED test set. (AT: audio tagging performance)
                Runs   Event-F1 [%]     Segment-F1 [%]    AT [%]
    DiffSED-BB  1      38.6  (↑0.2)     64.32 (↓0.09)     72.48 (0.0)
                2      39.45 (↓0.57)    64.15 (↑0.07)     72.88 (↓0.4)
                3      38.57 (↑0.3)     64.21 (↑0.01)     72.08 (↑0.4)
                Avg    38.87            64.22             72.48
    DiffSED     1      43.12 (↓0.2)     68.38 (↑0.5)      77.62 (↓0.01)
                2      42.35 (↑0.5)     68.97 (↑0.01)     77.59 (↑0.02)
                3      43.29 (↓0.3)     69.54 (↓0.5)      77.62 (↓0.01)
                Avg    42.92            68.96             77.61

Table 6: Effect of changing the seed value for inducing noise during inference. Values in parentheses indicate the deviation from the mean calculated over 3 runs.

Random Seed DiffSED starts with random noisy event queries as input during inference. We evaluate the stability of DiffSED and DiffSED-BB by training three models independently with strictly the same configuration (30 noisy event proposals as input to the decoder and a scaling factor of 0.4) except for the random seed on the URBAN-SED dataset. Then, we evaluate each model instance with 3 different random seeds to measure the distribution of performance, inspired by (Chen et al. 2022b). As shown in Table 6, most evaluation results are distributed closely around the average metrics for both variants. This demonstrates that our models are robust to random event queries.

Conclusion

In this work, we reformulate the Sound Event Detection (SED) problem from the generative learning perspective, in particular under the diffusion-based transformer framework. We introduce a diffusion adaptation method characterized by denoising of noisy event latents. This design has the advantage of being able to model the global dependencies of sound events while still being computationally efficient. Our study verifies the efficacy of diffusion models in a new problem context (i.e., SED), consistent with previous findings. Experiments show that our method is superior to existing alternatives on standard benchmarks.

References

Bear, H. L.; Nolasco, I.; and Benetos, E. 2019. Towards joint sound scene and polyphonic sound event recognition. In INTERSPEECH.
Bhosale, S.; Chakraborty, R.; and Kopparapu, S. K. 2023. A Novel Metric For Evaluating Audio Caption Similarity. In IEEE ICASSP.
Çakır, E.; Parascandolo, G.; Heittola, T.; Huttunen, H.; and Virtanen, T. 2017. Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European conference on computer vision, 213–229. Springer.
Chao, Y.-W.; Vijayanarasimhan, S.; Seybold, B.; Ross, D. A.; Deng, J.; and Sukthankar, R. 2018. Rethinking the faster r-cnn architecture for temporal action localization. In IEEE CVPR.
Chen, K.; Du, X.; Zhu, B.; Ma, Z.; Berg-Kirkpatrick, T.; and Dubnov, S. 2022a. HTS-AT: A hierarchical token-semantic audio transformer for sound classification and detection. In IEEE ICASSP.
Chen, K.; Wang, J.; Pang, J.; Cao, Y.; Xiong, Y.; Li, X.; Sun, S.; Feng, W.; Liu, Z.; Xu, J.; Zhang, Z.; Cheng, D.; Zhu, C.; Cheng, T.; Zhao, Q.; Li, B.; Lu, X.; Zhu, R.; Wu, Y.; Dai, J.; Wang, J.; Shi, J.; Ouyang, W.; Loy, C. C.; and Lin, D. 2019. MMDetection: Open MMLab Detection Toolbox and Benchmark. arXiv preprint arXiv:1906.07155.
Chen, S.; Sun, P.; Song, Y.; and Luo, P. 2022b. Diffusiondet: Diffusion model for object detection. arXiv preprint arXiv:2211.09788.
Dhariwal, P.; and Nichol, A. 2021. Diffusion models beat gans on image synthesis. NeurIPS.
Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; and Tian, Q. 2019. Centernet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 6569–6578.
Gong, Y.; Lai, C.-I.; Chung, Y.-A.; and Glass, J. 2022. Ssast: Self-supervised audio spectrogram transformer. In AAAI Conference on Artificial Intelligence.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Neural Information Processing Systems (NeurIPS).
Huang, R.; Lam, M. W.; Wang, J.; Su, D.; Yu, D.; Ren, Y.; and Zhao, Z. 2022. Fastdiff: A fast conditional diffusion model for high-quality speech synthesis. International Joint Conferences on Artificial Intelligence (IJCAI).
Huh, J.; Chalk, J.; Kazakos, E.; Damen, D.; and Zisserman, A. 2023. EPIC-SOUNDS: A Large-Scale Dataset of Actions that Sound. In IEEE ICASSP.
Igarashi, A.; Imoto, K.; Komatsu, Y.; Tsubaki, S.; Hario, S.; and Komatsu, T. 2022. How Information on Acoustic Scenes and Sound Events Mutually Benefits Event Detection and Scene Classification Tasks. In IEEE Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC).
Kazakos, E.; Nagrani, A.; Zisserman, A.; and Damen, D. 2021. Slow-fast auditory streams for audio recognition. In IEEE ICASSP.
Koh, C.-Y.; Chen, Y.-S.; Liu, Y.-W.; and Bai, M. R. 2021. Sound event detection by consistency training and pseudo-labeling with feature-pyramid convolutional recurrent neural networks. In IEEE ICASSP.
Kumar, A.; Khadkevich, M.; and Fügen, C. 2018. Knowledge transfer from weakly labeled audio using convolutional neural network for sound events and scenes. In IEEE ICASSP.
Lemercier, J.-M.; Richter, J.; Welker, S.; and Gerkmann, T. 2022. Analysing Diffusion-based Generative Approaches versus Discriminative Approaches for Speech Restoration. In IEEE ICASSP.
Leng, Y.; Chen, Z.; Guo, J.; Liu, H.; Chen, J.; Tan, X.; Mandic, D.; He, L.; Li, X.-Y.; Qin, T.; et al. 2022. Binauralgrad: A two-stage conditional diffusion probabilistic model for binaural audio synthesis. Neural Information Processing Systems (NeurIPS).
Li, X. L.; Thickstun, J.; Gulrajani, I.; Liang, P.; and Hashimoto, T. B. 2022. Diffusion-LM Improves Controllable Text Generation. arXiv preprint arXiv:2205.14217.
Li, Y.; Liu, M.; Drossos, K.; and Virtanen, T. 2020. Sound event detection via dilated convolutional recurrent neural networks. In IEEE ICASSP.
Lim, H.; Park, J.-S.; and Han, Y. 2017. Rare Sound Event Detection Using 1D Convolutional Recurrent Neural Networks. In Detection Classification Acoustic Scenes Events (DCASE) Workshop.
Lin, L.; Wang, X.; Liu, H.; and Qian, Y. 2019. Guided learning convolution system for dcase 2019 task 4. arXiv preprint arXiv:1909.06178.
Lin, M.; Li, C.; Bu, X.; Sun, M.; Lin, C.; Yan, J.; Ouyang, W.; and Deng, Z. 2021. DETR for Crowd Pedestrian Detection. arXiv:2012.06785.
Liu, S.; Li, F.; Zhang, H.; Yang, X.; Qi, X.; Su, H.; Zhu, J.; and Zhang, L. 2022. DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR. In International Conference on Learning Representations.
Lutati, S.; Nachmani, E.; and Wolf, L. 2023. Separate And Diffuse: Using a Pretrained Diffusion Model for Improving Source Separation. arXiv preprint arXiv:2301.10752.
Mesaros, A.; Heittola, T.; Virtanen, T.; and Plumbley, M. D. 2021. Sound event detection: A tutorial. IEEE Signal Processing Magazine.
Miyazaki, K.; Komatsu, T.; Hayashi, T.; Watanabe, S.; Toda, T.; and Takeda, K. 2020a. Convolution-augmented transformer for semi-supervised sound event detection.
In Detection Classification Acoustic Scenes Events (DCASE) Workshop.
Miyazaki, K.; Komatsu, T.; Hayashi, T.; Watanabe, S.; Toda, T.; and Takeda, K. 2020b. Weakly-supervised sound event detection with self-attention. In IEEE ICASSP.
Nag, S.; Zhu, X.; Song, Y.-Z.; and Xiang, T. 2022. Proposal-free temporal action detection via global segmentation mask learning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III, 645–662. Springer.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10684–10695.
Salamon, J.; Jacoby, C.; and Bello, J. P. 2014. A dataset and taxonomy for urban sound research. In ACM International Conference on Multimedia.
Shi, D.; Zhong, Y.; Cao, Q.; Zhang, J.; Ma, L.; Li, J.; and Tao, D. 2022. ReAct: Temporal Action Detection with Relational Queries. In European conference on computer vision.
Song, J.; Meng, C.; and Ermon, S. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502.
Song, J.; Meng, C.; and Ermon, S. 2021. Denoising Diffusion Implicit Models. In International Conference on Learning Representations.
Tan, J.; Tang, J.; Wang, L.; and Wu, G. 2021. Relaxed transformer decoders for direct action proposal generation. In IEEE/CVF International Conference on Computer Vision (ICCV).
Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 9627–9636.
Turpault, N.; Serizel, R.; Salamon, J.; and Shah, A. P. 2019. Sound event detection in domestic environments with weakly labeled data and soundscape synthesis. Detection Classification Acoustic Scenes Events (DCASE) Workshop.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Wakayama, K.; and Saito, S. 2022. CNN-Transformer with Self-Attention Network for Sound Event Detection. In IEEE ICASSP.
Wang, X.; Kong, T.; Shen, C.; Jiang, Y.; and Li, L. 2020. Solo: Segmenting objects by locations. In ECCV.
Xie, Z.; Xu, X.; Wu, M.; and Yu, K. 2023. Enhance Temporal Relations in Audio Captioning with Sound Event Detection. arXiv preprint arXiv:2306.01533.
Xu, X.; Dinkel, H.; Wu, M.; and Yu, K. 2021. Text-to-audio grounding: Building correspondence between captions and sound events. In IEEE ICASSP.
Ye, Z.; Wang, X.; Liu, H.; Qian, Y.; Tao, R.; Yan, L.; and Ouchi, K. 2021. Sound Event Detection Transformer: An Event-based End-to-End Model for Sound Event Detection. arXiv preprint arXiv:2110.02011.
Zhang, H.; Li, F.; Liu, S.; Zhang, L.; Su, H.; Zhu, J.; Ni, L.; and Shum, H. 2022. Dino: Detr with improved denoising anchor boxes for end-to-end object detection. In International Conference on Learning Representations (ICLR).
Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable DETR: Deformable Transformers for End-to-End Object Detection. arXiv preprint.
Parallel Empirical Evaluations: Resilience despite Concurrency Johannes K. Fichte1, Tobias Geibinger2, Markus Hecher3, Matthias Schlögel2 1 AIICS, IDA, Linköping University, Sweden 2 KBS Group, Institute for Logic and Computation, TU Wien, Austria 3 Massachusetts Institute of Technology, USA [email protected], {tobias.geibinger,matthias.schloegel}@tuwien.ac.at, [email protected]

Abstract

Computational evaluations are crucial in modern problem-solving when we surpass theoretical algorithms or bounds. These experiments frequently take much effort, and the sheer amount of needed resources makes it impossible to execute them on a single personal computer or laptop. Cluster schedulers allow for automating these tasks and scaling to many computers. But, when we evaluate implementations of combinatorial algorithms, we depend on stable runtime results. Common approaches either limit parallelism or suffer from unstable runtime measurements due to interference among jobs on modern hardware. The former is inefficient and not sustainable. The latter results in unreplicable experiments. In this work, we address this issue and offer an acceptable balance between efficiency, software and hardware complexity, reliability, and replicability. We investigate effects on stability and replicability and illustrate how to efficiently use widely employed cluster resources for parallel evaluations. Furthermore, we present solutions that mitigate issues emerging from the concurrent execution of benchmark jobs. Our experimental evaluation shows that, despite parallel execution, our approach reduces the runtime instability on the majority of instances to one second.

Introduction

"Can we please do science again" was a highly provocative catchphrase by Karem Sakallah in a roadmap talk on his perspective for the next stage of research in solving the Boolean satisfiability (SAT) problem (Sakallah 2023). He argued that we have a limited understanding of certain aspects of modern solving techniques, that our understanding could be driven purely by empirical competitions chasing slight improvements over another technique or implementation, and that models and techniques are too complex to grasp. Still, not just in the SAT community but also in various combinatorial-solving communities (Bartocci et al. 2019), empirical evaluations are at the heart of, and certainly a tedious part of, scientific work (McGeoch 2012). Competitions (Bartocci et al. 2019), obtaining conclusions about practical algorithms (Elffers et al. 2018), verifying proof traces (Heule and Kullmann 2017), or estimating the benefits of some new algorithm or technical improvement (Müller-Hannemann and Schirra 2010) all require in-depth, broad, and reliable empirical experiments. The sheer amount of required resources often makes it impossible to execute the experiments on a single computer. Instead, we use clusters of computers for experiments, which cannot be used exclusively by single persons or groups due to cost-efficiency demands. In engineering, complexity, in the sense of complicatedness, is often seen as an enemy of reliability (Geer et al. 2003). On computer clusters, the complicatedness of the underlying architecture results in drawbacks and pitfalls during operations for combinatorial solving.
Parallel executions, diverging runtimes of instances and solvers, shared resources (network or local), and scheduling efficiency are drawbacks that might result in unreliable execution and hence irreproducible empirical results. In engineering (Henderson and Patel 2002; Force 1993), standardization is used to control complicatedness and reduce the potential side effects of uncontrolled parts. Unfortunately, to our knowledge, no standardization on configuring hardware and software is available in the combinatorial-solving community. Mistakes are common and far from easy to spot. Reliable empirical experiments on individual systems and performance improvements are well investigated (Georgiou et al. 2014; Beyer, Löwe, and Wendler 2019; Vercellino et al. 2023), and tools focusing on precise measurements, repeatability, optimal throughput, and efficiency of participating systems are available for installation (Beyer, Löwe, and Wendler 2019). However, a primary disadvantage of most dedicated tools is exclusive and very permissive system access, which is often hard to establish or requires dedicated resources for a limited number of people, resulting in extremely unsustainable and inefficient usage of computer hardware. Still, cost-efficiency demands from a management or resource-availability perspective, as well as sustainability, require that resources are shared among many applications. High-performance computing (HPC) data centers have resources in the form of computation clusters widely available, and these are often very well maintained (Green500 Authors 2022; Strevell et al. 2019). Fortunately, when applying the right restrictions, widely developed technology from HPC environments can be highly useful for computational experiments in various combinatorial communities. Furthermore, these systems allow us to "maximize" throughput and the use of shared resources to increase cost efficiency and sustainability.

Contributions Our contributions are as follows:

1. We investigate foundations for stable, parallel, repeatable empirical evaluations of computational experiments. In contrast to previous works, we focus on the memory design in modern computer architecture, which is a significant factor for variation and irreproducibility.

2. We provide a novel technique for stable and repeatable experiments. Our main ingredient, cache partitioning, enables us to eliminate issues that arise from modern memory architectures. To our knowledge, cache partitioning has not yet been suggested for repeatable experiments.

3. Our overall approach fits well into standard high-performance computing (HPC) environments, which encourages the use of modern cluster environments, previously seen as highly problematic for replicability.

Related Works. McGeoch (2012) provides extended insights into setting up experiments. Müller-Hannemann and Schirra (2010) discuss basic algorithm engineering. Researchers developed techniques to optimize scheduling within computer clusters (Bridi 2018; Galleguillos et al. 2019). The influence of hardware efficiency and measurements is well established (Georgiou et al. 2014; Beyer, Löwe, and Wendler 2019; Vercellino et al. 2023). Unfortunately, effective, specialized resource limitation and benchmarking tools are often impracticable. Highly effective tools such as BenchExec (Beyer, Löwe, and Wendler 2019) must be installed deep into the system and require very permissive access, maintenance of different user groups, and additional fine-tuning.
These tools often require running an entire benchmarking toolchain, which makes testing and debugging hard. Complications that modern hardware causes for combinatorial solvers have been studied (Koopmann, Hähnel, and Turhan 2017; Fichte et al. 2021). For algorithm configuration, pitfalls in empirical work have previously been identified (Bocchese et al. 2018) and best practices to avoid them suggested (Eggensperger, Lindauer, and Hutter 2019). Recent initiatives on reproducible research focus on transparent research artifacts with Guix, a system that enables the building of computation environments (Vallet, Michonneau, and Tournier 2022). Software heritage projects aim at preserving source code and binaries from research software (Cosmo and Zacchiroli 2017; Audemard, Paulevé, and Simon 2020). Experiments on SAT solver development over time and hardware influence exist in the literature (Biere et al. 2023; Fichte, Hecher, and Szeider 2023).

Basics of Empirical Evaluations

Empirical evaluations are crucial to modern combinatorial problem-solving when we reach beyond theoretical algorithms or bounds. In practice, we often start from an algorithm, its implementation, and hypotheses about the behavior of the implementation on certain inputs, called instances, and variations of parameters of the implementation, called configurations. Then, we decide on a design for the experiment (DOE), which contains information about the implementations, configurations, instances, and appropriate measures to evaluate our hypotheses.

Requirements. When we execute a designed experiment, we need to ensure basic principles to obtain scientific value. Two fundamental principles are repeatability and replicability. The goal of repeatability is to reliably obtain the same result on the same computer. When repeating a computation of a solver in the same configuration and with the same instances multiple times, we aim for the identical outcome, assuming that the algorithm is deterministic. Replicability or recomputability encompasses the principle that we can confidently obtain the same results given the original artifacts and comparable hard- and software. To ensure these fundamental principles, we are interested in deterministic hard- and software platforms for our evaluation, which ensures repeatability and allows us to estimate random errors or study non-deterministic algorithms.

Structuring Work. Since tasks frequently take up much work and the sheer amount of required resources often makes it impossible to execute the experiments on a single computer, we need the experiment to scale to multiple computers. Therefore, we describe the instances, solvers, and conditions under which the experiment is expected to run successfully. We call the execution of this description a job, which is, more generally, an allocation of resources for a specific amount of time to execute a dedicated task. A job may consist of one or more steps, each consisting of one or more processes using one or more CPUs. In the following, a process refers to exactly one sequential process. This leads us to automated job execution, for which we can formulate natural requirements. Since we are interested in fast scheduling and high throughput of our experiments, the system needs to be resource efficient. We need stability and resilience of the job execution system, as we are interested in repeatability and recomputability.

Automating Job Execution. Many different job execution systems exist (Wasik et al.
2016; Stump, Sutcliffe, and Tinelli 2014; Beyer, Löwe, and Wendler 2019; Ceri et al. 2003; Chappell 2004; Ibsen and Anstey 2018; Luksa 2017). In academia, the largest available computation power is in high-performance computing (HPC) (Eijkhout 2022), which aims for fast, energy-efficient, highly parallel, scalable, and isolated execution of computation tasks (Sterling 2002; Jette 2012; Green500 Authors 2022). HPC uses a set of loosely coupled computers acting as one system, called a cluster, to solve problems that are computationally hard or highly data intensive. A single computer of a cluster is usually referred to as a node. Since the size of a cluster can reach thousands of machines, tools for maintainability, scalable cluster management, and job scheduling are necessary. Today's most popular software is the Simple Linux Utility for Resource Management (SLURM) (Yoo, Jette, and Grondona 2003; Auble et al. 2023), which contains a scheduling component where jobs describe all details of the execution. For combinatorial experiments, we require certainty and replicability (Beyer, Löwe, and Wendler 2019), which is quite orthogonal to HPC's goals of fast and energy-efficient computation and to the complex storage and memory architectures in clusters. Thus, we need to be extremely careful when setting up and running an experiment.

Common Pitfalls. When we are interested in reliable empirical evaluations of combinatorial experiments, we can easily run into numerous pitfalls. We list standard issues that frequently show up in combinatorial evaluations and provide references to the literature for more details in Table 1. We separate these issues into three types depending on the type or phase in which they show up: the system, the design of experiment, and the execution.

    Type         Issue           Example                                   Reference                             Solution
    System       Kernel          Unexpected performance behavior           (Kocher et al. 2019)                  Monitor
                 CPU throttling  System heats up and reduces performance   (Fichte et al. 2021)                  Governors
                 CPU load        Use of desktop operating systems          (Li, Ding, and Shen 2007)             Avoid
                 Overhead        Use of virtual machines                   (Joy 2015)                            Avoid
    Design of    Documentation   Options not documented                    (McGeoch 2012)                        Care
    Experiment   Measurements    core/wall time; virtual/actual memory     (Fichte et al. 2021)                  Decision
                 Slow runs       Proof logging onto slow storage           (Beyer, Löwe, and Wendler 2019)       shm
                 Parameters      Incomparable parameters                   (McGeoch 2012; Bocchese et al. 2018)  Care
    Execution    Isolation       Resource enforcement fails                (Beyer, Löwe, and Wendler 2019)       cgroups
                 Memory          Memory paging                             (Beyer, Löwe, and Wendler 2019)       cgroups
                 Slow I/O        Read/write large amounts of data          (IBM Team 2021)                       shm
                 Cacheline       Cores share L2/L3 cache                   This paper                            resctrl

Table 1: Listing of pitfalls when running experimental evaluations.

Measuring. We use the Linux performance events subsystem (perf) to measure runtime, memory, and extended system events (Zijlstra, Molnar, and de Melo 2009; Weaver 2013). perf is part of the Linux kernel and allows monitoring both hardware and software at a fairly low overhead. For example, perf stat -e cycles -I 1000 cat /dev/urandom > /dev/null measures the number of cycles, which reflects the CPU frequency at the time of measurement and can be used to quickly spot performance degeneration originating in a varying CPU frequency. The CPU frequency is usually adjusted by Linux performance governors (Brodowski et al. 2016).
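The interval output of perf can also be captured programmatically; the following hedged sketch runs the command from above for ten seconds and prints the per-second cycle counts (perf stat reports its statistics on stderr, and running it may require suitable perf_event permissions):

    import subprocess

    # run "perf stat" in interval mode for 10 s over a busy workload;
    # roughly constant cycle counts indicate a stable CPU frequency
    cmd = ["perf", "stat", "-e", "cycles", "-I", "1000",
           "--", "timeout", "10", "cat", "/dev/urandom"]
    res = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                         stderr=subprocess.PIPE, text=True)
    for line in res.stderr.splitlines():
        if "cycles" in line:
            print(line.strip())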
Specialized tools for stressing the system, testing initial system performance, and detecting silent performance degeneration of hardware are sysbench (Zaitsev, Kopytov et al. 2020), stress-ng (Haleem et al. 2023), and GeekBench (Primate Labs Inc. 2023). Discrepancies in hardware parameters can be spotted using tools such as hardinfo (Pereira et al. 2023), dmidecode (Cox et al. 2023), and lshw (Vincent et al. 2023). Resource limits can be enforced in multiple ways: BenchExec (Beyer, Löwe, and Wendler 2019), runsolver (Roussel 2011), and cgroups. We suggest using a combination of runsolver and cgroups, as BenchExec can oftentimes not be employed in HPC environments. Using the above-mentioned tools, we can tune the system to the best possible performance. If we run a purely sequential execution where each process has exclusive access to the entire hardware and avoid solving multiple instances in parallel, many issues can be circumvented (Beyer, Löwe, and Wendler 2019).

Resource Enforcement. Many issues that we presented above can already be circumvented by a precise setup of SLURM in combination with dedicated job generators and resource monitoring tooling (Auble et al. 2023). In SLURM, we can isolate each individual execution of a solver, configuration, and instance using the cgroupsv1 plugin (Jackson and Lameter 2006; Auble et al. 2022). The cgroups restriction makes sure that the solver sees only the assigned amount of resources (cores and memory), and cores are pinned automatically when the cluster scheduler spawns a task on a node. We can strictly restrict cores by setting the ConstrainCores option; then, no oversubscription is possible. When we enforce memory limits using the ConstrainRAMSpace option in the cgroups plugin, the kernel triggers an out-of-memory event if the memory limit is reached and terminates processes.
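To illustrate what such enforcement amounts to at the kernel level, the following sketch creates a cgroup (v1) with a 10GB memory limit and attaches a solver process to it; the cgroup name and solver call are hypothetical, privileges and a mounted v1 memory controller are assumed, and SLURM's cgroups plugin performs the equivalent steps automatically at task spawn time.

    import os
    import subprocess

    CG = "/sys/fs/cgroup/memory/bench_job42"       # hypothetical cgroup name
    os.makedirs(CG, exist_ok=True)                 # creating the dir creates the cgroup
    with open(os.path.join(CG, "memory.limit_in_bytes"), "w") as f:
        f.write(str(10 * 1024**3))                 # 10 GB memout, as in our setup
    proc = subprocess.Popen(["./solver", "instance.cnf"])   # hypothetical solver call
    with open(os.path.join(CG, "cgroup.procs"), "w") as f:
        f.write(str(proc.pid))                     # kernel now enforces the limit
        # (real tooling attaches the pid at fork time, before allocation starts)
    proc.wait()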
Concurrency and Resilience

Literature on empirical evaluation oftentimes suggests running experiments sequentially, where each process has exclusive access to an entire computer, to obtain stable and repeatable conditions (Beyer, Löwe, and Wendler 2019; Eggensperger, Lindauer, and Hutter 2019). However, this approach has two major shortcomings: (i) even in a sequential setting, runtime variations can be significant; and (ii) a cost, time, and resource-sustainability perspective still calls for solving multiple instances in parallel, since modern computers provide many cores. In this section, we tackle these shortcomings: we stabilize hardware for replicable sequential runs and enable concurrent execution of processes through resilient configurations for cluster schedulers. We take advantage of the maximum available resources, resulting in the execution of multiple processes. Before addressing the shortcomings, we need to identify their origin. The major contributing effect is modern hardware architecture, which is fairly complicated. While modern processors consist of many physical cores, access to certain parts of the memory might be fast, slow, or vary in terms of speed. Combinatorial solvers and, in particular, SAT solvers have extensive memory requirements and demand fast memory access (Zhang and Malik 2004; Fichte et al. 2023b). Over 90% of the runtime of a modern SAT solver is attributed to a process called unit propagation, which depends on fast memory access. Less known but well-studied is the effect of memory cachelines and of the mapping between virtual and physical memory on memory-bound combinatorial solvers, for the same reasons (Hölldobler, Manthey, and Saptawijaya 2010; Fichte et al. 2020). Now, a process may have access to a small or large portion of the memory depending on the current resource allocation. If multiple sequential solvers are executed in parallel, runtime can degenerate quickly.

Processors and Memory Access. To design stable and repeatable experiments, we need to revisit basics of hardware architecture and understand the structure of modern CPUs and their interconnection to memory. Modern computer architecture employs a hierarchy between different types of memory (Hennessy and Patterson 2011). While this is folklore for persistent (HDD, SSD) and volatile memory (DRAM), it is less known that modern processors have only indirect access to dynamic random-access memory (DRAM). The reason can be found in the balance between access time and cost. For DRAM, we see access times of around 10^−8 s. But the effective base frequency of a modern processor is around 2GHz, i.e., one cycle per 5·10^−10 s. In consequence, DRAM is not fast enough to provide data to a processor core. To avoid wasting cycles, faster but more expensive memory is employed: registers. A register provides the fastest way to access data but is usually very small; for example, floating-point registers of modern processors contain 512 bits. Behind the registers, we find caches, which are slower but larger and still faster than DRAM. Usually, there are multiple cache levels, L1, L2, and sometimes L3 or L4, which occupy notable parts of the actual microchip. In practice, the amount of data that can be reused from a cache (a hit) plays an important role in performance. Decreasing the access time to a cache also boosts performance. However, before a core can access data from DRAM, the data needs to be fetched into the cacheline. Here, latency matters, since a core that runs out of required data needs to wait for the data (stalling). In addition, cores usually have fast access only to a certain part of the DRAM, meaning that DRAM on the local socket is accessible faster than DRAM that is wired to another socket. This memory design is called non-uniform memory access (NUMA) and is present in multicore architectures (Majo and Gross 2011).

Example 1. Fig. 1 provides a hierarchical map of the elements involved in the memory hierarchy of a system with an Intel E5-2650 v4. The system contains two physical sockets and 12 physical cores on each socket. Hyperthreading is disabled. The L1 cache can store 32KB of data, the L2 256KB, and the L3 30MB. In total, the 12 cores per socket share one L3 cache, and from the specification we see that each socket has 4 memory channels (Intel 2016).

Figure 1: Illustration of a memory cacheline obtained by lstopo. We see a hierarchical map of the key computing elements, e.g., NUMA memory nodes, shared caches, processor sockets, and processor cores (threads).

Figure 2: Cache partitioning of the L3-cache from Figure 1.

Memory-aware Concurrent Job Scheduling

Many modern combinatorial solvers are extremely memory demanding. Additionally, modern computer architecture consists of multiple processor cores, fairly slow main memory, and elaborate memory cachelines to compensate for slow memory access. Hence, when aiming for repeatable experiments, we need to eliminate unpredictable memory access patterns. Furthermore, when solving multiple instances in parallel, we must not accidentally block caches and memory, given that modern cores are notably faster than access to the memory. Therefore, we construct configurations for resilient shared caches and ensure that non-uniform memory access (NUMA) is reduced to a minimum.
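The sharing structure that lstopo visualizes (Figure 1) can also be queried directly from sysfs; the following sketch prints, for each core, the set of cores it shares its L3 cache with, using the standard kernel cache-topology interface.

    from pathlib import Path

    # for every CPU, inspect its cache indices and report the L3 sharing set
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        for idx in sorted(cpu.glob("cache/index*")):
            if (idx / "level").read_text().strip() == "3":
                shared = (idx / "shared_cpu_list").read_text().strip()
                print(f"{cpu.name}: L3 shared with cores {shared}")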
L3 Cache Partitioning. In most modern CPUs, the L3 cache is shared among the cores on the same socket. For example, assume that two processes, solver A and solver B, each with an input instance, are executed on separate cores but share a cache, e.g., the L3 cache. If solver A runs in a highly memory-intensive phase, solver A might cause all shared cache lines that are also allocated by solver B to be evicted. Then, the performance of solver B will very likely be reduced. Even worse, if solver B is also highly memory-intensive, both solvers frequently evict each other's required cache lines. In consequence, when running multiple independent jobs on a single socket, we observe severe interference among the cache allocations of those jobs. We tackle this issue by cache partitioning, a technique that makes the cache behavior more predictable. Therefore, we partition a shared cache into smaller parts to which so-called dedicated resource groups or dedicated cores have access. Figure 2 illustrates an equal partition of the L3-cache from Example 1. We assign a process to a resource group by cache access patterns. For replicable empirical evaluations, we define non-overlapping parts for all concurrently running processes and ensure exclusive access to the cache. Thereby, we enforce more predictable memory access, which in turn makes the runtime more repeatable. Note that on certain hardware, the shared caches may have shared buffers or queues that are not part of the partitioning. In addition, smaller caches result in increased cache misses, but in more predictable runtime among varying runs.

Enforcing Non-overlapping Caches. Recent Linux kernel implementations allow manually partitioning shared caches via a hardware control component called Resource Control (resctrl) (Yu, Luck, and Shivappa 2023; Intel 2023). L3 cache partitioning allows us to manually assign a particular part of the cache memory to a specific process. Unfortunately, we cannot simply split the cache into equal parts or directly assign equal parts to particular cores. Instead, we construct a bitmask consisting of ℓ bits that specifies how to divide the cache. The number of available bits depends on the actual hardware, namely, the physical limitation to access the cache, so-called cache ways (Intel 2019). When the mask is set, a set of cores has access to a defined part. To that end, let n, ℓ > 0 be positive integers representing the number n of cores on the socket and the number ℓ of L3 cache ways, respectively. The idea is that we partition the L3 cache into chunks corresponding to the greatest common divisor of n and ℓ. Therefore, let gcd(n, ℓ) refer to the greatest common divisor, which is the largest positive integer dividing both n and ℓ. This results in k := gcd(n, ℓ) partitions of the cache, and each of those partitions is assigned n/gcd(n, ℓ) cores. In more technical detail, we set the bitmask as follows. For each integer 1 ≤ j ≤ n representing the core number, we define the bitmask b_1 … b_ℓ where, for i ≤ ℓ,

    b_i := 1 if (i−1)·n/gcd(n, ℓ) ≤ j < i·n/gcd(n, ℓ), and b_i := 0 otherwise.

The full cache bitmask is the disjunction of the bitmasks of its assigned CPU cores. Whenever a core from a partition is used by a process (solver), the corresponding cache partition is made available. By ensuring that processes get assigned to non-overlapping partitions of cores, we also ensure that the L3 cache partitions do not overlap.
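The following sketch computes such per-core way masks, assuming the intended semantics illustrated by Example 2 below: k = gcd(n, ℓ) groups, each owning ℓ/k consecutive cache ways for its n/k consecutive cores.

    from math import gcd

    def l3_bitmasks(n_cores: int, n_ways: int) -> dict:
        """Split n_ways cache ways into k = gcd(n_cores, n_ways) disjoint
        groups; the n_cores/k cores of group p get exclusive access to its
        n_ways/k consecutive ways (bits are contiguous, as resctrl requires)."""
        k = gcd(n_cores, n_ways)
        ways_per, cores_per = n_ways // k, n_cores // k
        masks = {}
        for j in range(n_cores):          # core id j
            p = j // cores_per            # partition index of core j
            mask = 0
            for i in range(p * ways_per, (p + 1) * ways_per):
                mask |= 1 << i            # set the bit of each way in partition p
            masks[j] = mask
        return masks

    # Example 2 setting: 12 cores, 20 ways -> 4 partitions, 5 ways / 3 cores each
    print({j: f"{m:05x}" for j, m in l3_bitmasks(12, 20).items()})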
Example 2. Consider the memory layout given in Example 1 and illustrated in Figure 1. When splitting the L3-cache according to our formula above, we obtain 4 partitions containing 5 cache ways and 3 cores each.

Core Binding and Memory Channels. Since our cache partitioning relies on assigning solvers to particular CPU cores, we bind running processes to a specific set of cores within a cluster node (Auble et al. 2023). Furthermore, the available memory channels can have a notable impact on the runtime of solvers on the exact same instances. Hence, we limit the number of running solvers by the number of available memory channels to avoid over-committing.

Scheduling in Practice

In the previous section, we introduced concepts and methods to reduce uncertainty in memory access. To obtain actually replicable experimental results, we need to put these insights into practice. Therefore, we establish the technical part next.

Scheduling Jobs. We employ the HPC software SLURM to describe and execute experiments. We compile a detailed description of the job that is supposed to run on the cluster, including a configuration to enforce resource requirements and wrappers to monitor occupied system resources. The cluster scheduler, when properly configured, directly enables us to isolate runs and memory and to avoid oversubscription of resources. However, cache-aware scheduling is not available and needs additional effort. We employ the runsolver tool (Roussel 2011) to sample memory consumption and patiently terminate a process hierarchy that ran out of memory, allowing the solver to write logs and statistics. In addition, we measure the performance of the task during the execution with perf. We provide our cluster configuration, including additional comments, in the supplement (Fichte et al. 2023a). We use ansible to deploy configurations onto nodes (Hochstein and Moser 2017).

Cache Partitioning. We implement cache partitioning via the resctrl kernel feature. To set up the configuration for each SLURM job, we utilize a custom prolog script, which runs prior to the job execution. The script creates a new resctrl resource group, sets the bitmask according to the formula stated in the previous subsection, and inserts the identifier of the process into that group. Child processes inherit the same restrictions. For more technical details on resctrl we refer to the documentation (Intel 2023).

Memory Channels as Resource. Since memory channels can have a notable practical impact on the runtime of solvers on the exact same instances, we introduce additional features for the cluster scheduler to enable exclusive access also for memory channels. We establish this by a fairly unconventional approach. SLURM (gres.conf) allows us to specify and configure arbitrary Generic RESources (GRES). On each compute node, we employ GRES and create mock resources, which we call memch-resources. These are simply empty files memch0, memch1, … in the directory /opt/gres/. GRES recognizes these files as resources. Then, a job can request the virtual resource, and the SLURM scheduler ensures that no other job accesses our memch-resources while they are in use.
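Such gres.conf entries need not be written by hand; a small generator along the following lines emits one line per core group (the node names and the group size are assumptions matching Example 3 and the format of Figure 3 below):

    # emit gres.conf lines for the memch mock-resource, one entry per
    # memory-channel group, aligned with the L3 partitioning above
    N_CORES, CORES_PER_GROUP = 24, 3    # assumed layout: 2 sockets x 12 cores
    for g in range(N_CORES // CORES_PER_GROUP):
        cores = ",".join(str(c) for c in range(g * CORES_PER_GROUP,
                                               (g + 1) * CORES_PER_GROUP))
        print(f"NodeName=node[1-11] Name=memch "
              f"File=/opt/gres/memch{g} Cores={cores}")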
In the configuration, we link the resource to cores according to the hardware system specification. If a user requests a number of cores that matches a multiple of the number of cores in a memch-resource, SLURM assigns all cores according to the memch-resource. Thereby, we ensure that a job is given cores consecutively according to the memch-resource definition, which we produce according to the memory layout of our hardware. We can employ a similar approach for caches if we want to take L2 or L3 caches into consideration. For simplicity, a combined resource capturing stable behavior for both cacheline and memory channels can also be defined. In that way, users can request both cachelines and memory channels as a resource.

Example 3. For our system described in the previous section and Figure 1, we have consecutively numbered cores with L1 and L2 caches each, one shared L3 cache on each socket, and 4 memory channels, where 4 memory modules are present for each socket. So technically, we could define a memch-resource at each multiple of 24/4/2 = 3 cores. However, we align the resource with our L3-cache partitioning from Example 2. Hence, we define the memch-resource memch0 on cores 0–2 and so on (see Figure 3 for details).

    NodeName=node[1-11] Name=memch File=/opt/gres/memch0 Cores=0,1,2
    ...

Figure 3: Scheduler cacheline-aware configuration.

Task/affinity. To ensure consistent and stable memory access, we employ the task/affinity plugin in SLURM, which allows us to bind a task to a specific set of cores within a node (Auble et al. 2023). Using extended parameters, we can schedule tasks according to the present NUMA regions (--cpu-bind=rank_ldom) or specify specific orders in which we aim to use cores. Regardless of specific requests in jobs, the plugin ensures that a default memory placement policy is enforced automatically according to the requested cores.

Storage Access. In order to avoid issues that originate in slow I/O, and to avoid file system caches being employed implicitly, which might slow down or speed up a subsequent execution on the same instance, we provide each task with an individual copy of the input via the shared memory file system (/dev/shm). We copy input files into the temporary folder and provide the actual input from the shared memory file system.

Copperbench. Constructing the jobs and configurations that implement the techniques established above can easily consume precious time for each experiment. In practice, we aim at reducing user overhead as much as possible. Therefore, we designed a tool called copperbench (github.com/tlyphed/copperbench) that generates jobs from compact descriptions of experiments. Our tool creates a script that wraps the experimental task to resolve the aforementioned issues. After the job finishes, we collect data, parse the output files, and compile a summary. In that way, experimenting can be uniformly automated.

Experiments: Concurrent Benchmarking

To investigate the effects of memory caches and of solving multiple instances in parallel, we design a small experiment. Binaries, instances, and logs can be found in the supplement (Fichte et al. 2023a). We consider SAT solvers, which are known to be highly memory demanding. For simplicity, we take the solvers glucose (Audemard and Simon 2019), CaDiCaL (Biere 2019), and Kissat (Biere et al. 2020), which show robust performance.
We take the instance set set-asp-gauss, which contains 200 publicly available SAT instances from a variety of domains with increasing practical hardness (Hoos et al. 2013). We set timeouts to 900s and memouts to 10GB, as we are primarily interested in repeatability on individual instances. In contrast, SAT competitions restrict the runtime to 5,000s and the memory to 128GB, but require certificates.

              relative time diff [%]
              CaDiCaL           Kissat            glucose
    #cores    off      on       off      on       off      on
    1         -0.0     -0.0     -0.0     -0.0     -0.0     -0.0
    2          0.1      7.0     -0.0      7.5      0.1      4.8
    4         -1.6      4.6     -1.7      5.1     -0.7      3.4
    6         -3.5      –       -3.6      –       -1.6      –
    8         -5.3     -0.4     -5.4     -0.3     -2.6     -0.4
    24        -18.9     –       -20.1     –       -11.0     –

Table 2: Relative wall clock time difference to the baseline for each solver in relation to 1 occupied core. Lower is better. on/off refers to cache partitioning. We mark runs unavailable due to bitmask limitations by "–".

    Solver     t_off [h]   t_on [h]   ∆ [%]
    CaDiCaL    8.38        9.01       7.0
    Kissat     6.55        6.87       4.7
    glucose    7.12        7.69       7.4

Table 3: Comparing the total runtime when a solver has exclusive access to the entire node (1 core occupied). t[h] states the total wall clock time, and on/off refers to L3-cache partitioning with the size of the partition limited to a quarter of the total cache; ∆[%] gives the relative change in percent.

Environment. We run on a cluster consisting of 11 nodes. Each node is equipped with two Intel Xeon E5-2650 v4 processors consisting of 12 physical cores running at a base frequency of 2.2GHz, and 256GB shared RAM in total. Hyperthreading is disabled. The operating system is Ubuntu 22.04.2 LTS running a 5.19.0-41-generic Linux kernel.

Design of Experiment. Next, we test the effect of the memory cacheline. To this end, we run multiple settings between purely sequential runs with no interference and multiple instances solved in parallel. We repeat each instance 5 times per solver. The considered solvers are sequential, and an execution means executing a solver on the command line, i.e., a combination of solver, configuration, instance, and repetition. We compare varying numbers of occupied cores together with activated and deactivated cache partitioning. Both setups implement all other techniques illustrated in the previous sections.

Expectations. Our expectations are as follows:
E1 The runtime difference between several repetitions of each instance is low on average, but can be high on certain instances regardless of parallel runs.
E2 Parallel execution degenerates the total runtime and the number of solved instances, and results in higher deviation between repetitions.
E3 If memory requirements are chosen according to the available memory cacheline and channels, the variation in total runtime and number of solved instances is minimal.

Figure 4: The standard deviation of solving time among the 5 repetitions for the considered settings without (left) and with L3-cache partitioning (right). Except on the very right, the x-axis refers to the number of active cores, meaning that on each node in total x instances are run in parallel while each solver instance occupies one core. The rightmost setting "S" shows the results for running a single solver exclusively on one node but with the same cache restriction as running 8 in parallel.

Observation. In Table 2, we find a summary of the results for the different solvers and settings comparing the relative runtime difference.
Figure 4 shows the standard deviation of the solving times among the 5 runs in the different settings. On the left side, we illustrate the situation without any modifications to the L3-cache. Whereas, on the right side, we visualize the results when restricting the L3-cache according to our formula in Section and the setting "S" which has the same cache restriction as the 8 solver setting and shows that differences in the current load have no impact. Table 3 illustrates the performance loss when activating cache partitioning and assigning the same amount of cache as in the setting of 8 parallel solvers in wall clock time for the baseline, that is, occupying only one core and having exclusive access to an entire node (the best possible performance for an individual solver run). Restricting the cache significantly affects performance. However, as we have already stated above this setting produces the same performance as running 8 solvers in parallel. Hence, the performance is marginally worse but stable. We observe that without cache partitioning the overall performance of solvers degrades with the number of simultaneously committed cores (Table 2). Moreover, the variance in runtime for the exact same solver and input instance 4 increase significantly. The runtime increases up to 5% when running 8 solvers and 18% when running 24 solvers in parallel on one node and the standard deviation for the exact same instances can be up to a factor of 5 to 7 higher (from 5s to 25 or 35s, respectively). If we enable cache partitioning, total runtime is stable even when simultaneously committing multiple cores, i.e., running multiple solvers in parallel on a node. Runtime degeneration is slightly worse than without cache partitioning (c.f., Table 2 and Table 3). Note that the employed instance set contains instances with varying runtimes. The effect increases when the total runtime of an instance increases. Hence, we expect an even more problematic behavior in experiments, where instances that were solved extremely fast, are excluded. Summary. Our experiments show that running multiple jobs in parallel can severely influence performance and thus repeatability. As shown in Table 2, running one job per core (24) can lead to drastically longer solving time and thus also less solved instances. Further, depending on the current load when a job was run, its performance can differ. By careful cache partitioning, we obtain stable, resilient, and replicable experiments. Partitioning increases the individual runtime slightly, but we obtain much more sustainable hardware usage. To this end, we run 8 processes in parallel, which is the expected technical maximum on our setting due to the 4 memory channels per socket. Furthermore, if we select a uniform cache size regardless of the actual occupied cores, see Figure 4 (right) 8 vs. S, we ensure stable results independent of the current hardware load. Conclusion We investigate how sustainable, replicable empirical experiments can be designed and established. In contrast to previous work, we suggest conditions for parallel execution. By exploring the factor system memory, we eliminate a major issue that is often neglected as solving focuses on processors only. We illustrate how widely employed cluster schedulers can be fruitfully employed for combinatorial evaluations. Finally, we emphasize that a proper setup of empirical work should not be trivialized. The effect of a problematic execution can easily destroy the scientific value of an experiment. 
Our work opens up multiple directions for future research. We believe that an interesting question is to evaluate whether task isolation, frequency scaling, and further methods that involve system memory can be employed to design abstract and reliable execution environments that provide repeatability beyond the hardware where it was executed. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8010 Acknowledgments Authors are ordered alphabetically. The work has been carried out while Fichte and Hecher visited the Simons Institute at UC Berkeley. Research is supported by ELLIIT funded by the Swedish government; the Austrian Academy of Sciences (ÖAW), DOC Fellowship; the Austrian Science Fund (FWF), grant J4656, and the Society for Research Funding in Lower Austria (GFF) grant ExzF-0004. References Auble, D.; et al. 2022. Control Group in Slurm. https:// slurm.schedmd.com/cgroups.html. Auble, D.; et al. 2023. Slurm Workload Manager. https: //slurm.schedmd.com. Audemard, G.; Paulevé, L.; and Simon, L. 2020. SAT Heritage: A Community-Driven Effort for Archiving, Building and Running More Than Thousand SAT Solvers. In SAT’20, 107–113. Springer Verlag. Audemard, G.; and Simon, L. 2019. Glucose in the SAT Race 2019. In Proceedings of SAT Race 2019 : Solver and Benchmark Descriptions, 19–20. University of Helsinki. Bartocci, E.; Beyer, D.; Black, P. E.; Fedyukovich, G.; Garavel, H.; Hartmanns, A.; Huisman, M.; Kordon, F.; Nagele, J.; Sighireanu, M.; Steffen, B.; Suda, M.; Sutcliffe, G.; Weber, T.; and Yamada, A. 2019. TOOLympics 2019: An Overview of Competitions in Formal Methods. In TACAS’19, 3–24. Springer Verlag. Beyer, D.; Löwe, S.; and Wendler, P. 2019. Reliable benchmarking: requirements and solutions. International Journal on Software Tools for Technology Transfer, 21(1): 1–29. Biere, A. 2019. CaDiCaL Simplified Satisfiability Solver. http://fmv.jku.at/cadical/. Biere, A.; Fazekas, K.; Fleury, M.; and Heisinger, M. 2020. CaDiCaL, Kissat, Paracooba, Plingeling and Treengeling Entering the SAT Competition 2020. In SAT COMPETITION 2020. Biere, A.; Fleury, M.; Froleyks, N.; and Heule, M. J. 2023. The SAT Museum. In Proceedings of the 14th International Workshop on Pragmatics of SAT (PoS’23), volume 3545. CEUR Workshop Proceedings (CEUR-WS.org). Bocchese, A. F.; Fawcett, C.; Vallati, M.; Gerevini, A. E.; and Hoos, H. H. 2018. Performance robustness of AI planners in the 2014 international planning competition. AI Communications, 31(6): 445–463. Bridi, T. 2018. Scalable optimization-based Scheduling approaches for HPC facilities. Ph.D. thesis, Universida di Bolonga. Brodowski, D.; Golde, N.; Wysocki, R. J.; and Kumar, V. 2016. CPU frequency and voltage scaling code in the Linux(TM) kernel. https://www.kernel.org/doc/ Documentation/cpu-freq/governors.txt. Ceri, S.; Fraternali, P.; Bongio, A.; Brambilla, M.; Comai, S.; and Matera, M. 2003. Designing data-intensive Web applications. Morgan Kaufmann. Chappell, D. A. 2004. Enterprise service bus: Theory in practice. O’Reilly. Cosmo, R. D.; and Zacchiroli, S. 2017. Software Heritage: Why and How to Preserve Software Source Code. In iPRES’17. Cox, A.; Delvare, J.; Delvare, J.; et al. 2023. dmidecode(8) - Linux man page. https://linux.die.net/man/8/dmidecode. Eggensperger, K.; Lindauer, M.; and Hutter, F. 2019. Pitfalls and best practices in algorithm configuration. Journal of Artificial Intelligence Research, 861–893. Eijkhout, V. 2022. The Art of HPC. Lulu Press. https: //github.com/VictorEijkhout/TheArtofHPC_pdfs/. 
Elffers, J.; Giráldez-Cru, J.; Gocht, S.; Nordström, J.; and Simon, L. 2018. Seeking Practical CDCL Insights from Theoretical SAT Benchmarks. In IJCAI’18, 1300–1308. IJCAI. Fichte, J. K.; Geibinger, T.; Hecher, M.; and Schlögel, M. 2023a. Dataset: Parallel Empirical Evaluations: Resilience Despite Concurrency. Zenodo. doi.org/10.5281/zenodo. 10400972. Fichte, J. K.; Hecher, M.; Le Berre, D.; and Szeider, S. 2023b. The Silent (R)Evolution of SAT. Communications of the ACM, 66(6): 64–72. Fichte, J. K.; Hecher, M.; McCreesh, C.; and Shahab, A. 2021. Complications for Computational Experiments from Modern Processors. In CP’21, 25:1–25:21. Dagstuhl Publishing. Fichte, J. K.; Hecher, M.; and Szeider, S. 2023. A Time Leap Challenge for SAT-Solving. CoRR. arxiv.org/abs/ 2008.02215. A preliminary version appeared in CP’20. Fichte, J. K.; Manthey, N.; Schidler, A.; and Stecklina, J. 2020. Towards Faster Reasoners by using Transparent Huge Pages. In CP’20, 304–322. Springer Verlag. Force, I. E. T. 1993. IETF Online Proceedings. https://www. ietf.org/old/2009/proceedings_directory.html. Galleguillos, C.; Kiziltan, Z.; Sîrbu, A.; and Babaoglu, O. 2019. Constraint Programming-Based Job Dispatching for Modern HPC Applications. In CP’19, 438–455. Springer Verlag. Geer, D.; Bace, R.; Gutmann, P.; Metzger, P.; Pfleeger, C. P.; Quarterman, J. S.; and Schneier, B. 2003. CyberInsecurity: The Cost of Monopoly. https://cryptome.org/ cyberinsecurity.htm. Georgiou, Y.; Cadeau, T.; Glesser, D.; Auble, D.; Jette, M.; and Hautreux, M. 2014. Energy Accounting and Control with SLURM Resource and Job Management System. In Distributed Computing and Networking, 96–118. Springer Berlin Heidelberg. Green500 Authors. 2022. The Green500 Supercomputers. https://www.top500.org/lists/green500/. Haleem, A.; et al. 2023. stress-ng (stress next generation). https://github.com/ColinIanKing/stress-ng. Henderson, J.; and Patel, S. 2002. The Role of Market-based and Committee-based Standards. Technical report, Babson College. Hennessy, J. L.; and Patterson, D. A. 2011. Computer Architecture: A Quantitative Approach. Morgan Kaufmann, 5th edition. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8011 Heule, M. J. H.; and Kullmann, O. 2017. The Science of Brute Force. Communications of the ACM, 60(8): 70—79. Hochstein, L.; and Moser, R. 2017. Ansible: Up and Running: Automating configuration management and deployment the easy way. O’Reilly Media. Hölldobler, S.; Manthey, N.; and Saptawijaya, A. 2010. Improving Resource-Unaware SAT Solvers. In LPAR’16, 519– 534. Springer Verlag. Hoos, H. H.; Kaufmann, B.; Schaub, T.; and Schneider, M. 2013. Robust Benchmark Set Selection for Boolean Constraint Solvers. In LION’13, 138–152. Springer Verlag. IBM Team. 2021. IBM Debugging disk I/O on Linux servers. https://www.ibm.com/docs/en/ioc/5.2.0?topic= resources-debugging-disk-io-linux-servers. Ibsen, C.; and Anstey, J. 2018. Camel in action. Simon and Schuster. Intel. 2016. Specification: Intel® Xeon® Processor E5-2650 v4 30M Cache, 2.20 GHz. https://www.intel.com/content/ www/us/en/products/sku/91767/intel-xeon-processore52650-v4-30m-cache-2-20-ghz/specifications.html. Intel. 2019. Use Intel® Resource Director Technology to Allocate Last Level Cache (LLC). https://www.intel.com/content/www/us/en/developer/ articles/technical/use-intel-resource-director-technologyto-allocate-last-level-cache-llc.html. Intel. 2023. User space software for Intel(R) Resource Director Technology. https://github.com/intel/intel-cmt-cat/ wiki/resctrl/. 
Jackson, P.; and Lameter, C. 2006. cgroups - Linux control groups. https://www.kernel.org/doc/Documentation/ cgroup-v1/cgroups.txt. Jette, M. 2012. Slurm Workload Manager Architecture, Configuration and Use. https://www.open-mpi.org/video/ slurm/Slurm_EMC_Dec2012.pdf. Joy, A. M. 2015. Performance comparison between Linux containers and virtual machines. In ICACEA’15, 342–346. Kocher, P.; Horn, J.; Fogh, A.; ; Genkin, D.; Gruss, D.; Haas, W.; Hamburg, M.; Lipp, M.; Mangard, S.; Prescher, T.; Schwarz, M.; and Yarom, Y. 2019. Spectre Attacks: Exploiting Speculative Execution. In S&P’19. Koopmann, P.; Hähnel, M.; and Turhan, A.-Y. 2017. EnergyEfficiency of OWL Reasoners—Frequency Matters. In JIST’17, 86–101. Springer Verlag. Li, C.; Ding, C.; and Shen, K. 2007. Quantifying the Cost of Context Switch. In ExpCS’07, 2–es. Association for Computing Machinery, New York. Luksa, M. 2017. Kubernetes in action. Simon and Schuster. Majo, Z.; and Gross, T. R. 2011. Memory System Performance in a NUMA Multicore Multiprocessor. In SYSTOR’11. Association for Computing Machinery, New York. McGeoch, C. C. 2012. A Guide to Experimental Algorithmics. Cambridge University Press. Müller-Hannemann, M.; and Schirra, S., eds. 2010. Algorithm Engineering. Springer Verlag. ISBN 978-3-64214866-8. Pereira, L. A. F.; et al. 2023. HARDINFO. https://github. com/lpereira/hardinfo. Primate Labs Inc. 2023. GeekBench. https://www. geekbench.com/download/linux/. Roussel, O. 2011. Controlling a Solver Execution with the runsolver Tool. J. on Satisfiability, Boolean Modeling and Computation, 139–144. Sakallah, K. 2023. A Roadmap for the Next Phase of SAT Research. https://simons.berkeley.edu/talks/karemsakallah-university-michigan-2023-04-18. Sterling, T. L. 2002. Beowulf cluster computing with Linux. MIT Press. Strevell, M.; Lambiaso, D.; Brendamour, A.; and Squillo, T. 2019. Designing an Energy-Efficient HPC Supercomputing Center. In ICPP Workshops’19. Association for Computing Machinery, New York. Stump, A.; Sutcliffe, G.; and Tinelli, C. 2014. StarExec: A Cross-Community Infrastructure for Logic Solving. In IJCAR’14, 367–373. Springer Verlag. Vallet, N.; Michonneau, D.; and Tournier, S. 2022. Toward practical transparent verifiable and long-term reproducible research using Guix. Scientific Data, 9(1): 597. Vercellino, C.; Scionti, A.; Varavallo, G.; Viviani, P.; Vitali, G.; and Terzo, O. 2023. A Machine Learning Approach for an HPC Use Case: the Jobs Queuing Time Prediction. Future Generation Computer Systems, 215–230. Vincent, L.; et al. 2023. lshw: HardWare LiSter for Linux. https://github.com/lyonel/lshw. Wasik, S.; Antczak, M.; Badura, J.; Laskowski, A.; and Sternal, T. 2016. Optil.Io: Cloud Based Platform For Solving Optimization Problems Using Crowdsourcing Approach. In CSCW’16, 433–436. Association for Computing Machinery, New York. Weaver, V. M. 2013. Linux perf_event features and overhead. In The 2nd international workshop on performance analysis of workload optimized systems (FastPath’13), 5. Yoo, A. B.; Jette, M. A.; and Grondona, M. 2003. SLURM: Simple Linux Utility for Resource Management. In JSSPP’03, 44–60. Springer Verlag. Yu, F.; Luck, T.; and Shivappa, V. 2023. The Linux Kernel: User Interface for Resource Control feature. https://docs. kernel.org/arch/x86/resctrl.html. Zaitsev, P.; Kopytov, A.; et al. 2020. sysbench. https: //github.com/akopytov/sysbench. Zhang, L.; and Malik, S. 2004. Cache Performance of SAT Solvers: a Case Study for Efficient Implementation of Algorithms. 
In Theory and Applications of Satisfiability Testing, 287–298. Springer Berlin Heidelberg. Zijlstra, P.; Molnar, I.; and de Melo, A. C. 2009. Performance Events Subsystem. https://github.com/torvalds/linux/tree/master/tools/perf.
2024
890
18,729
Locally Rainbow Paths

Till Fluschnik1, Leon Kellerhals2, Malte Renken2
1Institut für Informatik, TU Clausthal, Germany
2Technische Universität Berlin, Algorithmics and Computational Complexity, Germany
till.fl[email protected], [email protected], [email protected]

Abstract

We introduce the algorithmic problem of finding a locally rainbow path of length ℓ connecting two distinguished vertices s and t in a vertex-colored directed graph. Herein, a path is locally rainbow if between any two visits of equally colored vertices, the path traverses consecutively at least r differently colored vertices. This problem generalizes the well-known problem of finding a rainbow path. It finds natural applications whenever there are different types of resources that must be protected from overuse, such as crop sequence optimization or production process scheduling. We show that the problem is computationally intractable even if r = 2 or if one looks for a locally rainbow path among the shortest paths. On the positive side, if one looks for a path that takes only a short detour (i.e., it is slightly longer than the shortest path) and if r is small, the problem can be solved efficiently. Indeed, the running time of the respective algorithm is near-optimal unless the ETH fails.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: A digraph whose vertices are colored with four colors, with a shortest 2-rainbow (but not 3-rainbow) s-t path.

Introduction

Many graph connectivity problems are studied with additional constraints to make them applicable to real-world problems. Typical constraints include forbidden pairs of vertices or edges in the solution or — if the graph is colored — requiring that the solutions are rainbow (no two elements in the solution have the same color) or properly colored (no two adjacent elements have the same color). Examples of such constraints can be found for spanning trees (Broersma and Li 1997; Darmann et al. 2011) and Steiner trees (de Uña et al. 2016; Ferone, Festa, and Guerriero 2022; Halldórsson et al. 2018), but most notably for paths (Alon, Yuster, and Zwick 1995; Agrawal et al. 2020; Bhattacharya 2010; Bentert, Kellerhals, and Niedermeier 2023). For paths, the properly edge-colored variant forbids two equally colored edges to appear subsequently in the path. What, to the best of our knowledge, has not been considered yet is any model that forbids a visited color for the next, say r, subsequent vertices of the path. For example, this allows the modeling of protecting certain types of resources from overuse. This is relevant, for instance, for crop sequence optimization: here, different colors model different types of crops which, depending on the season, have different impacts on soil health (Dury et al. 2012; Turchetta et al. 2022; Benini et al. 2023). Other applications include holiday trip planning (different colors modeling different types of leisure activities) and production process scheduling (different colors modeling different workers or machines). More concretely, given a vertex-colored graph, we propose the concept of locally rainbow paths, in which every subpath of bounded length is required to carry pairwise distinct colors. Formally, a path or walk W = (v0, v1, . . . , vq) in G is r-rainbow if for every i ∈ [0, q − r], the vertices vi, vi+1, . . . , vi+r have pairwise different colors (see Fig. 1).
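This definition translates directly into a checker. The following minimal sketch is ours (the function name is an illustrative choice, not from the paper):

    def is_r_rainbow(walk, color, r):
        """Check the definition above: for every i in [0, q - r], the
        vertices v_i, ..., v_{i+r} must carry pairwise distinct colors."""
        q = len(walk) - 1
        return all(len({color[v] for v in walk[i:i + r + 1]}) == r + 1
                   for i in range(q - r + 1))

    # Color 'a' repeats at distance 2: 1-rainbow but not 2-rainbow.
    color = {1: 'a', 2: 'b', 3: 'a', 4: 'c'}
    assert is_r_rainbow([1, 2, 3, 4], color, 1)
    assert not is_r_rainbow([1, 2, 3, 4], color, 2)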
We arrive at the following problem description: LOCALLY RAINBOW PATH Input: A digraph G, a vertex-coloring c: V (G) →C, two vertices s, t ∈V (G), two integers r, ℓ∈N0. Question: Is there an r-rainbow s-t path of length at most ℓin G? We also consider LOCALLY RAINBOW WALK, where we look for s-t walks with the same constraints. LOCALLY RAINBOW PATH becomes the aforementioned problem of finding a rainbow path when r = ℓ. If r = 1, then the problem coincides with finding a properly colored s-t path. Our contributions. We study the parameterized complexity of LOCALLY RAINBOW WALK and LOCALLY RAINBOW PATH, with a focus on the locality parameter r. We show that the path variant is NP-hard for any fixed value of r ≥2 (Theorem 16). In contrast, we are able to design an algorithm with running time 2O(r log r) · nO(1) for the walk variant (Theorem 1), with n being the number of vertices. This result is achieved by developing an ordered version of the representative families technique. We prove this result to be optimal in the sense that no 2o(r log r) · nO(1)-time algorithm is possible if the ETH holds (Theorem 12). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8013 Note that an r-rainbow s-t walk of length ℓmust always be a path when ℓ≤dist(s, t) + r. Thus, our algorithm for LOCALLY RAINBOW WALK also applies to the path variant when the detour length k := ℓ−dist(s, t) is small. Motivated by this observation and the result of Bezáková et al. (2019) that finding s-t paths of detour length k is fixedparameter tractable for the parameter k, we also investigate this parameter. While both of our problem variants remain NP-hard even when k = 0 (Theorem 12), we are able to give a fixed-parameter tractable algorithm for the combined parameter k + r (Theorem 19). We mention in passing that our results also hold when coloring the edges instead of vertices (and adapting local rainbowness accordingly). Furthermore, our (nontrivial) algorithmic results also hold when looking for paths of length exactly ℓ. Proofs of results marked with ⋆are deferred to the paper’s full version. Related work. Finding a rainbow path is known to be NPhard (Chen, Li, and Shi 2011) and fixed-parameter tractable with respect to the number of colors (Kowalik and Lauri 2016; Uchizawa et al. 2013). While finding a properly colored path is trivially linear-time solvable, it is less obvious that this is also solvable in that time in an (undirected) edgecolored graph. This was shown by Szeider (2003). The field of finding paths of detour length exactly or at least k is rather active, with the former being easier to tackle than the latter. Bezáková et al. (2019) prove both variants to be fixed-parameter tractable, however, for the latter variant only on undirected graphs. While there has been some progress on directed graphs, most recently by Jacob, Włodarczyk, and Zehavi (2023), it is open whether finding a path with detour length at least one is polynomial-time solvable. Another closely related and more applied area is that of finding resource-constrained paths. Here, the graph carries arc (or vertex) weights and the desired s-t path must not accumulate more than a given threshold of that weight. The problem is known to be NP-hard (Handler and Zang 1980) and studied in many variations (Ford et al. 2022; Irnich and Desaulniers 2005; Pugliese and Guerriero 2013). 
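To make the problem box above concrete, here is a brute-force sketch; it is exponential in general and intended only for tiny instances, and all names in it are ours:

    def locally_rainbow_path(graph, color, s, t, r, ell):
        """Brute-force search for an r-rainbow s-t path of length at
        most ell (length = number of arcs)."""
        def extend_ok(path, v):
            # v must differ in color from the last r vertices of the
            # path; path[-r:] would be the whole path for r == 0.
            return r == 0 or all(color[u] != color[v] for u in path[-r:])

        def dfs(path):
            if path[-1] == t:
                return True
            if len(path) - 1 == ell:
                return False
            return any(dfs(path + [v]) for v in graph[path[-1]]
                       if v not in path and extend_ok(path, v))

        return dfs([s])

    # Path 0->1->3 is 2-rainbow; 0->2 is blocked since colors repeat.
    g = {0: [1, 2], 1: [3], 2: [3], 3: []}
    col = {0: 'x', 1: 'y', 2: 'x', 3: 'z'}
    print(locally_rainbow_path(g, col, 0, 3, r=2, ell=2))   # True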
A variation close to our setting introduces so-called replenishment arcs, at which one may “drop off” the weight accumulated so far (Smith, Boland, and Waterer 2012). This setting is relevant in airline/train crew scheduling (weight represents duty hours, replenishment arcs correspond to crew overnight rests) and aircraft/train routing (weight represents machine hours, replenishment arcs correspond to maintenance events) and also has ties with electric vehicle routing problems (weight represents battery discharge, replenishment arcs correspond to charging events) (Adler et al. 2016; Zündorf 2014). Our rainbowness constraint is similar in that it “replenishes” any colors that were visited more than r steps ago. Preliminaries We denote by Z, N0, and N the set of all, the non-negative, and the positive integers, respectively. For n, m ∈Z we denote by [n, m] := {i ∈Z | n ≤i ≤m} the set of integers between n and m and define [n] := [1, n]. We denote by e ≈2.718 Euler’s number and by ω < 2.373 the matrix multiplication constant (Alman and Williams 2021). Let σ := (a1, . . . , an) be a sequence. We denote by |σ| := n its length, i.e., the number of elements in σ, and also call σ an n-sequence. We write x ∈σ if x = ai for some i ∈ [n]. If every element in σ is contained in a set U, then we say that σ is a sequence on (or over) U. A sequence σ′ is a substring or consecutive subsequence of σ if there are i < j ∈[n] with σ′ = (ai, ai+1, . . . , aj). If i = 1 or j = n, then we also say that σ begins with or ends on σ′, respectively. If ρ = (b1, . . . , bm) is a sequence, then we denote by σ◦ρ := (a1, . . . , an, b1, . . . , bm) the concatenation of σ and ρ. For sequences σ1, . . . , σn, we denote by ⃝n i=1σi = σ1 ◦· · ·◦σn their consecutive concatenation. Graph theory. For basic notations on (directed) graph theory see, e.g., (Diestel 2016; Bang-Jensen and Gutin 2009). A digraph G is a tuple (V, A) with A ⊆V × V . In this work, all digraphs contain no self-loops, i.e., no arcs from the set {(v, v) | v ∈V }. For a digraph G = (V, A) we also denote by A(G) the arc set A and by V (G) the vertex set V . We call a digraph G symmetric if (v, w) ∈A(G) ⇐⇒ (w, v) ∈A(G). The symmetrization of the digraph G is the graph (V, A(G) ∪{(v, w) | (w, v) ∈A(G)}). For two vertices v, w ∈V (G), a v-w walk W = (u0 = v, u1, . . . , uq = w) (of length q) is a sequence of vertices from V such that (ui−1, ui) ∈A(G) for every i ∈[q]. A v-w walk is a path if all vertices are pairwise different. A digraph G is weakly connected if in its symmetrization G∗it holds true that for any (v, w) ∈V × V there is an v-w path. Throughout, unless stated otherwise, we denote by n := |V (G)| and m := |A(G)| and assume the input digraph G to be weakly connected (and hence n ≤m −1). For a vertex v, we denote by N −(v) := {w ∈V (G) | (w, v) ∈A(G)}. Color sequences and compatibility. Let G be a digraph and let c: V (G) →C be a vertex coloring. Recall that we call a path or walk W = (v0, v1, . . . , vq) in G r-rainbow if for every i ∈[0, q −r], the vertices vi, vi+1, . . . , vi+r have pairwise different color. The color sequence of W is σ := (c(v0), . . . , c(vq)). We sometimes also call σ rrainbow if W is r-rainbow. For two r-rainbow sequences σ = (a1, . . . , an) and ρ = (b1, . . . bm), we say that σ is r-compatible to ρ if a path or walk with color sequence σ ◦ρ is r-rainbow. Formally, σ is r-compatible to ρ if {amax(1,n−j+1), . . . , an} ∩{b1, . . . , bmin(r−j+1,m)} = ∅ for all j ∈[r]. Parameterized complexity. 
Let Σ be a finite alphabet and Σ* = {x ∈ Σ^n | n ∈ N_0}. A parameterized problem P is a subset {(x, k) | x ∈ Σ*, k ∈ N_0} ⊆ Σ* × N_0, where k is referred to as the parameter. A parameterized problem P is fixed-parameter tractable (in FPT) if every instance (x, k) is solvable in f(k) · |x|^{O(1)} time, where f is some computable function depending only on k. The Exponential Time Hypothesis (ETH) (Impagliazzo and Paturi 2001; Impagliazzo, Paturi, and Zane 2001) states that there exists some fixed ε > 0 such that 3-SAT cannot be decided in 2^{ε·n} · (n + m)^{O(1)} time on any input with n variables and m clauses. For more details, see Cygan et al. (2015).

Walks

In this section we study the parameterized complexity of LOCALLY RAINBOW WALK with respect to the parameter r. Note that all results obtained here also hold for finding shortest r-rainbow paths, i.e., paths of length ℓ = dist(s, t). We will see that the problem is fixed-parameter tractable, by providing an r^{O(r)} · n^{O(1)}-time algorithm. Indeed, although the length of a walk is not bounded in the input size, we can show that the above running time holds even if we ask whether there exists an r-rainbow s-t walk of any length. Finally, we prove an asymptotically tight running time lower bound based on the Exponential Time Hypothesis (ETH).

Fixed-Parameter Tractability

In this section we show the following.

Theorem 1. LOCALLY RAINBOW WALK can be solved in O((r · e)^{ωr} · ℓm) time, where m is the number of arcs in the input graph and ω is the matrix multiplication constant.

Note that this does not yet prove fixed-parameter tractability for LOCALLY RAINBOW WALK parameterized by r, as the walk may become very long, i.e., ℓ may not be bounded polynomially in the input size or by any function in r. Later in this section we will show that we can always find a solution whose length is bounded by a function in r, thus proving fixed-parameter tractability with respect to r. Our algorithm will build a family W^p_v of r-rainbow length-p s-v walks for every length p and each vertex v using dynamic programming in a Dijkstra fashion — that is, it will extend the walks along the arcs of the graph. To ensure that r-rainbowness is maintained in this process, we only need to remember the sequence σ = (c_1, . . . , c_r) at the end of the color sequence of any walk W. So we want to compute for each v ∈ V(G) and p ∈ [ℓ] the family

W^p_v := { σ : |σ| = min{p + 1, r} and G contains an r-rainbow length-p s-v walk whose color sequence ends on σ }.   (1)

Note that trivial dynamic programming on these families would blow up their size to O(|C|^r), which is too large for our purposes. Yet, σ restricts the choice of colors for the next r vertices on the path: the path may only continue with a sequence ρ of colors to which σ is r-compatible. If, however, for some sequence ρ there are multiple sequences in W^p_v that are r-compatible to ρ, then it suffices to remember only one of them. We call the remaining family an ordered representative for W^p_v and define it formally as follows.

Definition 2 (Ordered representative). Let p, r ∈ N with p ≤ r and let W be a family of sequences of length at most p. A subfamily Ŵ of W is an ordered r-representative for W (written Ŵ ⊆^r_orep W) if the following holds for every sequence ρ of length at most r: If there exists a σ ∈ W that is r-compatible to ρ, then there exists a σ̂ ∈ Ŵ that is r-compatible to ρ.
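Before turning to how representatives are computed, equation (1) can be grounded by a naive version of this dynamic program that stores the full families rather than ordered representatives. The sketch below is our own illustrative code (it assumes r ≥ 1); Algorithm 1 below replaces the unbounded families with ordered r-representatives of size at most (r·e)^r.

    def walk_families(graph, color, s, r, max_len):
        """Naive computation of the families W^p_v from equation (1):
        all length-min(p+1, r) color windows ending r-rainbow length-p
        s-v walks. Without representatives they may grow like |C|^r."""
        fams = [{s: {(color[s],)}}]          # fams[p][v] = W^p_v
        for p in range(1, max_len + 1):
            cur = {}
            for u, windows in fams[p - 1].items():
                for v in graph[u]:           # extend walks along arcs
                    for w in windows:
                        if color[v] in w:    # color repeats within r steps
                            continue
                        nw = (w + (color[v],))[-r:]  # last min(p+1, r) colors
                        cur.setdefault(v, set()).add(nw)
            fams.append(cur)
        return fams

    # An r-rainbow length-p s-v walk exists iff fams[p].get(v) is nonempty.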
To compute an ordered r-representative for Wp v we make use of an algorithm by Fomin et al. (2016) to compute representatives of (unordered) set families. Let us first define (unordered) representatives for families of sets. Definition 3 (Unordered representative). Let F be a family of p-element sets and q ∈N. A subfamily bF of F is a qrepresentative for F (written bF ⊆q rep F) if the following holds for every set Y of size at most q: If F contains a set X disjoint from Y , then bF contains a set b X disjoint from Y . While Fomin et al. (2016) state their results for families of independent sets of a matroid, for our purposes, the simpler definition for set families (a special case) suffices. Proposition 4 ((Fomin et al. 2016)). There is an algorithm that, given a family F of p-sets over a universe U and an integer q ∈N0, computes in time O |F| · p+q p  pω + |F| · p+q q ω−1 a q-representative bF for F of size at most p+q p  . We will first show how to translate a sequence σ into a corresponding (unordered) set so that we can make use of the concept of representatives for unordered set families. After that, we are ready to devise an algorithm for Theorem 1. Consider two r-rainbow walks Wσ and Wρ with color sequences σ = (a1, . . . , ap) and ρ = (b1, . . . , bq). We wish to define two functions π and π′ that map color sequences to subsets of C × [r] such that σ is r-compatible to ρ if and only if π(σ) ∩π′(ρ) = ∅. By definition, σ is r-compatible to ρ if and only if bi does not equal any of the last r −i + 1 entries of σ. Define π′(ρ) := {(bi, i) | i ∈[min{r, q}]} and π(σ) := {(aj, i) | i ∈[r], j ∈[p −(r −i), p] ∩N}. (2) Then, (bi, i) /∈π(σ) if and only if bi does not appear among the last r −i + 1 entries of σ. In other words, we have the following. Observation 5. A sequence σ is r-compatible to a sequence ρ if and only if π(σ) ∩π′(ρ) = ∅. We now have the promised connection between ordered and unordered representatives. Lemma 6. Let W be a family of p-sequences and let F := {π(σ) | σ ∈W}. If bF is an r-representative of F, then c W := {σ | π(σ) ∈bF} is an ordered r-representative of W. Proof. Consider a sequence ρ = (b1, . . . , br). Suppose that σ = (a1, . . . , ap) ∈W is r-compatible to ρ. Then by Observation 5, π(σ) is disjoint from π′(ρ). Therefore, there is a set π(bσ) ∈bF which is disjoint from π′(ρ); Thus by Observation 5, bσ is r-compatible to ρ. Consequently, we can use Proposition 4 to compute ordered r-representatives. Corollary 7 (⋆). There is an algorithm that, given a family W of p-sequences over a universe U and an integer r ∈ N0, computes in time O |W|·(r·e)rrω +|W|·(r·e)(ω−1)r an ordered r-representative c W of W of size at most (r · e)r. Ordered representatives are transitive, just like their unordered counterparts (Fomin et al. 2016). Observation 8 (⋆). If c W ⊆r orep f W and f W ⊆r orep W, then c W ⊆r orep W. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8015 With a way to efficiently compute ordered r-representatives at hand, we can compute an r-rainbow s-t walk of length ℓwith the following routine. Recall that we are given a graph G with two terminals s and t, a coloring c: V (G) → C, and two integers r and ℓas input. Algorithm 1. Set c W0 s := {(c(s))} and for all v ∈V (G) \ {s}, set c W0 v := ∅. Now, for each p = 1, 2, . . . , ℓcompute for all v ∈V (G) the set N p v . If p < r, then N p v := [ u∈N −(v) ( (a1, . . . , ap+1) (a1, . . . , ap) ∈c Wp−1 u and c(v) = ap+1 /∈{a1, . . . , ap} ) If p ≥r, then N p v := [ u∈N −(v) ( (a2, . . . 
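Observation 5 is easy to sanity-check in code. The following sketch (names ours) implements both the direct definition of r-compatibility from the preliminaries and the maps π and π′ from equation (2), and verifies their equivalence on random sequences:

    import random

    def r_compatible(sigma, rho, r):
        """Direct definition: for every j in [r], the length-j suffix of
        sigma must be disjoint from the length-min(r-j+1, |rho|) prefix
        of rho."""
        n = len(sigma)
        return all(not (set(sigma[max(0, n - j):]) &
                        set(rho[:min(r - j + 1, len(rho))]))
                   for j in range(1, r + 1))

    def pi(sigma, r):
        """pi from equation (2): pairs (a_j, i) for i in [r] and
        j in [p - (r - i), p] intersected with the positive integers."""
        p = len(sigma)
        return {(sigma[j - 1], i) for i in range(1, r + 1)
                for j in range(max(1, p - (r - i)), p + 1)}

    def pi_prime(rho, r):
        return {(b, i) for i, b in enumerate(rho[:r], start=1)}

    # Observation 5: compatibility holds iff the two sets are disjoint.
    for _ in range(1000):
        r = random.randint(1, 4)
        s = random.choices('abc', k=random.randint(1, 6))
        t = random.choices('abc', k=random.randint(1, 6))
        assert r_compatible(s, t, r) == (not (pi(s, r) & pi_prime(t, r)))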
, ar+1) (a1, . . . , ar) ∈c Wp−1 u and c(v) = ar+1 /∈{a1, . . . , ar} ) . Compute an ordered r-representative c Wp v ⊆r orep N p v . Return yes if and only if c Wq t ̸= ∅for some q ∈[ℓ]. Let us show that Algorithm 1 indeed computes representatives of the family Wp v as defined in equation (1), and hence, is correct. Lemma 9. For each v ∈V (G) and p ∈[0, ℓ], the family c Wp v computed by Algorithm 1 contains at most (r · e)r sets and is an ordered r-representative of Wp v as defined in (1). Proof. Our proof is by induction. By the initial assignments of c W0 v, the statement holds for p = 0. Now, fix some p ∈[ℓ] and assume that c Wp−1 u ⊆r orep Wp−1 u for all u ∈V (G). Let ρ = (b1, . . . , bq) be a color sequence with q ≤r and let v ∈V (G). Suppose that there exists a sequence σ ∈Wp v that is r-compatible to ρ. We claim that there exists a bσ ∈ c Wp v that is r-compatible to ρ; thus proving that c Wp v ⊆r orep Wp v. As the bound on |c Wp v| then follows from Corollary 7, we are done once the claim is proven. We will prove the claim first for p ≥r and afterwards for p < r. If p ≥r, then σ is an r-sequence (a2, . . . , ar+1) and there exists an r-rainbow length-p s-v walk W whose color sequence ends on σ. Let a1 be the color that W visits just before visiting the colors in σ, that is, the color sequence of W ends on (a1, . . . , ar+1). Further, let u be the penultimate vertex visited by W and let W ′ be the length-(p −1) subwalk of W ending on u. Then the color sequence of W ′ ends on σ′ := (a1, . . . ar). Let ρ′ := (ar+1) ◦ρ. Observe that σ′ is r-compatible to ρ′, due to σ being r-compatible to ρ and W ′ being r-rainbow. Thus, by our induction hypothesis and the definition of ordered r-representatives, there exists a sequence bσ′ ∈c Wp−1 u that is r-compatible to ρ′. Let c W ′ be the s-u walk corresponding to bσ′ and let bσ′ := (ba′ 1, . . . , ba′ r). Define bσ := (ba′ 2, . . . , ba′ r) ◦(ar+1). As u ∈N −(v) and c(v) = ar+1 /∈{ba′ 1, . . . , ba′ r}, we have that bσ ∈N p v . Finally, observe that bσ is r-compatible with ρ. Thus, N p v ⊆r orep Wp v. Since c Wp v ⊆r orep N p v , the claim follows for p ≥r due to the transitivity of ordered r-representatives (Observation 8). If p < r, then σ is a (p + 1)-sequence (a1, . . . , ap+1) and there exists an r-rainbow length-p s-v walk W whose color sequence ends on σ. Indeed, σ is the entire color sequence of W. This case is similar to the above, but there is no color that is visited before σ in W. Hence, in this case, σ′ := (a1, . . . , ap) and bσ := bσ ◦(ap+1). The remainder of the proof is the same. Next, we show that the algorithm runs in the claimed running time. Theorem 1 then follows from Lemmas 9 and 10. Lemma 10 (⋆). Algorithm 1 runs in O((r · e)ωr · ℓm) time on m-arc digraphs. Bounding the length of the walk. Note that the length of a walk may be significantly longer than the running time of the above algorithm for LOCALLY RAINBOW WALK. Hence, Theorem 1 does not imply fixed-parameter tractability for the problem of finding an r-rainbow s-t walk of any length. We can however show that we can always find an r-rainbow s-t walk in which the number of visits to each vertex is bounded by a function in r. The idea is as follows. Consider a vertex v that is visited multiple times by an rrainbow walk W. Relevant for us are the consecutive subsequences of length r−1 of the color sequence τ of W that appear immediately before and after each visit of v. 
Consider the i-th visit and let σi and ρi be the consecutive length(r −1) subsequences of τ before and after the i-th visit of v, that is, the sequence σi ◦c(v) ◦ρi is a consecutive subsequence of τ. Now, if for a later visit, say the j-th visit of v, we have that σi◦c(v) is r-compatible to ρj, then we can skip all vertices between i and j. We will show that the number of visits to v is bounded by a function in r, or else we can skip visits. For this, we will make use of a skewed variant (Frankl 1982) of the seminal Bollobás’ Two Families Theorem (Bollobás 1965). Details are deferred to the appendix. Combining the above with Theorem 1, we can prove that deciding whether a graph contains an r-rainbow s-t walk is fixed-parameter tractable when parameterized by r. Corollary 11 (⋆). Given a vertex-colored m-arc digraph and two vertices s and t, one can decide in O((r · e)r(ω+1) · m) time whether the graph contains an r-rainbow s-t walk. A Matching Lower Bound A close look at the above algorithm shows that using the algorithm by Fomin et al. (2016) to compute ordered r-representatives is actually not optimal as the underlying unordered representative family also stores representatives for any set Y which does not correspond to π′(ρ) for any sequence ρ. This raises hope for a more efficient algorithm. We can however show that finding an algorithm with a running time with an exponent that is asymptotically smaller than ours (Theorem 1) would break the Exponential Time Hypothesis (ETH) (Impagliazzo and Paturi 2001). The provided reduction also proves the problem to be NP-hard. It also holds for shortest walks, and thus also for shortest paths. Altogether, we will prove the following in this section. Theorem 12 (⋆). Even if ℓ= dist(s, t) and on acyclic digraphs, both LOCALLY RAINBOW WALK and LOCALLY RAINBOW PATH are NP-hard and, unless the ETH fails, cannot be solved in 2o(r log r) · nO(1) time on n-vertex digraphs. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8016 = {(1, 2), (2, 2)} = {(1, 1), (2, 2), (2, 3), (3, 3)} = {(2, 1), (3, 1), (3, 2)} · · · s u2 u3 um t w2,1 1 w3,3 2 v1,2 2 Figure 2: Example for the construction of Theorem 12 on a 3 × 3-grid with F = { , , . . . , }. Gray (thin) arcs exist independently of the current subset Fi ∈F. Black arcs point to elements in Fi and are the only way to reach the top copy. The highlighted path selects the hitting set {(1, 2), (2, 1), (3, 3)}, thereby visiting, i.a., w2,1 1 , v1,2 2 , and w3,3 2 . The path visits one black arc for each Fi ∈F. We will provide a polynomial-time reduction from the k × k PERMUTATION HITTING SET problem, where one is given a family F of subsets of a universe [k] × [k] (which we will treat like a grid with k rows and columns), and one is asked whether there is a hitting permutation, that is, a bijection ϕ: [k] →[k] such that each F ∈F contains an element (i, ϕ(i)) with i ∈[k]. Unless the ETH fails, k × k PERMUTATION HITTING SET cannot be solved in 2o(k log k) · (k + |F|)O(1) time (Lokshtanov, Marx, and Saurabh 2018). The rough idea for the construction is as follows (see Fig. 2 for an illustrative example): For each set in F, we create a pair of copies (lower and upper) of our universe, which we will be able to traverse column by column. Each row receives a color, and the subpath length r := k is set with the intention that we always have to visit the colors in the same order for each set in F. Hence, for each F ∈F, we must pick the same permutation. 
Now, we always start “left” of the two copies for F, and we can always go to the lower copy, but in order to get to the next set in F, we need to get to the upper copy; This is only possible if one element from our permutation is in F. Hence, there is an r-rainbow s-t walk (indeed, by construction, it will always be a path) if and only if there is a hitting permutation for F. As r = k in our reduction, an algorithm running in time 2o(r log r) · nO(1) would refute the ETH. Details about the reduction can be found in the appendix. Remarkably, the ETH lower bound holds even if one asks whether there exists an r-rainbow s-t walk of arbitrary length. This complements the fixed-parameter tractability of Corollary 11. Corollary 13. Unless the ETH breaks, there is no 2o(r log r)· nO(1)-time algorithm for the problem of deciding whether there is an r-rainbow s-t walk of arbitrary length in a given vertex-colored acyclic digraph with n vertices. Finally, as every r-rainbow s-t walk in the constructed instance is a shortest s-t path, we can add to every arc (u, v) its antiparallel arc (v, u). The resulting graph thus is symmetric. Corollary 14. Even if ℓ= dist(s, t) and on symmetric digraphs, both LOCALLY RAINBOW WALK and LOCALLY RAINBOW PATH are NP-hard and, unless the ETH breaks, cannot be solved in 2o(r log r) · nO(1)-time on n-vertex digraphs. Paths In this section, we study the parameterized complexity of LOCALLY RAINBOW PATH with respect to the locality parameter r and the detour length k := ℓ−dist(s, t). NP-Hardness for Constant Locality Values We now provide a dichotomy for LOCALLY RAINBOW PATH parameterized by the locality parameter r. Obviously, if r = 0 then any s-t path is a solution. We will now show that the problem remains efficiently solvable when r ≤2, but prove NP-hardness for all values r ≥3. Clearly, if r > 0, we can assume that there is no arc (u, v) with c(u) = c(v) in our digraph. Thus, the task of finding a 1-rainbow s-t path (or s-t walk) reduces to finding any s-t path. Observation 15. Finding a shortest 1-rainbow s-t walk or s-t path is linear-time solvable. As soon as r ≥2, the problem becomes much harder. Theorem 16 (⋆). LOCALLY RAINBOW PATH is NP-hard for any fixed value of r ≥2. We provide a polynomial-time reduction from 3-SAT, where given a Boolean formula φ in conjunctive normal form such that each clause contains exactly three literals (3CNF), the question is whether there exists a truth assignment to the variables for which φ evaluates to true. The problem is known to be NP-hard, even if each variable appears exactly twice positive and twice negative in the given formula (Berman, Karpinski, and Scott 2003, Theorem 1). We provide our construction for r = 2 and describe afterwards how it can be adapted to the case when r > 2. In a nutshell, our reduction works as follows (compare with Fig. 3). Our path first needs to go through the variable gadgets, in which there are two branches (for true and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8017 (a) vi 1 vi 1 v1 i 2 v1 i 2 v2 i 3 v2 i 3 1 ¯vi 2 3 ¯vi 2 3 ¯vi 1 2 ¯vi 1 2 yi 2 3 v′ i (b) 1 wj 1 w′ j 3 3 3 2 2 2 (c) 1 1 3 2 vi ¯vi 1 ¯vi 2 1 wj 1 w′ j 3 3 2 2 2 (d) 3 v′ n 4 2 1 w1 Figure 3: (a) The variable gadget and (b) the clause gadget in the construction of Theorem 16. (c) An example showing how a literal path corresponding to literal ¯xi in clause cj is attached to the variable gadget at ¯v1 i . (d) The connection between the last variable gadget and the first clause gadget. 
false) for each variable. Afterwards, it needs to go through the clause gadgets, in which there is a branch for each literal. Each branch visits a vertex of the corresponding variable gadget. As we are looking for a path, this vertex must be on the branch that was not yet visited by our path. Finally, the colors in the graph are chosen such that taking any forbidden turn (e.g., from a variable gadget directly into a clause gadget) would breach the local rainbowness constraint. For details on the proof of Theorem 16 we refer to the appendix. We close this section by remarking that, on symmetric digraphs, finding a shortest 2-rainbow path becomes efficiently solvable. (The case r ≥3 remains NP-hard by a reduction similar to the one above.) The idea here is to transform the vertex coloring into an edge coloring (i.e., every symmetric arc is assigned one color). We say that a walk is properly colored (with respect to some edge coloring) if no two consecutive symmetric arcs share the same color. Lemma 17 (⋆). Let G be a symmetric digraph, c a vertex coloring, and W an s-t walk. Assume that no two adjacent vertices have the same color. Then, W is 2-rainbow if and only if it is properly colored with respect to the edge coloring c′((u, v)) := {c(u), c(v)}. As a properly edge-colored s-t path can be found in linear time in symmetric digraphs (Szeider 2003, Cor. 10), we obtain the following. Observation 18. Finding a shortest 2-rainbow s-t walk in a symmetric digraph is solvable in linear time. Fixed-Parameter Tractability with Detour Length We now prove our problem to be fixed-parameter tractable with respect to r + k where k denotes the length of a detour the path may take (i.e., the desired length ℓis dist(s, t)+k). Let us first exclude some degenerate cases. If k < 0, then ℓ< dist(s, t), and we have a trivial no-instance at hand. If k = 0, then any solution must be a shortest path. As any shortest walk is also a shortest path, we can use our algorithm for LOCALLY RAINBOW WALK, see Theorem 1. Finally, we may assume that each of the n vertices distance to t v0 v2 · · · · · · · · · · · · vℓ s t ... vi1 vi2 vij ≤2k + 1 Figure 4: An exemplary s-t path P, circles marking distance separators. The x-axis shows the vertices of P in the order of their appearance. The y-axis shows the distance of the current vertex to t. Our algorithm exploits the property that the subpaths between any two distance separators are short (i.e., of length at most 2k +1) and internally vertex-disjoint. in G reaches t. In all, we have that dist(s, t) < ℓ< n and thus 0 < k < n −dist(s, t). Theorem 19 (⋆). LOCALLY RAINBOW PATH can be solved in rO(r+k) · ℓn2m time, where n and m are the number of vertices and arcs of the input digraph, and ℓis the length and k is the detour length of the desired path. Our approach for Theorem 19 is to merge our above techniques to keep track of the last r vertices with a central observation for paths with detour length k. To this end, we will show that any hypothetical solution P ∗visits in regular intervals so-called distance separators — see Fig. 4 for an illustration. At these points, we can partition the search space as we know that the subpath of P ∗between two consecutive distance separators lies disjoint from any subpath between two other consecutive distance separators. We then use a subroutine to compute a representative of all r-rainbow u-v paths of some fixed length whose running time is fixedparameter tractable with respect to its length. 
This fits into the promised running time as any two distance separators u and v are at most 2k + 1 vertices apart (Lemma 22). Indeed, such distance separators can be found in any path with bounded detour length. As mentioned earlier, this approach is inspired by works on parameterizations with respect to the detour length by Bezáková et al. (2019) and Zschoche (2023). The challenge in our setting is that we need to keep track of the ordered representatives. We start off with a basic observation. We denote for every v ∈V (G) by d(v) := dist(v, t) the distance to t. Observation 20 (⋆). For any s-t path P = (s = v0, . . . , vℓ= t) with ℓ≤d(s) + k we have i ≤d(s) − d(vi) + k for each i ∈[0, ℓ]. Definition 21 (Distance separator). Let P = (s = v0, v1, . . . , vℓ= t) be a path with detour length k := ℓ− dist(s, t). Then vi is a distance separator if d(vi) < d(vj) for all j < i and d(vi) > d(vj) for all j > i. By definition, if we have two distance separators vi and vj, j > i, then we know that between vi and vj, P only visits vertices w with d(vi) > d(w) > d(vj). Zschoche (2023) showed that a path with detour length k regularly visits distance separators. We need a faintly different statement. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8018 Lemma 22 (⋆). Let P = (s = v0, v1, . . . , vp = v) be a path of length at most d(s)−d(v)+k and let v be a distance separator. Then, for all i ∈[0, p −2k], there is a j ∈[0, 2k] such that vi+j is a distance separator. Our algorithm now guesses the positions of the distance separators. As the subpaths between the distance separators are (internally) vertex-disjoint, we then only need to find an r-rainbow path matching the color sequence of the subpath to the last distance separator and only uses vertices after this distance separator. For any two distance separators u and v in our graph G, we define Gu,v := G[Bu,v ∪{u, v}] Bu,v := {w ∈V (G) | d(u) > d(w) > d(v)} if u ̸= s, {w ∈V (G) | d(w) > d(v)} if u = s. On these graphs, we will compute an ordered r-representative for the family of r-rainbow u-v paths of some length in Gu,v. Indeed, as we will append these paths to some r-rainbow s-u path that ends on some color sequence τ, we need the family to be r-compatible with τ. We say that a path P = (v0, . . . , vq) fits τ if τ is r-compatible to (c(v0), . . . c(vmin{r−1,q})). We will need to compute an ordered r-representative for the following family for any two distance separators u and v in G, any integer q ∈N0, and any color sequence τ of length at most r: Pq τ (Gu,v) :=        σ |σ| = min{q + 1, r} and there is an r-rainbow length-q u-v path in Gu,v that fits τ and whose color sequence ends on σ        Computing such families can be done with an adaptation of Algorithm 1 for walks, the difference being that we additionally need to remember the set of vertices visited so far by our path (refer to the full version). Remembering these vertices comes at the cost of an additional running time factor of rO(q), where q is the length of the path. As the path length is an upper bound for r, this proves LOCALLY RAINBOW PATH to be fixed-parameter tractable with respect to the path length. Lemma 23 (⋆). Given a digraph G with m arcs, an integer r, two distance-separators u, v ∈V (G), an integer q ∈N0, and a color sequence τ of length at most r, one can compute in rO(r+q) ·m time an ordered r-representative for Pq τ (Gu,v) of size at most rO(r+q). 
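Definition 21 translates directly into code. A minimal sketch (names ours), assuming the distances d(v) = dist(v, t) have been precomputed, e.g., by a BFS on the reversed digraph:

    def distance_separators(path, d):
        """Positions i such that path[i] is a distance separator in the
        sense of Definition 21: d(path[i]) lies strictly below every
        earlier and strictly above every later distance on the path."""
        vals = [d[v] for v in path]
        return [i for i in range(len(vals))
                if all(vals[i] < vals[j] for j in range(i))
                and all(vals[i] > vals[j] for j in range(i + 1, len(vals)))]

    # A detour path's dist-to-t profile; only the last two positions
    # separate everything before from everything after.
    dist = {i: v for i, v in enumerate([5, 4, 5, 3, 2, 3, 1, 0])}
    print(distance_separators(list(range(8)), dist))    # [6, 7]

The quadratic scan could be replaced by prefix minima and suffix maxima for a linear-time test.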
Now that we know how to compute the families bPq τ (Gu,v), we can state the main algorithm. Herein, for every p ∈[0, ℓ] and v ∈V (G), we are interested in the family Rp v :=   σ |σ| = min{p + 1, r} and there is an r-rainbow length-p s-v path in Gs,v whose color sequence ends on σ   . (3) Hence, there is a length-ℓr-rainbow s-t path P ∗if and only if Rℓ t is nonempty. As, by Observation 20, P ∗will have v as its p-th vertex only if p ≤d(s) −d(v) + k, we only need to consider those families Rp v for which this inequality holds. Algorithm 2. Set bR0 s := {(c(s))} and bR0 v := ∅for all v ∈ V (G) \ {s}. Now, for each p = 1, 2, . . . , ℓ, for each v ∈ V (G) with p ≤d(s) −d(v) + k, compute Sp v, which is [ u∈V (G), q∈[min{2k+1,p}]            σ′◦σ ∃σ′′ : σ′′◦σ′ ∈bRp−q u , |σ′|=max{0,min{p−q+1,r−q}} |σ| = min{q, r}, and σ ∈bPq (σ′′◦σ′)(Gu,v)            and compute bRp v ⊆r orep Sp v using Corollary 7. Return yes if and only if bRℓ′ t ̸= ∅for some ℓ′ ∈[ℓ]. To prove Theorem 19, we still need to analyze the running time and show that that bRp v is an ordered r-representative for Rp v. Lemma 24 (⋆). For each v ∈V (G) and p ∈[0, ℓ] with p ≤ d(s) −d(v) + k, the family bRp v computed in Algorithm 2 is of size at most (r · e)r and is an ordered r-representative for Rp v as defined in (3). Moreover, Algorithm 2 runs in rO(r+k) · ℓn2m time. Conclusion We introduced a local rainbow constraint to the classic problem of finding s-t paths and walks, modeling scenarios in which resources (i.e., colors) are replenished over time. For walks, we are able to prove fixed-parameter tractability for the locality parameter r thanks to a new adaptation of the representative sets technique. In contrast, LOCALLY RAINBOW PATH remains NP-hard even for constant r due to the added non-local constraint of forbidding self-intersections. However, when the allowed length of the path is not too large in comparison to the distance between its endpoints, then the no-intersection constraints effectively become local again. This is exploited to prove LOCALLY RAINBOW PATH to be fixed-parameter tractable with the combined parameter r + k where k is the detour length. Towards future work, we believe that local rainbowness is only the tip of the iceberg when it comes to interesting local constraints. A straightforward generalization would be to allow for multi-colored vertices, so as to model a setting in which multiple types of resources can be used at once. Another natural variant would be to relax the local rainbowness constraint of the subpaths, allowing some bounded number of vertices to share the same color. One could also extend our local rainbowness constraint to other connectivity problems. Canonical candidates would be the traveling salesperson problem or the problem of finding multiple disjoint s-t paths. Similarly, one could be interested in finding Steiner trees in which all subpaths are locally rainbow (a generalization of rainbow Steiner trees (Ferone, Festa, and Guerriero 2022)), or vertex sets whose deletion destroys all s-t paths that are not locally rainbow. We already observed that in practice, the locality constraint is usually motivated by some regeneration over time. Therefore, it may be sensible to study the local rainbowness constraints also on temporal graphs, such as finding temporal walks (Bentert et al. 2020) or paths (Casteigts et al. 2021; Zschoche 2023). 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8019 Acknowledgments Till Fluschnik was supported by the DFG, project AFFA (BR 5207/1). References Adler, J. D.; Mirchandani, P. B.; Xue, G.; and Xia, M. 2016. The electric vehicle shortest-walk problem with battery exchanges. Networks and Spatial Economics, 16(1): 155–173. Agrawal, A.; Jain, P.; Kanesh, L.; and Saurabh, S. 2020. Parameterized Complexity of Conflict-Free Matchings and Paths. Algorithmica, 82(7): 1939–1965. Alman, J.; and Williams, V. V. 2021. A Refined Laser Method and Faster Matrix Multiplication. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA ’21), 522–539. Alon, N.; Yuster, R.; and Zwick, U. 1995. Color-Coding. Journal of the ACM, 42(4): 844–856. Bang-Jensen, J.; and Gutin, G. Z. 2009. Digraphs - Theory, Algorithms and Applications, Second Edition. Springer Monographs in Mathematics. Springer. ISBN 978-1-84800997-4. Benini, M.; Blasi, E.; Detti, P.; and Fosci, L. 2023. Solving crop planning and rotation problems in a sustainable agriculture perspective. Computers and Operations Research, 159: 106316. Bentert, M.; Himmel, A.; Nichterlein, A.; and Niedermeier, R. 2020. Efficient computation of optimal temporal walks under waiting-time constraints. Applied Network Science, 5(1): 73. Bentert, M.; Kellerhals, L.; and Niedermeier, R. 2023. Fair Short Paths in Vertex-Colored Graphs. In Proceedings of the 37th Conference on Artificial Intelligence (AAAI ’23), 12346–12354. AAAI Press. Berman, P.; Karpinski, M.; and Scott, A. D. 2003. Approximation Hardness of Short Symmetric Instances of MAX3SAT. Electronic Colloquium on Computational Complexity, TR03-049. Bezáková, I.; Curticapean, R.; Dell, H.; and Fomin, F. V. 2019. Finding Detours is Fixed-Parameter Tractable. SIAM Journal on Discrete Mathematics, 33(4): 2326–2345. Bhattacharya, S. 2010. Search-Based Path Planning with Homotopy Class Constraints. In Fox, M.; and Poole, D., eds., Proceedings of the 24th Conference on Artificial Intelligence (AAAI ’10). AAAI Press. Bollobás, B. 1965. On generalized graphs. Acta Mathematica Academiae Scientiarum Hungarica, 16: 447–452. Broersma, H.; and Li, X. 1997. Spanning trees with many or few colors in edge-colored graphs. Discussiones Mathematicae Graph Theory, 17(2): 259–269. Casteigts, A.; Himmel, A.; Molter, H.; and Zschoche, P. 2021. Finding Temporal Paths Under Waiting Time Constraints. Algorithmica, 83(9): 2754–2802. Chen, L.; Li, X.; and Shi, Y. 2011. The complexity of determining the rainbow vertex-connection of a graph. Theoretical Computer Science, 412(35): 4531–4535. Cygan, M.; Fomin, F. V.; Kowalik, L.; Lokshtanov, D.; Marx, D.; Pilipczuk, M.; Pilipczuk, M.; and Saurabh, S. 2015. Parameterized Algorithms. Springer. Darmann, A.; Pferschy, U.; Schauer, J.; and Woeginger, G. J. 2011. Paths, trees and matchings under disjunctive constraints. Discrete Applied Mathematics, 159(16): 1726– 1735. de Uña, D.; Gange, G.; Schachte, P.; and Stuckey, P. J. 2016. Steiner Tree Problems with Side Constraints Using Constraint Programming. In Proceedings of the 30th Conference on Artificial Intelligence (AAAI ’16), 3383–3389. AAAI Press. Diestel, R. 2016. Graph Theory, volume 173 of Graduate Texts in Mathematics. Springer, 5th edition. Dury, J.; Schaller, N.; Garcia, F.; Reynaud, A.; and Bergez, J. E. 2012. Models to suport cropping plan and crop rotation decisions. A review. Agronomy for Sustainable Development, 32: 567–580. Ferone, D.; Festa, P.; and Guerriero, F. 2022. 
The Rainbow Steiner Tree Problem. Computers and Operations Research, 139: 105621. Fomin, F. V.; Lokshtanov, D.; Panolan, F.; and Saurabh, S. 2016. Efficient Computation of Representative Families with Applications in Parameterized and Exact Algorithms. Journal of the ACM, 63(4): 29:1–29:60. Ford, B. T.; Aggarwal, R.; Kumar, M.; Manyam, S. G.; Casbeer, D.; and Grymin, D. 2022. Backtracking Hybrid A* for Resource Constrained Path Planning. In Proceedings of the AIAA Scitech 2022 Forum (AIAA ’22), 1592. Frankl, P. 1982. An Extremal Problem for Two Families of Sets. European Journal of Combinatorics, 3: 125–127. Halldórsson, M. M.; Kortsarz, G.; Mitra, P.; and Tonoyan, T. 2018. Spanning Trees With Edge Conflicts and Wireless Connectivity. In Proceedings of the 45th International Colloquium on Automata, Languages, and Programming (ICALP ’18), volume 107, 158:1–158:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik. Handler, G. Y.; and Zang, I. 1980. A dual algorithm for the constrained shortest path problem. Networks, 10(4): 293– 309. Impagliazzo, R.; and Paturi, R. 2001. On the Complexity of k-SAT. Journal of Computer and System Sciences, 62(2): 367–375. Impagliazzo, R.; Paturi, R.; and Zane, F. 2001. Which Problems Have Strongly Exponential Complexity? Journal of Computer and System Sciences, 63(4): 512–530. Irnich, S.; and Desaulniers, G. 2005. Shortest Path Problems with Resource Constraints, 33–65. Springer US. Jacob, A.; Włodarczyk, M.; and Zehavi, M. 2023. Long Directed Detours: Reduction to 2-Disjoint Paths. arXiv:2301.06105. Kowalik, L.; and Lauri, J. 2016. On finding rainbow and colorful paths. Theoretical Computer Science, 628: 110– 114. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8020 Lokshtanov, D.; Marx, D.; and Saurabh, S. 2018. Slightly Superexponential Parameterized Problems. SIAM Journal on Computing, 47(3): 675–702. Pugliese, L. D. P.; and Guerriero, F. 2013. A survey of resource constrained shortest path problems: Exact solution approaches. Networks, 62(3): 183–200. Smith, O. J.; Boland, N.; and Waterer, H. 2012. Solving shortest path problems with a weight constraint and replenishment arcs. Computers and Operations Research, 39(5): 964–984. Szeider, S. 2003. Finding paths in graphs avoiding forbidden transitions. Discrete Applied Mathematics, 126(2-3): 261– 273. Turchetta, M.; Corinzia, L.; Sussex, S.; Burton, A.; Herrera, J.; Athanasiadis, I.; Buhmann, J. M.; and Krause, A. 2022. Learning Long-Term Crop Management Strategies with CyclesGym. In Proceedings of the 35rd Annual Coference on Advances in Neural Information Processing Systems (NeurIPS ’22). Uchizawa, K.; Aoki, T.; Ito, T.; Suzuki, A.; and Zhou, X. 2013. On the Rainbow Connectivity of Graphs: Complexity and FPT Algorithms. Algorithmica, 67(2): 161–179. Zschoche, P. 2023. Restless Temporal Path Parameterized Above Lower Bounds. In Proceedings of the 40th International Symposium on Theoretical Aspects of Computer Science (STACS ’23), 55:1–55:16. Schloss Dagstuhl - LeibnizZentrum für Informatik. Zündorf, T. 2014. Electric Vehicle Routing with Realistig Recharging Models. Master thesis, Karlsruhe Institute of Technology. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8021
2024
891
18,730
Approximate Integer Solution Counts over Linear Arithmetic Constraints

Cunjing Ge1, 2
1National Key Laboratory for Novel Software Technology, Nanjing University, China
2School of Artificial Intelligence, Nanjing University, China
[email protected]

Abstract

Counting integer solutions of linear constraints has found interesting applications in various fields. It is equivalent to the problem of counting lattice points inside a polytope. However, state-of-the-art algorithms for this problem become too slow for even a modest number of variables. In this paper, we propose a new framework to approximate the lattice counts inside a polytope with a new random-walk sampling method. The counts computed by our approach are proved to be approximately bounded by an (ϵ, δ)-bound. Experiments on extensive benchmarks show that our algorithm can solve polytopes with dozens of dimensions, which significantly outperforms state-of-the-art counters.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

As one of the most fundamental types of constraints, linear constraints (LCs) have been studied thoroughly in many areas. In this paper, we consider the problem of approximately counting the number of integer solutions of a set of LCs. This problem has many applications, such as counting-based search (Zanarini and Pesant 2007; Pesant 2016), simple temporal planning (Huang et al. 2018), and probabilistic program analysis (Geldenhuys, Dwyer, and Visser 2012; Luckow et al. 2014). It also includes as special cases several combinatorial counting problems that have been studied, like estimating the permanent of a matrix (Jerrum and Sinclair 1989; Gamarnik and Katz 2010; Harviainen, Röyskö, and Koivisto 2021), the number of contingency tables (Cryan et al. 2002; Desalvo and Zhao 2020), and solutions to knapsack problems (Dyer et al. 1993). Moreover, it can be incorporated as a subroutine for #SMT(LA) (Ge et al. 2018). Since a set of LCs represents a convex polytope, its integer solutions correspond to lattice points inside the polytope. Accordingly, we do not distinguish the concepts of polytopes and sets of LCs in this paper. It is well-known that counting lattice points in a polytope is #P-hard (Valiant 1979). On the implementation front, the first practical tool for lattice counting is LATTE (Loera et al. 2004), which is an implementation of Barvinok's algorithm (Barvinok 1993, 1994). The tool BARVINOK (Verdoolaege et al. 2007) is the successor of LATTE with generally better performance. In practice, it often still has difficulties when the number of variables is greater than 10 (preventing many applications). The relation between the number of lattice points inside a polytope and the volume of the polytope has been studied for approximate integer counting (Ge et al. 2019); however, the resulting approximation bounds may inevitably be far off from the exact counts. A more recent work (Ge and Biere 2021) introduced factorization preprocessing techniques to reduce polytope dimensionality, which are orthogonal to lattice counting but require polytopes in specific forms. An algorithm for sampling lattice points in a polytope was introduced in (Kannan and Vempala 1997), which can be used to approximate the integer solution count, though we are not aware of any implementation.
Since then, there have been many works on sampling real points, such as the Hit-and-run method (Lovász 1999; Lovász and Vempala 2006a), and on approximating polytope volumes (Lovász and Deák 2012; Cousins and Vempala 2015, 2018). As a result, the state-of-the-art volume approximation algorithms can solve general polytopes with around 100 dimensions. Naturally, we wonder whether they can be extended to the integer case.
The primary contribution of this paper is a novel approximate lattice counting algorithm, which includes new methods with the following theoretical results.
• A lattice sampling method is introduced, which is a combination of Hit-and-run random walk and rejection sampling. We prove that it generates samples following the limiting distribution of the Hit-and-run method, which is nearly uniform.
• A dynamic stopping criterion is proposed, which can be calculated from the variance of the approximations while the algorithm is running. We prove that the relative errors of the outputs approximately lie in [1 − ϵ, 1 + ϵ] with probability at least 1 − δ, for given ϵ, δ.
We evaluated our algorithm on an extensive set of random and application benchmarks. We not only compared our tool with integer counters, but also with #SAT counters by translating LCs into propositional logic formulas. Experimental results show that our approach scales to polytopes with up to 80 dimensions, which significantly outperforms the state-of-the-art counters. We also observe that the counts computed by our algorithm are bounded well by the theoretical guarantees.

Background
In this section, we first present definitions and notations, and then briefly describe the sampling and volume approximation algorithms which inspired us.

Notations and Preliminaries
Definition 1. A linear constraint is an inequality of the form a1x1 + · · · + anxn op b, where the xi are numeric variables, the ai are constant coefficients, and op ∈ {<, ≤, >, ≥, =}.
Without loss of generality, a set of linear constraints can be written in the form {A⃗x ≤ ⃗b}, where A is an m × n coefficient matrix and ⃗b is an m × 1 constant vector. From a geometric point of view, a linear constraint is a halfspace, and a set of linear constraints is an n-dimensional polytope.
Definition 2. An n-dimensional polytope is of the form P = {⃗x ∈ Rn : A⃗x ≤ ⃗b}.
Naturally, Zn represents the set of all integer points (points with all integer coordinates). Thus the integer models of the linear constraints can be represented by {⃗x ∈ Zn : A⃗x ≤ ⃗b}. This is the same as the set of integer points inside the corresponding polytope, i.e., {⃗x ∈ Zn : A⃗x ≤ ⃗b} = P ∩ Zn. In this paper, we assume that the polytopes are bounded, i.e., they have a finite number of integer solutions; otherwise, unboundedness can be easily detected via Integer Linear Programming (ILP). Note that in our experiments, the running time of ILP is usually negligible compared to that of the integer counting.
Definition 3. More notations:
• Let A = (⃗A1, . . . , ⃗Am)T and hi = ⃗Ai⃗x ≤ bi, given P = {A⃗x ≤ ⃗b}, i.e., P = h1 ∩ . . . ∩ hm.
• Let Vol(K) denote the volume of a given convex set K, which is the Lebesgue measure of |K| in Euclidean space.
• Let C(⃗x) denote the unit cube centered at ⃗x.
• Let B(⃗x, r) denote the ball centered at ⃗x, of radius r.
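To make Definitions 1–3 concrete, here is a minimal sketch (our own toy illustration, not from the paper: the 2-D triangle, the brute-force enumeration range, and the use of NumPy are all assumptions):

import numpy as np

# A set of linear constraints {Ax <= b} seen as a polytope P (Defs. 1-2):
# x1 + x2 <= 4, x1 >= 0, x2 >= 0 -- a 2-D triangle.
A = np.array([[1, 1], [-1, 0], [0, -1]])
b = np.array([4, 0, 0])

def is_lattice_point_of_P(x):
    """x belongs to P ∩ Z^n iff it is integral and satisfies Ax <= b."""
    x = np.asarray(x)
    return bool(np.all(x == np.rint(x)) and np.all(A @ x <= b))

# Brute-force count for this toy case (feasible only in tiny dimensions):
count = sum(is_lattice_point_of_P((i, j))
            for i in range(-1, 6) for j in range(-1, 6))
print(count)  # 15 lattice points

Such exhaustive enumeration is exactly what becomes hopeless in higher dimensions, which motivates the sampling-based approach developed next.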
Hit-and-run Method
The Hit-and-run random walk method was first introduced in (Berbee et al. 1987), and its limiting distribution is proved to be uniform. It was employed and improved for volume approximation by (Lovász 1999; Lovász and Vempala 2006a). Experiments (Ge et al. 2018) showed that a variation called Coordinate Directions Hit-and-run is more efficient in practice. Thus we also adopt this variation, which is called Hit-and-run for short in the rest of the paper. It samples a real point starting from ⃗p in a given convex body K by the following steps:
• Select a direction uniformly from the n coordinates.
• Generate the line l through ⃗p with the above direction.
• Pick a next point ⃗p′ uniformly from l ∩ K.
• Start from ⃗p′ and repeat the above steps w times.
Earlier works (Lovász and Vempala 2006a) proved that the Hit-and-run method mixes in w = O(n^2) steps for a random initial point and O(n^3) steps for a fixed initial point. However, further numerical studies (Lovász and Deák 2012; Ge et al. 2018) reported that w = n is sufficient for nearly uniform sampling in polytopes with dozens of dimensions.

Multiphase Monte-Carlo Algorithm
The Multiphase Monte-Carlo Algorithm (MMC) is a polynomial-time randomized algorithm, first introduced in (Dyer, Frieze, and Kannan 1991). At first, the complexity was O∗(n^23); it was reduced to O∗(n^3) by a series of works (Lovász 1999; Lovász and Vempala 2006b; Cousins and Vempala 2018). It consists of the following steps:
• Employ an Ellipsoid method to obtain an affine transformation T, s.t. B(⃗0, 1) ⊂ T(P) ⊂ B(⃗0, ρ), given a ρ > n. Note that Vol(P) = Vol(T(P)) · det(T).
• Construct a series of convex bodies Ki = T(P) ∩ B(⃗0, 2^{i/n}), i = 0, . . . , l, where l = ⌈n log2 ρ⌉. Then Vol(T(P)) = Vol(Kl) = Vol(K0) · ∏_{i=0}^{l−1} Vol(Ki+1) / Vol(Ki). Specifically, K0 = B(⃗0, 1) and Kl = T(P).
• Generate a set Si of sample points by Hit-and-run in Ki+1, where |Si| = f(l, ϵ, δ). Then count |Ki ∩ Si| and use ri = |Ki ∩ Si| / |Si| to approximate the ratio Vol(Ki+1) / Vol(Ki).
• At last, Vol(P) ≈ Vol(B(⃗0, 1)) · ∏_{i=0}^{l−1} ri · det(T).
Note that the function f(l, ϵ, δ) determines the number of samples for given ϵ, δ, s.t. the relative errors of the outputs are bounded in [1 − ϵ, 1 + ϵ] with probability at least 1 − δ.

Algorithm
To apply the MMC framework and the Hit-and-run random walk to the lattice counting problem, several difficulties arise:
• How to efficiently sample lattice points nearly uniformly inside a polytope?
• How to construct a chain of polytopes and then approximate the ratios between them, as in MMC?
• How many sample points are sufficient, given ϵ, δ? Can the relative errors be computed while the algorithm is running?
In this section, we propose new algorithms to answer the above questions, with theoretical analysis.

Lattice Sampling
To sample lattice points in a given polytope P, we combine the Hit-and-run random walk method with rejection sampling. Intuitively, a real point ⃗p = (p1, . . . , pn) corresponds to a lattice point [⃗p] = ([p1], . . . , [pn]). So lattice points can be generated by the Hit-and-run method followed by rounding, denoted [·]. However, the distribution of lattice points generated by sampling real points directly in P is not uniform, because the probability of sampling a lattice point ⃗u close to the polytope's facets may be smaller than that of a point ⃗v with C(⃗v) ⊂ P.
Example 1. In Figure 1, the probability that a blue point is picked by sampling directly in P is smaller than that of a red point. Now let us consider shifting c to l1, l2 and l3. Note that C(u3) ⊂ a ∩ b ∩ l2 ⊂ a ∩ b ∩ l3, but C(u3) ̸⊂ a ∩ b ∩ l1. Then the probability of picking u3 by sampling real points in a ∩ b ∩ l2 or a ∩ b ∩ l3 is the same as for the red points.
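The following is a minimal sketch of one sampling round that combines the Coordinate Directions Hit-and-run steps above with rounding and rejection (our illustration: it assumes NumPy arrays for the constraints of P and of the enlarged polytope P′ introduced below, and it omits the affine transformation T that the paper applies before walking):

import numpy as np

def sample_lattice_point(A, b, A_enl, b_enl, p, w):
    """One lattice sample: w Coordinate Directions Hit-and-run steps in the
    enlarged polytope P' = {x : A_enl x <= b_enl}, then round and reject
    rounded points that fall outside P = {x : A x <= b}."""
    n = len(p)
    while True:
        for _ in range(w):                      # w = n suffices in practice
            d = np.random.randint(n)            # uniform coordinate direction
            # Chord of the line {p + t*e_d} inside P': for every facet i,
            # A_enl[i, d] * t <= b_enl[i] - A_enl[i] @ p.
            slack = b_enl - A_enl @ p
            coef = A_enl[:, d]
            lo = max((s / c for c, s in zip(coef, slack) if c < 0),
                     default=-np.inf)
            hi = min((s / c for c, s in zip(coef, slack) if c > 0),
                     default=np.inf)
            p = p.copy()
            p[d] += np.random.uniform(lo, hi)   # uniform point on the chord
        q = np.rint(p)                          # rounding, denoted [.]
        if np.all(A @ q <= b):                  # rejection: keep only q in P
            return q, p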
Figure 1: An illustration of shifting facets. Here P = △ABC = a ∩ b ∩ c, where a, b and c are the inequalities (halfspaces) corresponding to AB, BC, AC, respectively. The inequalities l1, l2, l3 are parallel to c. The red points {v1, . . . , v5} and the blue points {u1, . . . , u9} are lattice points in P s.t. C(vi) ⊂ P and C(ui) ̸⊂ P respectively.

Therefore, our approach first enlarges P to P′ by shifting the facets of P. Then it repeatedly generates real points in P′ and rejects samples whose corresponding lattice points lie outside P. Obviously, the larger P′ is, the larger the probability of rejection. Now we have a further question:
• How to obtain such a P′ that is as small as possible?
Naturally, P′ should contain all unit cubes centered at lattice points in P, i.e., C(⃗p) ⊂ P′, ∀⃗p ∈ P ∩ Zn. Without loss of generality, let us consider shifting the i-th facet ⃗Ai⃗x ≤ bi. The hyperplane shifting problem is equivalent to the following optimization problem:
min b′i s.t. C(⃗p) ⊂ {⃗Ai⃗x ≤ b′i}, ∀⃗p ∈ P ∩ Zn
⇔ max b′i s.t. [C(⃗p) ∩ {⃗Ai⃗x = b′i}] ̸= ∅, ∃⃗p ∈ P ∩ Zn
⇔ max ⃗Ai⃗x s.t. ⃗x ∈ ∪_{⃗p∈P∩Zn} C(⃗p), ⃗x ∈ Rn.
In the worst case, assume there is a lattice point ⃗q on the i-th facet of P, i.e., ⃗Ai⃗q = bi. Then we have
⇔ max ⃗Ai⃗x s.t. ⃗x ∈ C(⃗q), ⃗x ∈ Rn ⇔ bi + max ⃗Ai⃗x s.t. ⃗x ∈ C(⃗0), ⃗x ∈ Rn. (1)
The optimization problem of Equation (1) can be solved by Linear Programming (LP), e.g., the Simplex algorithm (for the unit cube it even admits a closed form; see the sketch at the end of this subsection). Algorithm 1 is the pseudocode of our sampling method. It first enlarges P to P′ by the shifting method. Next, it applies the Shallow-β-Cut Ellipsoid method on P′, in the same way as MMC. It obtains an affine transformation T such that B(⃗0, 1) ⊂ T(P′) ⊂ B(⃗0, 2n). Then it samples a lattice point ⃗q as [T^−1(⃗p)], where ⃗p is a real sample point generated by Hit-and-run in T(P′), and T^−1 is the inverse of the transformation T. The algorithm only accepts samples inside P. At last, it repeats the above steps until |S| = s. The parameter w will be discussed later in the Implementation Details section.
Why do we adopt an affine transformation T before the random walks? Intuitively, it transforms a 'thin' polytope P′ into a well-rounded one T(P′). Thus it is easier for Hit-and-run walks to get out of corners.

Algorithm 1: Sample() – Sample s lattice points in P
Input: P, s. Parameter: w. Output: S
1: for each ⃗Ai⃗x ≤ bi in P do
2:   vi ← Simplex(max ⃗Ai⃗x s.t. {−1/2 ≤ xi ≤ 1/2})
3: end for
4: P′ ← {A⃗x ≤ ⃗b + ⃗v}
5: T ← Ellipsoid(P′)
6: ⃗p ← ⃗0, S ← ∅
7: repeat s times
8:   do
9:     ⃗p ← HitAndRun(T(P′), ⃗p, w)
10:    ⃗q ← [T^−1(⃗p)]
11:  while ⃗q ̸∈ P
12:  S ← S ∪ {⃗q}
13: end repeat

The following results show that Algorithm 1 generates lattice sample points nearly uniformly.
Lemma 1. The probability of acceptance is |P ∩ Zn| / Vol(P′), if Hit-and-run is a uniform sampler.
Proof. Assume ⃗x is generated by the Hit-and-run method. Then Prob(⃗x accepted) = Prob([T^−1(⃗x)] ∈ P) = Prob(⃗x ∈ ∪_{⃗p∈P∩Zn} T(C(⃗p))) = Vol(∪_{⃗p∈P∩Zn} T(C(⃗p))) / Vol(T(P′)) = [Σ_{⃗p∈P∩Zn} Vol(C(⃗p)) / det(T)] / [Vol(P′) / det(T)] = |P ∩ Zn| / Vol(P′).
Theorem 1. Each point ⃗x ∈ P ∩ Zn gets picked with the same probability, if Hit-and-run is a uniform sampler.
Proof. Consider an arbitrary point ⃗x ∈ P ∩ Zn. Let ⃗p represent a real point generated by Hit-and-run in T(P′). Then Prob(⃗x picked) = Prob(⃗p ∈ T(C(⃗x))) / Prob(⃗p accepted) = [Vol(T(C(⃗x))) / Vol(T(P′))] · [Vol(P′) / |P ∩ Zn|] = 1 / |P ∩ Zn|.
From Lemma 1, we observe that the acceptance probability can be very small when |P ∩ Zn| ≪ Vol(P′).
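As announced above, for the axis-aligned unit cube the per-facet LP of Equation (1), called on line 2 of Algorithm 1, has a simple closed form. The following sketch is our own observation under that assumption; the paper itself simply calls a Simplex solver:

import numpy as np

def facet_shifts(A):
    """Closed form of max A_i @ x s.t. -1/2 <= x_j <= 1/2, for each facet i:
    the maximum is attained at x_j = sign(A_ij)/2, so it equals
    sum_j |A_ij| / 2."""
    return 0.5 * np.abs(A).sum(axis=1)

# P' = {A x <= b + facet_shifts(A)} then contains the unit cube C(p)
# centered at every lattice point p of P, as required.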
Polytopes Chain Generation
Now we consider a chain of polytopes {P0, . . . , Pl} s.t.
|P ∩ Zn| = |P0 ∩ Zn| · ∏_{i=0}^{l−1} |Pi+1 ∩ Zn| / |Pi ∩ Zn|, with |Pi+1 ∩ Zn| / |Pi ∩ Zn| ∈ [rmin, rmax] for i ≤ l − 2, and ∈ [rmin, 1) for i = l − 1, (2)
where [rmin, rmax] bounds the ratios close to 1/2, like [0.4, 0.6]. If the ratios are close to 0, the computational cost of generating points in Pi+1 ∩ Zn by sampling in Pi ∩ Zn will increase. On the other hand, l will be a large number when the ratios are close to 1, which is also not computationally wise. Algorithm 2 presents our method for constructing such Pi's.

Algorithm 2: Subdivision() – Obtain the polytope chain
Input: P, s. Parameter: rmax, rmin, µ. Output: l, {Pi}, {Si}
1: P0 ← GetRect(P)
2: i ← 0, j ← 1 and S0 ← ∅
3: while j ≤ m do
4:   Si ← Sample(Pi, s)
5:   H ← Rn
6:   while |Si ∩ H ∩ hj| / |Si| > rmax and j ≤ m do
7:     H ← H ∩ hj, j ← j + 1
8:   end while
9:   k ← min(j, m)
10:  if |Si ∩ H ∩ hk| / |Si| ≥ rmin then
11:    H ← H ∩ hk, j ← k + 1
12:  else
13:    do
14:      ⃗A′k ← ⃗Ak, or Disturb(⃗Ak, µ) from the second iteration on
15:      find min b′k s.t. |Si ∩ H ∩ {⃗A′k⃗x ≤ b′k}| / |Si| ≥ rmin, b′k ≥ bk
16:    while no feasible b′k found
17:    H ← H ∩ {⃗A′k⃗x ≤ b′k}
18:  end if
19:  Pi+1 ← Pi ∩ H
20:  i ← i + 1
21: end while
22: return i, {P0, . . . , Pi}, {S0, . . . , Si}

Recall that MMC eventually approximates the ratio between the volume of P and that of an inner ball B(⃗0, 1), whose exact volume is easy to compute; it constructs a series of convex bodies Ki inside P. Lemma 1 indicates that the smaller the polytope, the more difficult it is to sample lattice points; naturally, we therefore construct the polytope chain outside P. Our approach starts from an n-dimensional rectangle P0 = Rect(P) ⊃ P, whose exact lattice count is also easy to obtain. Next, it constructs P1 ⊃ P by adding new cutting constraints on P0, s.t. |P1 ∩ Zn| / |P0 ∩ Zn| is close to 1/2. Then it repeatedly generates P1 ⊃ P2 ⊃ . . . until a polytope Pl = P is found.
• How to find cutting constraints that halve the Pi's?
Example 2. In Figure 2, given P = △ABC = a ∩ b ∩ c, and P0 = ADEF ⊃ P. Now we try to cut P0 with a, b and c. We observe that |P0 ∩ a ∩ b ∩ Zn| / |P0 ∩ Zn| = 10/15 > rmax and |P0 ∩ a ∩ b ∩ c ∩ Zn| / |P0 ∩ Zn| = 4/15 < rmin. Then we find d parallel to c s.t. |P0 ∩ a ∩ b ∩ d ∩ Zn| / |P0 ∩ Zn| = 8/15. Thus P1 = P0 ∩ a ∩ b ∩ d.
Figure 2: An example of constructing P0 = ADEF, P1 = ABCHG and P2 = △ABC = P.
Suppose that we already have P0 ⊃ . . . ⊃ Pi with Pi ⊂ h1 ∩ . . . ∩ hj−1 and Pi ̸⊂ hj. Then the cutting constraints for constructing Pi+1 are found by the following steps:
• Step 1. Add constraints hj, hj+1, . . . repeatedly until a k is found s.t. |Pi ∩ hj ∩ . . . ∩ hk ∩ Zn| / |Pi ∩ Zn| ≤ rmax or k = m.
• Step 2. If |Pi ∩ hj ∩ . . . ∩ hk ∩ Zn| / |Pi ∩ Zn| ≥ rmin, then Pi+1 = Pi ∩ hj ∩ . . . ∩ hk has been found. Note that Pi+1 = Pl = P when k = m.
• Step 3. Otherwise, it indicates that hk over-cuts the solution space. Then we find an h′k = {⃗A′k⃗x ≤ b′k} (almost) parallel to hk s.t. rmin ≤ |Pi ∩ hj ∩ . . . ∩ h′k ∩ Zn| / |Pi ∩ Zn| ≤ rmax. At last, let Pi+1 = Pi ∩ hj ∩ . . . ∩ hk−1 ∩ h′k.
About the above steps, we may naturally ask:
• How to determine the value of |Pi ∩ hj ∩ . . . ∩ hk ∩ Zn| / |Pi ∩ Zn|?
Algorithm 2 samples lattice points Si in Pi and then approximates |Pi ∩ hj ∩ . . . ∩ hk ∩ Zn| / |Pi ∩ Zn| via |Si ∩ hj ∩ . . . ∩ hk| / |Si|. Since we aim to obtain Pi+1 s.t. |Pi+1 ∩ Zn| / |Pi ∩ Zn| is close to 1/2, it is not necessary to approximate very accurately with a mass of samples.
• How to find h′k in Step 3?
Lines 13 to 16 in Algorithm 2 form the loop for finding h′k. In the first iteration of the loop, it sets ⃗A′k = ⃗Ak and searches for the minimum b′k ≥ bk s.t. |Si ∩ H ∩ {⃗A′k⃗x ≤ b′k}| / |Si| ≥ rmin. We then compute and sort D = {d : d = ⃗Ak⃗p, ∀⃗p ∈ Si ∩ H}. Thus searching for b′k is equivalent to scanning D, whose time complexity is O(|D|) = O(|Si ∩ H|) = O(s). Note that there may be no feasible b′k, since for a certain y we may have |Si ∩ H ∩ {⃗Ak⃗x ≤ y}| / |Si| > rmax while |Si ∩ H ∩ {⃗Ak⃗x < y}| / |Si| < rmin. For example, |{x1 + x2 = 0.99} ∩ Z2| = 0 while |{x1 + x2 = 1} ∩ Z2| = ∞. Therefore, if our algorithm fails to find a feasible b′k once, it generates ⃗A′k = {a′k1, . . . , a′kn} by disturbing ⃗Ak, i.e., a′ki ∈ [aki − µ, aki + µ], where µ ∈ R is a small constant. In practice, the loop in lines 13–16 (Algorithm 2) usually finds a feasible b′k by disturbing ⃗A′k once, occasionally twice, though in theory the loop may not stop in the worst case.
With respect to the size of l, it is easy to obtain the following result, as every Pi+1 is constructed by nearly halving Pi.
Theorem 2. The length l of the chain P0, . . . , Pl constructed by Algorithm 2 is in O(log2 |P0 ∩ Zn|) in the worst case.
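A small sketch of how the chain of Equation (2) is exploited (the function names are ours): each ratio is estimated from the lattice samples drawn in Pi, and the estimates telescope with the exact count of the outer box P0:

import numpy as np

def estimate_ratio(samples, A_next, b_next):
    """r_i ~ |P_{i+1} ∩ Z^n| / |P_i ∩ Z^n|: the fraction of lattice samples
    of P_i that also satisfy the constraints of P_{i+1}."""
    inside = sum(bool(np.all(A_next @ q <= b_next)) for q in samples)
    return inside / len(samples)

def telescoping_count(count_P0, ratios):
    """Equation (2): |P ∩ Z^n| ~ |P0 ∩ Z^n| * prod_i r_i."""
    return count_P0 * float(np.prod(ratios))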
Dynamic Stopping Criterion
Approximating |P ∩ Zn| is factorized into approximating a series of ratios |Pi+1 ∩ Zn| / |Pi ∩ Zn| by Equation (2). Naturally, we can approximate these ratios via |Pi+1 ∩ Si| / |Si|, where Si is a set of lattice points sampled in Pi by Algorithm 1. A key question arises:
• How many sample points are sufficient to approximate |P ∩ Zn| with certain guarantees, like an (ϵ, δ)-bound?
Let Ri denote the random variable of |Pi+1 ∩ Si| / |Si|, and R = ∏_{i=0}^{l−1} Ri. Note that the Ri are mutually independent, since for each Si the random walk starts from the origin ⃗0 ∈ T(P′) (see Algorithm 1). Thus we have the variance of R:
Var(R) = Var(∏ Ri) = E((∏ Ri)^2) − [E(∏ Ri)]^2 = ∏ E(Ri^2) − [∏ E(Ri)]^2 = ∏ [Var(Ri) + E(Ri)^2] − ∏ [E(Ri)]^2. (3)
From the Chebyshev inequality, we have:
Prob(|R − E(R)| / E(R) ≥ ϵ) ≤ Var(R) / (ϵ^2 · E(R)^2) ≤ δ ⇒ Var(R) ≤ δ · ϵ^2 · E(R)^2. (4)
Equation (4) shows when the approximate result lies in [(1 − ϵ)|P ∩ Zn|, (1 + ϵ)|P ∩ Zn|] with probability at least 1 − δ, i.e., satisfies an (ϵ, δ)-bound. Thus we adopt Equation (4) as the stopping criterion of the approximation.
Given a set of samples Si, let ri = |Pi+1 ∩ Si| / |Si| and r = ∏_{i=0}^{l−1} ri. We use the ri and r to approximately represent the E(Ri) and E(R), respectively (Lemma 3 shows that such substitutions are safe). Then we split Si into N groups {Sij} of the same size s/γ, where N = |Si| · γ / s is the number of groups. Let rij = |Pi+1 ∩ Sij| / |Sij| and let Rij denote the random variable of rij. If the Rij are mutually independent and follow the same distribution, we have
Var(Ri) = Var((Σ_{j=1}^{N} Rij) / N) = (1/N^2) Σ_{j=1}^{N} Var(Rij) = Var(Ri1) / N ≈ (1/N) Σ_{j=1}^{N} (rij − ri)^2 / (N − 1).
Note that the Rij can be made exactly mutually independent if the random walks start from a fixed point; however, this is not actually necessary. Let vi = Σ_j (rij − ri)^2 / (N(N − 1)). As a result, an approximate stopping criterion is obtained:
Var(R) ≈ ∏ (vi + ri^2) − r^2 ≤ δ · ϵ^2 · r^2. (5)
The pseudocode of the main framework is presented as Algorithm 3. It first generates s sample points for each Pi and then computes the ri, vi, r and v. If Equation (5) is satisfied, it returns |P0 ∩ Zn| · r; otherwise, it repeats the above steps.

Algorithm 3: Approximate Lattice Counts
Input: P. Parameter: ϵ, δ, s, γ. Output: |P ∩ Zn|
1: (l, {Pi}, {Si}) ← Subdivision(P, s)
2: N ← 0
3: repeat
4:   N ← N + γ
5:   for i = 0 to l − 1 do
6:     Si ← Si ∪ Sample(Pi, s)
7:     ri ← |Pi+1 ∩ Si| / |Si|
8:     Split Si into N groups Si1, . . . , SiN
9:     rij ← |Pi+1 ∩ Sij| / |Sij|, j ∈ {1, . . . , N}
10:    vi ← Σ_{j=1}^{N} (rij − ri)^2 / (N(N − 1))
11:  end for
12:  r ← ∏_{i=0}^{l−1} ri
13:  v ← ∏_{i=0}^{l−1} (vi + ri^2) − r^2
14: until v ≤ δ · ϵ^2 · r^2
15: return |P0 ∩ Zn| · r
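A compact sketch of the stopping test of Algorithm 3, lines 7–14 (our code; `groups[i]` is assumed to hold the per-group ratios rij obtained for polytope Pi, with at least two groups per level):

import numpy as np

def stopping_test(groups, eps, delta):
    """Return (criterion satisfied?, r), following Equations (3)-(5)."""
    r, prod_term = 1.0, 1.0
    for rij in groups:
        rij = np.asarray(rij, dtype=float)
        N = len(rij)
        ri = rij.mean()                               # ratio estimate r_i
        vi = ((rij - ri) ** 2).sum() / (N * (N - 1))  # estimate of Var(R_i)
        r *= ri
        prod_term *= vi + ri ** 2
    var_R = prod_term - r ** 2                        # Equation (3)
    return var_R <= delta * eps ** 2 * r ** 2, r      # Equation (5)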
Lemma 2. lim_{|Si|→∞} ri = |Pi+1 ∩ Zn| / |Pi ∩ Zn| and lim_{|S|→∞} r = |P ∩ Zn| / |P0 ∩ Zn|, if Hit-and-run is a uniform sampler.
Proof. Note that sampling uniformly in Pi and then counting the number of samples that fall in Pi+1 is a Bernoulli trial.
Lemma 3. Equations (4) and (5) are approximately equivalent, regardless of the difference between r and E(R).
Proof. Let c = r/E(R) and ci = ri/E(Ri) represent the differences. Since ri = ci · E(Ri) = E(ci Ri), we have vi ≈ Var(ci Ri) = ci^2 · Var(Ri). Equation (4) can be transformed into
δ · ϵ^2 ≥ ∏(Var(Ri) + E(Ri)^2) / E(R)^2 − 1 ≈ (c^2 / E(R)^2) · ∏ [(Var(Ri) + E(Ri)^2) / ci^2] − 1 = ∏(vi + ri^2) / r^2 − 1. (6)
Note that Equation (6) is the same as Equation (5).
From Lemmas 2 and 3 and Equation (5), we have:
Theorem 3. The output of Algorithm 3 is approximately bounded by an (ϵ, δ)-bound.

Implementation Details
The setting of the parameters in Algorithms 1, 2 and 3 is listed below, with explanations:
• ϵ = 0.2 and δ = 0.1. They determine the bounds on the counts computed by our approach. Experimental results with more pairs of values, such as (0.5, 0.1) and (0.1, 0.05), can be found in the Evaluation section.
• w = n. It controls the number of Hit-and-run steps per real sample point. Earlier theoretical results (Lovász and Vempala 2006a) showed that the upper bound on w in the Markov chain is O(n^2) for a random initial point and O(n^3) for a fixed initial point. However, further numerical studies (Lovász and Deák 2012; Ge et al. 2018) reported that w = n is sufficient on polytopes with dozens of dimensions. They also tried w = 2n and w = n ln n, but observed no visible improvement. Thus we adopt w = n.
• s = 2/(δ · ϵ^2). It controls the number of samples in one round. We select this value, as 1/(δ · ϵ^2) uniform samples are sufficient to approximate ri within an (ϵ, δ)-bound; e.g., the default ϵ = 0.2, δ = 0.1 gives s = 2/(0.1 · 0.04) = 500 samples per round. Note that the total number of samples is determined by the stopping criterion instead of s.
• rmin = 0.4, rmax = 0.6, µ = 0.005 and γ = 10.
In Algorithm 2, P0 = Rect(P) can easily be computed by LP or ILP. Naturally, LP is cheaper than ILP, but the rectangle generated by ILP is smaller. In practice, the cost of ILP is usually negligible compared to the entire counting algorithm. In Algorithms 2 and 3, samples in Si ∩ Pi+1 can be reutilized in Si+1; thus we only have to generate s − |Si ∩ Pi+1| new samples for Si+1. (Ge et al. 2018) proved that this technique has no side effect on the errors when approximating the ratios.

Evaluation
We implemented a prototype tool called APPROXLATCOUNT (ALC) in C++; the source code of ALC and the experimental data, including benchmarks, can be found at https://github.com/bearben/ALC. Furthermore, we integrated ALC into a DPLL(T)-based #SMT(LA) counter (Ge et al. 2018). Experiments were conducted on Intel(R) Xeon(R) Gold 6248 @ 2.50GHz CPUs with a time limit of 3600 seconds and a memory limit of 4 GB per benchmark. The setting of the parameters of ALC has already been presented and discussed in the Implementation Details section.

Figure 3: Quality of counts computed by ALC with different ϵ and δ on cases whose exact counts are available: (a) ϵ = 0.5, δ = 0.1; (b) ϵ = 0.2, δ = 0.1; (c) ϵ = 0.1, δ = 0.05. Each case was experimented 10 times, i.e., 10 data points per case. The average running times of the experiments in (a), (b), (c) are 0.19s, 0.81s and 5.73s respectively.

Figure 4: Performance comparisons among tools on different families of benchmarks: (a) random polytopes; (b) rotated thin rectangles; (c) application instances.
The benchmark set consists of three parts:
• Random Polytopes: We generated 726 random polytopes with three parameters (m, n, λ), where n ranges from 3 to 100, m ∈ {n/2, n, 2n} and λ ∈ {2^0, 2^1, . . . , 2^10}. A benchmark is of the form {A⃗x ≤ ⃗b, −λ ≤ xi ≤ λ}, where aij ∈ [−10, 10] ∩ Z and bi ∈ [−λ, λ] ∩ Z.
• Rotated Thin Rectangles: To evaluate the quality of the approximations on 'thin' polytopes, 180 n-dimensional rectangles {−1000 ≤ x1 ≤ 1000, −τ ≤ xi ≤ τ, i ≥ 2} were generated and then rotated randomly, where n ∈ {3, . . . , 8} and τ ∈ {0.1, 0.2, . . . , 2.9, 3.0}.
• Application Instances: We adopted 4131 benchmarks (Ge and Biere 2021) from program analysis and simple temporal planning. The domain of the variables is [−32, 31].
We compared our tool ALC with the state-of-the-art integer counter BARVINOK (Verdoolaege et al. 2007). On random polytopes, we further compared our approach with the state-of-the-art propositional model counters APPROXMC4 (Soos and Meel 2019), CACHET (Sang et al. 2004), and GANAK (Sharma et al. 2019). We used the default settings of APPROXMC4 (ϵ = 0.8, δ = 0.2) and GANAK (δ = 0.05). Note that they require CNF formulas as inputs. Thus we first translated the linear constraints into bitvector formulas, and then translated them into propositional CNF with BOOLECTOR (Niemetz et al. 2018). Translation times are not included in the running times.

Table 1: More statistics of performance on random polytopes with respect to n (33 cases for each n, experiment once per case).
n:                    3     4     5     6     7     8    10    12    14    15    20    30    40    50    60    70    80
ALC #solve:          33    33    33    33    33    33    33    30    29    28    22    14    11    10     5     4     1
ALC avg. t̄:        0.03  0.05  0.07  0.19  0.65  0.50  85.6  42.6  48.8   151   156   286   249  1057   515  1684  3090
ALC avg. l̄:         1.6   3.2   4.2   6.1   7.4   8.2  12.3  13.4  15.8  17.6  21.6  24.7  31.6  40.4  31.4  40.3    44
Barvinok #solve:     33    33    33    33    22    11     0     0     0     0     0     0     0     0     0     0     0
Barvinok avg. t̄:   0.22  0.24  2.72   105  1052  1158     —     —     —     —     —     —     —     —     —     —     —
Cachet #solve:       27    18    17    13    11     9     6     3     2     0     0     0     0     0     0     0     0
Cachet avg. t̄:      161  71.8   537   396   291   434   601   483  2159     —     —     —     —     —     —     —     —
Ganak #solve:        25    17    13    11     9     8     5     3     3     0     0     0     0     0     0     0     0
Ganak avg. t̄:      68.7   256   9.4   187  27.6   198   169   390  2909     —     —     —     —     —     —     —     —
ApproxMC4 #solve:    33    33    32    22    19    13    10     5     4     3     0     0     0     0     0     0     0
ApproxMC4 avg. t̄:  1.16  9.78  81.3  98.4   136   121   580   608   673  1001     —     —     —     —     —     —     —

Figures 3 (a) (b) (c) show the relative errors (y-axis) of the counts computed by ALC with different (ϵ, δ) settings. The experiments were conducted on random polytopes (cases 91 to 280) and rotated thin rectangles (cases 1 to 90) whose exact counts could be obtained by BARVINOK and CACHET. We ran ALC 10 times on each case, so there are 10 data points per case and 2800 data points per figure. We observe that the counts computed by ALC are bounded well. For example, in Figure 3 (b), relative errors should lie in [0.8, 1.2] with probability at least 90% with ϵ = 0.2, δ = 0.1.
Figures 4 (a) (b) (c) compare the running times of the tools on the different families of benchmarks. In general, ALC significantly outperforms the other tools. On random polytopes, more results with respect to n are listed in Table 1, which will be discussed later. Figure 4 (b) presents the results on rotated thin rectangles. Note that none of the cases in this family was solved within the time limit by APPROXMC4, CACHET or GANAK, due to the larger coefficients and variable domains. We observe jumps in the running times of BARVINOK as n increases. Figure 4 (c) presents the results of the comparisons over the application instances, which are all SMT(LA) formulas.
Since we only integrated ALC and BARVINOK into the #SMT(LA) counter, we did not compare with the other tools here. Note that 'STN' is the family of simple temporal planning benchmarks; the others are all generated by analyzing C++ programs. We find that most benchmarks were solved within one second by both tools, except the 'shellsort' and 'STN' instances. On the 'shellsort' instances, ALC significantly outperforms BARVINOK. On the 'STN' instances, ALC eventually gains the upper hand as the dimensionality increases.
Table 1 lists the number of solved cases and the average running times (excluding timeout cases) with respect to n. For each n, there are 33 benchmarks. We find that ALC can handle random polytopes with up to 80 dimensions. 'Avg. l̄' denotes the average length of the polytope chain (excluding timeout cases), which grows nearly linearly. Note that APPROXMC4, CACHET and GANAK can solve cases with more variables (up to 15) than BARVINOK here, due to the benchmarks with λ = 1, i.e., −1 ≤ xi ≤ 1, which favor propositional model counters.

Related Works
There are a few related works which also investigate the approximate integer solution counting problem. In (Kannan and Vempala 1997), an algorithm for sampling lattice points in a polytope was introduced. Similar to Algorithm 1, it considers an enlarged polytope P′′ for real-point sampling and then rejects samples outside P, where P′′ = {⃗x : ⃗Ai⃗x ≤ bi + (c + √(2 log m)) · |⃗Ai|}, c = √(ln(4/ε)), and ε is the variational difference between the uniform density and the probability density of the real-point sampling. As a result, they proved that there exists a polynomial-time algorithm for nearly uniform lattice sampling if bi ∈ Ω(n√m |⃗Ai|). However, in practice, such a condition is often too loose. For example, the benchmarks considered in the Evaluation section are usually smaller, i.e., bi < n√m |⃗Ai|, especially when n ≥ 10, which makes sampling more difficult. Also note that the P′ computed by our approach is tighter than P′′; thus the probability of rejection when sampling in P′ is lower than in P′′. In addition, at the time that work was published, the best real-point sampler had a time complexity of O∗(n^5); nowadays, the state-of-the-art real-point sampler runs in O∗(n^3).
A more recent work (Ge and Biere 2021) introduced factorization preprocessing techniques to reduce the dimensionality of polytopes. Suppose a polytope P has been factorized into F1, . . . , Fk, with |P ∩ Zn| = ∏_{i=1}^{k} |Fi ∩ Z^{ni}|, where ni represents the dimensionality of Fi. To approximate |P ∩ Zn| with given ϵ, δ, we have to approximate the counts in the Fi with smaller ϵ′, δ′. This indicates that factorization techniques integrated with ALC may not be as effective as with exact counters.

Conclusion
In this paper, a new approximate lattice counting framework is introduced, with a new lattice sampling method and a dynamic stopping criterion. Experimental results show that our algorithm significantly outperforms the state-of-the-art counters, with low errors. Since our sampling method is limited by the Hit-and-run random walk, which is only a nearly uniform sampler, we are interested in an efficient method to test the uniformity of samplers in the future.

Acknowledgments
This work is supported by the National Key R&D Program of China (2022ZD0116600). Cunjing Ge is supported by the National Natural Science Foundation of China (62202218), and is sponsored by the CCF-Huawei Populus Grove Fund (CCF-HuaweiFM202309).

References
Barvinok, A. I. 1993.
Computing the Volume, Counting Integral Points, and Exponential Sums. Discrete & Computational Geometry, 10: 123–141.
Barvinok, A. I. 1994. Computing the Ehrhart Polynomial of a Convex Lattice Polytope. Discrete & Computational Geometry, 12: 35–48.
Berbee, H. C. P.; Boender, C. G. E.; Kan, A. H. G. R.; Scheffer, C. L.; Smith, R. L.; and Telgen, J. 1987. Hit-and-run algorithms for the identification of nonredundant linear inequalities. Math. Program., 37(2): 184–207.
Cousins, B.; and Vempala, S. S. 2015. Bypassing KLS: Gaussian Cooling and an O*(n3) Volume Algorithm. In Servedio, R. A.; and Rubinfeld, R., eds., Proc. STOC, 539–548. ACM.
Cousins, B.; and Vempala, S. S. 2018. Gaussian Cooling and O*(n3) Algorithms for Volume and Gaussian Volume. SIAM J. Comput., 47(3): 1237–1273.
Cryan, M.; Dyer, M. E.; Goldberg, L. A.; Jerrum, M.; and Martin, R. A. 2002. Rapidly Mixing Markov Chains for Sampling Contingency Tables with a Constant Number of Rows. In Proc. FOCS, 711–720. IEEE Computer Society.
Desalvo, S.; and Zhao, J. 2020. Random sampling of contingency tables via probabilistic divide-and-conquer. Comput. Stat., 35(2): 837–869.
Dyer, M. E.; Frieze, A. M.; and Kannan, R. 1991. A Random Polynomial Time Algorithm for Approximating the Volume of Convex Bodies. J. ACM, 38(1): 1–17.
Dyer, M. E.; Frieze, A. M.; Kannan, R.; Kapoor, A.; Perkovic, L.; and Vazirani, U. V. 1993. A Mildly Exponential Time Algorithm for Approximating the Number of Solutions to a Multidimensional Knapsack Problem. Comb. Probab. Comput., 2: 271–284.
Gamarnik, D.; and Katz, D. 2010. A deterministic approximation algorithm for computing the permanent of a 0, 1 matrix. J. Comput. Syst. Sci., 76(8): 879–883.
Ge, C.; and Biere, A. 2021. Decomposition Strategies to Count Integer Solutions over Linear Constraints. In Zhou, Z., ed., Proc. of IJCAI, 1389–1395. ijcai.org.
Ge, C.; Ma, F.; Ma, X.; Zhang, F.; Huang, P.; and Zhang, J. 2019. Approximating Integer Solution Counting via Space Quantification for Linear Constraints. In Kraus, S., ed., Proc. of IJCAI, 1697–1703. ijcai.org.
Ge, C.; Ma, F.; Zhang, P.; and Zhang, J. 2018. Computing and estimating the volume of the solution space of SMT(LA) constraints. Theor. Comput. Sci., 743: 110–129.
Geldenhuys, J.; Dwyer, M. B.; and Visser, W. 2012. Probabilistic symbolic execution. In Proc. of ISSTA, 166–176.
Harviainen, J.; Röyskö, A.; and Koivisto, M. 2021. Approximating the Permanent with Deep Rejection Sampling. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y. N.; Liang, P.; and Vaughan, J. W., eds., Proc. of NeurIPS, 213–224.
Huang, A.; Lloyd, L.; Omar, M.; and Boerkoel, J. C. 2018. New Perspectives on Flexibility in Simple Temporal Planning. In Proc. of ICAPS, 123–131.
Jerrum, M.; and Sinclair, A. 1989. Approximating the Permanent. SIAM J. Comput., 18(6): 1149–1178.
Kannan, R.; and Vempala, S. 1997. Sampling Lattice Points. In Proc. of STOC, 696–700.
Loera, J. A. D.; Hemmecke, R.; Tauzer, J.; and Yoshida, R. 2004. Effective lattice point counting in rational convex polytopes. J. Symb. Comput., 38(4): 1273–1302.
Lovász, L. 1999. Hit-and-run mixes fast. Math. Program., 86(3): 443–461.
Lovász, L.; and Deák, I. 2012. Computational results of an O*(n4) volume algorithm. European Journal of Operational Research, 216(1): 152–161.
Lovász, L.; and Vempala, S. 2006a. Hit-and-Run from a Corner. SIAM J. Comput., 35(4): 985–1005.
Lovász, L.; and Vempala, S. S. 2006b. Simulated annealing in convex bodies and an O*(n4) volume algorithm. J. Comput. Syst. Sci., 72(2): 392–417.
Luckow, K. S.; Pasareanu, C. S.; Dwyer, M. B.; Filieri, A.; and Visser, W. 2014. Exact and approximate probabilistic symbolic execution for nondeterministic programs. In Proc. of ASE, 575–586.
Niemetz, A.; Preiner, M.; Wolf, C.; and Biere, A. 2018. Btor2, BtorMC and Boolector 3.0. In Chockler, H.; and Weissenbacher, G., eds., Proc. of CAV, volume 10981 of Lecture Notes in Computer Science, 587–595. Springer.
Pesant, G. 2016. Counting-Based Search for Constraint Optimization Problems. In Proc. of AAAI, 3441–3448.
Sang, T.; Bacchus, F.; Beame, P.; Kautz, H. A.; and Pitassi, T. 2004. Combining Component Caching and Clause Learning for Effective Model Counting. In Proc. of SAT.
Sharma, S.; Roy, S.; Soos, M.; and Meel, K. S. 2019. GANAK: A Scalable Probabilistic Exact Model Counter. In Kraus, S., ed., Proc. of IJCAI, 1169–1176. ijcai.org.
Soos, M.; and Meel, K. S. 2019. BIRD: Engineering an Efficient CNF-XOR SAT Solver and Its Applications to Approximate Model Counting. In Proc. of AAAI, 1592–1599. AAAI Press.
Valiant, L. G. 1979. The Complexity of Enumeration and Reliability Problems. SIAM J. Comput., 8(3): 410–421.
Verdoolaege, S.; Seghir, R.; Beyls, K.; Loechner, V.; and Bruynooghe, M. 2007. Counting Integer Points in Parametric Polytopes Using Barvinok's Rational Functions. Algorithmica, 48(1): 37–66.
Zanarini, A.; and Pesant, G. 2007. Solution Counting Algorithms for Constraint-Centered Search Heuristics. In Proc. of CP, 743–757.
2024
892
18,731
Composing Biases by Using CP to Decompose Minimal Functional Dependencies for Acquiring Complex Formulae
Ramiz Gindullin1,2*, Nicolas Beldiceanu1,2, Jovial Cheukam-Ngouonou1,2,4†, Rémi Douence1,2,3, Claude-Guy Quimper4
1IMT Atlantique, Nantes, France; 2LS2N, Nantes, France; 3INRIA, Nantes, France; 4Université Laval, Quebec City, Canada
[email protected], [email protected], [email protected], [email protected], [email protected]
*Funded by the EU ASSISTANT project no. 101000165. †Funded by the ANR AI@IMT project and by Laval University.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Given a table with a minimal set of input columns that functionally determines an output column, we introduce a method that tries to gradually decompose the corresponding minimal functional dependency (mfd) to acquire a formula expressing the output column in terms of the input columns. A first key element of the method is to create sub-problems that are easier to solve than the original formula acquisition problem, either because it learns formulae with fewer input parameters, or because it focuses on formulae of a particular class, such as Boolean formulae; as a result, the acquired formulae can mix different learning biases such as polynomials, conditionals or Boolean expressions. A second key feature of the method is that it can be applied recursively to find formulae that combine polynomial, conditional or Boolean subterms in a nested manner. The method was tested on data for eight families of combinatorial objects; new conjectures were found that were previously unattainable. The method often creates conjectures that combine several formulae into one with a limited number of automatically found Boolean terms.

1 Introduction
While the problem of synthesising formulae from data (Alur et al. 2018) is central to many areas such as programming by example (e.g. finding formulae in spreadsheets (Gulwani 2011; Gulwani, Harris, and Singh 2012; Paramonov et al. 2017)), program verification (e.g. identifying loop invariants (Srivastava, Gulwani, and Foster 2013)), and conjecture generation (e.g. proposing bounds for combinatorial objects (Aouchiche et al. 2005; Larson and Cleemput 2016; Beldiceanu et al. 2022)), acquisition techniques are limited when the learning bias, i.e. "the set of assumptions that the learner uses to predict outputs of given inputs" (Wikimedia Commons 2003), is vast. In this paper, these assumptions correspond to the type of formulae we acquire.
Besides the recent work of S.-M. Udrescu et al. (Udrescu et al. 2020), which applies to continuous functions, most approaches for acquiring discrete functions rely on a grammar to define a domain-specific learning bias. They use a generate-and-test method to produce candidate formulae of increasing complexity. Several improvements were made to limit the combinatorial explosion of candidate formulae. They include the use of probabilistic grammars or statistical methods (Brence, Todorovski, and Džeroski 2021) to focus on the more likely candidate formulae first, the generation of partially instantiated formulae whose coefficients are determined by a CP or a MIP model (Ligeza et al. 2020; Beldiceanu et al. 2022), and the application of metaheuristics (Hansen and Caporossi 2000). However, their main weakness is twofold: first, they usually deal with formulae from a restricted domain-specific learning bias; second, they try to directly acquire a formula that mentions all the relevant input parameters at once.
Contribution. We generally do not know how to effectively combine learning biases for acquiring formulae. A system that knows how to solve problems with learning bias (A) and another system that knows how to solve problems with learning bias (B) could be combined to handle not only problems with learning bias (A) or (B), but also problems that combine both learning biases in a nested manner. The problem is how to find decomposition methods that facilitate the discovery and combination of multiple learning biases. The proposed method partly answers this question through the following observation: although many formulae are complex, i.e. they involve various operators in different sub-terms of a same formula, some parameters only appear in a few sub-terms, and some sub-terms have a very specific form. We show that by analysing an mfd of a table, while relying on the input columns and the output column, it is possible to identify different sub-terms of the formula to be learnt and the operators that connect these sub-terms. We carry out this analysis by using Constraint Programming (CP) to solve certain sub-problems that allow us to decompose the formula we are looking for into its sub-terms, without yet knowing the formula to be found. In Sect. 2, we give concrete examples of complex formulae obtained using our new decompositions, and we describe the decompositions in Sect. 3. In Sect. 4, we evaluate the performance of our contribution.

2 Context, Motivation, Running Examples, and Decomposition Types
This section first describes the context behind the decomposition techniques introduced in the paper. Then, it gives typical formulae found using these techniques. Finally, it defines the types of decomposition introduced.

2.1 Context and Motivation
The paper (Beldiceanu et al. 2022) describes a CP system for finding conjectures about sharp bounds on characteristics of combinatorial objects. Their learning process is based on tables, each entry of which is a positive example, specifying the sharp lower (resp. upper) bound of a characteristic of a combinatorial object based on a combination of values of other characteristics. Therefore, the acquisition process acquires equalities (as the bounds are sharp), which in the end correspond to inequalities, since the generated conjectures deal with lower (resp. upper) bounds. As all the entries in a table are noise-free, they acquire formulae that match all table entries. Their framework uses three biases:
(i) Formulae mentioning one or two polynomials, e.g. 'x^3 + y·z', 'max(x·y, 2·z^2)'; in the case of two polynomials, these can be connected by a usual arithmetic operator.
(ii) Simple conditional formulae of the form (cond ? x : y), which denotes x if condition cond holds and y otherwise; the condition, the 'then' part and the 'else' part each have a simple form: each part mentions at most 2 input parameters and one coefficient, e.g. '(⌈x/y⌉ = 2 ? 2·y : x)'.
(iii) Boolean-arithmetic formulae consisting of arithmetic conditions combined by a commutative operator such as '∧' or '+', e.g. '[x ≥ 2] + [(y − x) ≥ 2]', where '[' and ']' stand for Iverson brackets.
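To illustrate biases (ii) and (iii), here is a minimal sketch of how the two example formulae above evaluate (the function names are our own; Iverson brackets map conditions to 0/1):

import math

def iv(cond):
    """Iverson bracket: [cond] is 1 when cond holds and 0 otherwise."""
    return 1 if cond else 0

def bias_iii_example(x, y):
    """Bias (iii): '[x >= 2] + [(y - x) >= 2]'."""
    return iv(x >= 2) + iv((y - x) >= 2)

def bias_ii_example(x, y):
    """Bias (ii): '(ceil(x/y) = 2 ? 2*y : x)'."""
    return 2 * y if math.ceil(x / y) == 2 else x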
Despite using these biases (i)–(iii), the system of (Beldiceanu et al. 2022) missed many conjectures. Rather than extending these biases further or introducing new biases, we combine these biases to catch complex formulae. To avoid a combinatorial explosion, this is not done by assembling a merged grammar associated with the different biases, but rather by devising recursive decomposition techniques that identify the sub-terms of a formula and how these sub-terms are connected by arithmetic operators. The base case of such a recursion belongs to biases (i)–(iii). The next section provides a first insight into how this is achieved.

2.2 Running Examples and Intuition of the Decomposition Technique
Conjectures (1)–(4) illustrate sharp lower bounds found by the decomposition method that could not be found before, as they were outside the scope of biases (i)–(iii). They are used as running examples and were proved in (Cheukam-Ngouonou et al. 2023), which stresses the fact that the decomposition method can find non-obvious conjectures that turn out to be true. In a first phase, the decomposition method identifies an incomplete formula, i.e. a formula for which some terms are still unknown and will be determined later in a second phase by applying the decomposition method recursively. In Conj. (1)–(4), the right-hand side of each inequality consists of labelled terms connected by one or two arithmetic operators (or a conditional), where:
• terms with no grey background refer to expressions matching biases (i)–(iii), found in the first phase;
• terms with a grey background are found in the second phase, when the decomposition is applied recursively.

Conjecture 1. Conj. (1) provides a sharp lower bound on the number of connected components c of a digraph where every vertex is adjacent to at least one arc, wrt its number of vertices v, the maximum number of vertices c inside a connected component, and the smallest number of vertices s in a strongly connected component (scc). The right-hand side of Inequality (1) consists of the terms (1.1) and (1.2), resp. referring to biases (i) and (iii), linked by a sum:
c ≥ ⌈v/c⌉ + [¬((2·s ≤ c) ∨ (s ≥ (v mod c = 0 ? c : v mod c)))]   (1)
(term (1.1) = ⌈v/c⌉, a binary function; term (1.2) = the Iverson-bracket summand, a Boolean term)
In the first phase, the decomposition method finds a formula of the form g1,1(v, c) + g1,2(v, c, s), where g1,1 has only 2 parameters, and where the codomain of g1,2 is the set {0, 1}; in the second phase, the method finds the functions g1,1 and g1,2.

Conjecture 2. Conj. (2) gives a sharp lower bound on c of a digraph wrt its number of sccs s, and the c and s characteristics introduced in Ex. 1. The term (2.1) is the isolated input parameter s and the term (2.2), i.e. ⌊c/s⌋, refers to bias (i). These terms are connected by an integer division rounded up:
c ≥ ⌈s / ⌊c/s⌋⌉   (2)
(term (2.1) = s, a unary function; term (2.2) = ⌊c/s⌋, a binary function)
In the first phase, the method finds the formula ⌈g2,1(s) / g2,2(c, s)⌉, with g2,1(s) = s, and where g2,2 has only 2 parameters; in the second phase, the method finds the function g2,2 itself.
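As a sanity check of Conjecture (1), the following sketch evaluates its right-hand side under our reading of the flattened symbols (the over/underbars distinguishing the characteristics were lost in extraction; we take the arguments to be the number of vertices, the maximum number of vertices inside a connected component, and the smallest number of vertices in an scc). On the nine table entries used later in Example 2, e.g. (9, 3, 2), it reproduces the stored sharp bound 4:

import math

def conj1_rhs(v, cmax, smin):
    """Right-hand side of Conjecture (1): ceil(v/cmax) plus a 0/1 term."""
    rem = v % cmax
    cond = (2 * smin <= cmax) or (smin >= (cmax if rem == 0 else rem))
    return math.ceil(v / cmax) + (0 if cond else 1)

assert conj1_rhs(9, 3, 2) == 4
assert conj1_rhs(9, 3, 1) == 3
assert conj1_rhs(9, 5, 4) == 2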
Conjecture 3. Conj. (3) depicts a sharp lower bound on the maximum number of vertices s inside an scc of a digraph wrt the s and s characteristics previously introduced. Within Conj. (3), the term (3.1) is a conditional expression, i.e. bias (ii), the term (3.2) refers to a unary function, and the term (3.3), i.e. [s = 1], corresponds to a Boolean expression, i.e. bias (iii). These terms are connected by the division rounded up and the sum operators:
s ≥ ⌈(v = s ? v : v − s) / (s − 1 + [s = 1])⌉   (3)
(term (3.1) = (v = s ? v : v − s), a binary function as a conditional; term (3.2) = s − 1, a unary function; term (3.3) = [s = 1], a Boolean term)
In the first phase, the decomposition method finds a formula of the form ⌈g3,1(v, s) / (g3,2(s) + g3,3(s))⌉, with g3,2(s) = s − 1, where g3,1 has only 2 parameters, and where the codomain of g3,3 is the set {0, 1}; in the second phase, the method finds g3,1 and g3,3.

Conjecture 4. Conj. (4) depicts a sharp lower bound on the maximum number of vertices c inside a connected component of a digraph wrt the v, c, c and s characteristics previously introduced. The right-hand side of Conj. (4) is a complex conditional expression outside the scope of bias (ii), as its 'else' part is too complicated. It consists of three parts:
• a simple condition v = c·c, denoted by (4.1);
• a 'then' part, c, denoted by (4.2);
• an 'else' part, max(s, ⌈(v − c)/(c − 1)⌉), referring to a complex term labelled (4.3).
c ≥ (v = c·c ? c : max(s, ⌈(v − c)/(c − 1)⌉))   (4)
The method first finds (v = g4,1(c, c) ? g4,2(c) : g4,3(v, c, c, s)) with g4,1(c, c) = c·c and g4,2(c) = c; then it finds the function g4,3 by using the method in a recursive way:
• It finds g4,3(v, c, c, s) = max(g4,3,1(s), g4,3,2(v, c, c)), with g4,3,1(s) = s, and where g4,3,2 has 3 parameters;
• It then finds the function g4,3,2 by using again the decomposition method in a recursive way:
– It first finds g4,3,2(v, c, c) = ⌈g4,3,2,1(v, c) / g4,3,2,2(c)⌉ with g4,3,2,2(c) = c − 1;
– It then finds the function g4,3,2,1(v, c) directly as v − c, a polynomial, i.e. bias (i).

Wrapping Up the Examples. We saw four conjectures acquired by our decomposition method; they cover all the types of decompositions we found. Each function introduced in the first phase, when searching for an incomplete formula, is simpler than the function initially looked for: (a) it has fewer input parameters, e.g. in Conj. 1, g1,1(v, c) = ⌈v/c⌉ does not mention the s parameter; or (b) its codomain is restricted to two values, e.g. in Conj. 1, the codomain of g1,2 is the set {0, 1}; or (c) it only has to hold when a given condition is met, e.g. the 'else' part in Conj. (4) only has to hold if v ̸= c·c.

2.3 Decompositions Definition
By decomposing the formula to acquire into an expression consisting of several simpler terms, as mentioned in Sect. 2.2, we aim to solve an easier acquisition problem. To this end, we introduce four ways of decomposing a function. Before formally defining them, we explain each of them intuitively.
• [Adding a Boolean expression] The first decomposition breaks down a function into the sum of a function with fewer input parameters and a function whose codomain is the set {0, 1}. As shown in Conj. (1), this decomposition was motivated by the possibility of approximating a bound with an error margin of at most 1.
• [Isolating a parameter] The second decomposition is based on the idea that one may find a formula in which an input parameter only occurs in the outer term of the formula, where no other input parameter is used. In Conj. (2), the parameter s only occurs in the numerator of the top-level binary function 'integer division rounded up'.
• [Isolating a parameter and using a 0-1 slack] The third decomposition generalises the second decomposition by introducing a 0-1 slack term that can refer to any subset of the input parameters.
In Conj. (3), the input parameter s does not occur in the numerator of the formula; indeed, it is only mentioned in the denominator of the top-level function 'integer division rounded up'.
• [Introducing a conditional] The fourth decomposition is based on the intuition that it may be easier to find a formula that applies to a subset of the table entries rather than to all of them. The fourth decomposition divides the table entries into a 'then' and an 'else' set using a simple condition; then, for each set, we have to identify a corresponding formula. We use a small set of predefined formulae with a subset of the condition's parameters for the 'then' set, while imposing no restriction on the 'else' set. Conj. (4) illustrates such a conditional decomposition.

Problem. Given a table tab[1..nrow, 1..ncol + 1] of integer values, consisting of nrow rows and ncol + 1 columns, where columns 1, 2, . . . , ncol form an mfd determining column ncol + 1, we address the following question: how to decompose the problem of discovering a function g satisfying the following set of equalities
∀j ∈ [1, nrow] : tab[j, ncol + 1] = g(tab[j, 1], . . . , tab[j, ncol])   (5)
into a set of easier subproblems requiring finding a limited number of functions satisfying one of the decompositions (6)–(9) of Def. 1, which we now introduce.

Definition 1 (Decomposition Types)
∀j ∈ [1, nrow] : tab[j, ncol + 1] = g1(tab[j, a1], . . . , tab[j, aℓ1]) + g2(tab[j, b1], . . . , tab[j, bℓ2])   (6)
∀j ∈ [1, nrow] : tab[j, ncol + 1] = g1(tab[j, a1], . . . , tab[j, aℓ1]) ⊕ g3(tab[j, b1])   (7)
∀j ∈ [1, nrow] : tab[j, ncol + 1] = g1(tab[j, a1], . . . , tab[j, aℓ1]) ⊕ (g2(tab[j, b1], . . . , tab[j, bℓ2]) + g3(tab[j, b1]))   (8)
∀j ∈ [1, nrow] : tab[j, ncol + 1] = (cond(tab[j, c1], . . . , tab[j, cℓ3]) ? g4(tab[j, d1], . . . , tab[j, dℓ4]) : g1(tab[j, a1], . . . , tab[j, aℓ1]))   (9)
1. g1 : Z^ℓ1 → Z refers to one of the biases (i)–(iii) or to a formula obtained by one of the four decompositions; a1, . . . , aℓ1 are distinct indices from [1, ncol], with ℓ1 ∈ [1, ncol − 1] for (6)–(8), as g1 does not involve all input parameters, and with ℓ1 ∈ [1, ncol] for (9). For (7)–(8), b1 is different from a1, . . . , aℓ1.
2. g2 : Z^ℓ2 → {0, 1} matches bias (iii); b1, . . . , bℓ2 are distinct indices from [1, ncol] with ℓ2 ∈ [1, ncol]. Note that in (6), the functions g1 and g2 may share some parameters.
3. g3 : Z → Z is one of the unary functions A·x^2 + B·x + C, ⌊(A·x^2 + B·x)/D⌋, ⌈(A·x^2 + B·x)/D⌉, min(A·x + B, C), max(A·x + B, C), (A·x + B) mod D, |A·x + B|, [((x + A) mod D) = C], [((x + A) mod D) ≥ C], [((x + A) mod D) ≤ C], with A, B, C ∈ Z and D ∈ Z+. To limit the search space, we consider unary functions involving up to 3 constants.
4. Within (7)–(8), ⊕ stands for one of the operators '+', '·', 'min', 'max', '⌊⌋', or '⌈⌉'.
5. Within (9), cond is a condition mentioning at most 3 parameters, i.e. ℓ3 ∈ [1, 3], of the form 'x = min(x)', 'x = y', 'x ≤ y', 'x mod y = 0', 'x = y·z', or 'A·x ≤ y', while g4 is one of the functions 'B', 'x', '[x = min(x)]', '[x > min(x)]', 'x·y', '[x = y + B]', or 'x = y + z', with A, B ∈ Z.
We introduce a fair number of functions and conditions for g3, g4 and cond in Def. 1. To avoid overfitting, they mention at most 3 coefficients, whose range is restricted to [−2, 2].
Example 1. Within Sect. 2.2, the right-hand parts of inequalities (1), (2), (3), and (4) resp. match the following decomposition types:
• (6) with g1(v, c) = ⌈v/c⌉ and g2(v, c, s) = [¬((2·s ≤ c) ∨ (s ≥ (v mod c = 0 ? c : v mod c)))].
• (7) with g1(c, s) = ⌊c/s⌋, g3(s) = s, and ⊕ = '⌈⌉'.
• (8) with g1(v, s) = (v = s ? v : v − s), g2(s) = [s = 1], g3(s) = s − 1, and ⊕ = '⌈⌉'.
• (9) with g1(v, c, c, s) = max(s, ⌈(v − c)/(c − 1)⌉) and g4(c) = c.

3 Implementing the Decompositions
The implementation combines (a) generation phases, which enumerate a limited number of alternatives for the type of functions used in a decomposition and for the parameters these functions mention, and (b) test phases, which verify certain simple conditions and solve a constraint model to find the values of the coefficients of the functions mentioned in the terms of the decomposition. We introduce some notation to refer to the intermediate structures used to analyse the consequence of eliminating an input parameter from an mfd.

Notation 1. Consider the table tab[1..nrow, 1..ncol + 1], in which the first ncol columns, the input parameters, form an mfd determining column ncol + 1, i.e. the output parameter.
• Let tab↗k[1..nrow, 1..ncol + 1] be the table obtained by sorting the rows of the table tab[1..nrow, 1..ncol + 1] in increasing lexicographic order wrt columns 1, . . . , k − 1, k + 1, . . . , ncol + 1, i.e. column k is skipped; to make the correspondence between the entries of the tables tab and tab↗k, let σk denote the permutation that maps the j-th row of the table tab to the σk(j)-th row of the table tab↗k (with j ∈ [1, nrow]).
• Let I denote the parameters associated with columns 1, 2, . . . , ncol of the table tab, and let Ij (with j ∈ [1, nrow]) represent the corresponding parameter values of the j-th row of the table tab, i.e. the values tab[j, 1], tab[j, 2], . . . , tab[j, ncol].
• Let I^k (with k ∈ [1, ncol]) denote the parameters associated with columns 1, . . . , k − 1, k + 1, . . . , ncol of the table tab, and let I^k_j (with j ∈ [1, nrow]) represent the corresponding parameter values tab[j, 1], . . . , tab[j, k − 1], tab[j, k + 1], . . . , tab[j, ncol].

3.1 Decomposition of Type (6) [adding a Boolean expression]
Question. We want to check whether there is a k ∈ [1, ncol] such that ∀j ∈ [1, nrow] : tab[j, ncol + 1] = g1(I^k_j) + g2(Ij), with g2 : Z^ℓ2 → {0, 1}; i.e. we seek an approximation with a maximum error of 1, using a function g1 without parameter k, and a correction term g2.
Steps for Finding a Decomposition of Type (6):
1. [Determining the parameters of g1] First, we successively select the k-th column (with k ∈ [1, ncol]) to remove from the input parameters of the function g1, and we apply Steps 2 to 4 for each candidate column k.
2. [Checking whether the codomain of g2 is the set {0, 1}] Second, provided the function g1 does not use the k-th input parameter selected in Step 1, we analyse how this affects the codomain of the function g2, even though the functions g1 and g2 are yet unknown. For each maximal interval of consecutive rows [ℓ, u] in the sorted table tab↗k[1..nrow, 1..ncol + 1] for which columns 1, . . . , k − 1, k + 1, . . . , ncol have the same values, we get the maximum max_{ℓ,u} and minimum min_{ℓ,u} values in the (ncol + 1)-th column, and we check that the difference max_{ℓ,u} − min_{ℓ,u} does not exceed 1. In other words, we test for the table tab↗k that, for each combination of identical input parameters from which the k-th input parameter is ignored, the corresponding output parameter varies by at most 1. When satisfied, this test ensures that the codomain of g2 is in {0, 1}. For each entry j ∈ [ℓ, u] of the table tab↗k, we set min↗k[j] = min_{ℓ,u}, where min↗k is a one-dimensional table whose entries range from 1 to nrow.
3. [Determining the values of g1(I^k_j) and g2(Ij)] Third, for each combination of input parameters of the functions g1 and g2, we compute their respective output values: ∀j ∈ [1, nrow], g1(I^k_j) = min↗k[σk(j)] and g2(Ij) = tab↗k[σk(j), ncol + 1] − min↗k[σk(j)].
4. [Using g1(I^k_j) and g2(Ij) for identifying the functions g1 and g2] We search for g1 by using the CP solvers associated with biases (i)–(iii) or by applying recursively one of the decompositions of this paper. To identify g2, we call the Boolean solver associated with bias (iii).

Example 2 (Illustrating the Search for a Decomposition of Type (6) for Conj. (1)). Part (A) of Table 1 provides 9 entries of the table tab with input parameters v, c, s and the lower bound of the output parameter c, previously introduced. Assume we skip the third column of the table tab, k = 3, shown in grey in the table tab↗3, i.e. we ignore column s.
• Parts (B1) and (B2) resp. show the tables introduced for finding a decomposition of Type (6), i.e. the tables tab, tab↗3, and min↗3. The permutation σ3 (with σ3(3) = 4, σ3(4) = 3, and σ3(j) = j otherwise) maps the entries of the table tab to the entries of the table tab↗3. The rows of tab↗3 and min↗3 can be partitioned into four maximal intervals, depicted in dark and light grey, resp. corresponding to the pairs of values (9, 2), (9, 3), (9, 4), and (9, 5) for the input parameters v and c. As for each of these four intervals the difference between the maximum and the minimum value of c does not exceed one, we can compute the values of g1(v, c) and g2(v, c, s).
• Parts (C1) and (C2) resp. give the tables used to acquire g1(v, c) and g2(v, c, s); e.g. for j = 3, g1(I^3_j) = g1(I^3_3) = g1(9, 3) = min↗3[σ3(j)] = min↗3[σ3(3)] = min↗3[4] = 3, and g2(Ij) = g2(9, 3, 2) = tab↗3[σ3(j), 4] − min↗3[σ3(j)] = tab↗3[4, 4] − min↗3[4] = 4 − 3 = 1.

Table 1: Tables used to find a decomposition of Type (6) for Conj. (1); bold entries refer to Ex. 2.
(A) tab[1..9, 1..4], rows j = 1..9 with columns (v, c, s | c):
  (9,2,1 | 5), (9,3,1 | 3), (9,3,2 | 4), (9,3,3 | 3), (9,4,1 | 3), (9,4,2 | 3), (9,5,1 | 2), (9,5,2 | 2), (9,5,4 | 2)
(B1) tab↗3[1..9, 1..4], sorted with column s skipped:
  (9,2,1 | 5), (9,3,1 | 3), (9,3,3 | 3), (9,3,2 | 4), (9,4,1 | 3), (9,4,2 | 3), (9,5,1 | 2), (9,5,2 | 2), (9,5,4 | 2)
(B2) min↗3[1..9]: 5, 3, 3, 3, 3, 3, 2, 2, 2
(C1) values (v, c | g1(v, c)):
  (9,2 | 5), (9,3 | 3), (9,3 | 3), (9,3 | 3), (9,4 | 3), (9,4 | 3), (9,5 | 2), (9,5 | 2), (9,5 | 2)
(C2) values (v, c, s | g2(v, c, s)):
  (9,2,1 | 0), (9,3,1 | 0), (9,3,2 | 1), (9,3,3 | 0), (9,4,1 | 0), (9,4,2 | 0), (9,5,1 | 0), (9,5,2 | 0), (9,5,4 | 0)

3.2 Decompositions of Types (7) and (8) (isolating a parameter)
Using a binary operator ⊕, Decomposition (7) combines two sub-terms: a function g3 involving a single input parameter, and a function g1 mentioning only all the other remaining input parameters. Decomposition (8) slightly extends (7) by adding an extra term whose codomain is the set {0, 1}. As identifying Decomposition (8) is very similar to identifying Decomposition (7), we focus on the latter for space reasons.
Question. We want to check whether there is a k ∈ [1, ncol] such that ∀j ∈ [1, nrow] : tab[j, ncol + 1] = g1(I^k_j) ⊕ g3(tab[j, k]), where ⊕ and the function g3 are defined in Def. 1; i.e. we want to see whether we can express the formula we are looking for by restricting one of its parameters to just one of the formula's sub-terms.
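The following is a compact sketch of Steps 2–3 of the Type (6) decomposition above (our code: the sort producing tab↗k is replaced by hash-grouping, which is equivalent for computing the maximal intervals, and rows are plain tuples):

from collections import defaultdict

def decompose_type6(rows, k):
    """rows: tuples (x_1, ..., x_ncol, y); k: 0-based column removed from g1.
    Returns (images of g1, images of g2), or None when some group of rows
    agreeing on all inputs but column k has outputs differing by more than 1."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[:k] + row[k + 1:-1]].append(row)  # inputs without column k
    g1, g2 = {}, {}
    for key, grp in groups.items():
        outs = [r[-1] for r in grp]
        lo = min(outs)
        if max(outs) - lo > 1:                       # codomain of g2 not {0,1}
            return None
        g1[key] = lo                                 # value of g1 on this key
        for r in grp:
            g2[r[:-1]] = r[-1] - lo                  # 0/1 correction term
    return g1, g2

# The nine entries of Table 1(A):
rows = [(9,2,1,5), (9,3,1,3), (9,3,2,4), (9,3,3,3), (9,4,1,3),
        (9,4,2,3), (9,5,1,2), (9,5,2,2), (9,5,4,2)]
g1, g2 = decompose_type6(rows, 2)                    # skip the third column
assert g1[(9, 3)] == 3 and g2[(9, 3, 2)] == 1        # matches Parts (C1)-(C2)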
3.2 Decompositions of Types (7) and (8) (isolating a parameter)
Using a binary operator ⊕, Decomposition (7) combines two sub-terms: a function g3 involving a single input parameter, and a function g1 mentioning only the remaining input parameters. Decomposition (8) extends (7) slightly by adding an extra term whose codomain is the set {0, 1}. As identifying Decomposition (8) is very similar to identifying Decomposition (7), we focus on the latter for space reasons.

Question We want to check whether there is a k ∈ [1, ncol] such that ∀j ∈ [1, nrow] : tab[j, ncol+1] = g1(I^k_j) ⊕ g3(tab[j, k]), where ⊕ and function g3 were defined in Def. 1; i.e. we want to see whether we can express the formula we are looking for by restricting one of its parameters to just one of the formula's sub-terms.

1. [Selecting k, ⊕, and g3] To determine the images of function g1 that will be used to find function g1 itself in Step 2, we successively consider the ncol × 10 × 6 combinations of triples ⟨k, g3, ⊕⟩ (with k ∈ [1, ncol]); see Items 3–4 of Def. 1 for g3 and ⊕. To find whether a combination of triples can be used or not to find the images of the function g1, we apply the following steps:
(a) [Creating the "value variables" for the images of g1] For each maximal interval of consecutive rows [ℓ, u] of the table tab↗k[1..nrow, 1..ncol+1] for which columns 1, ..., k−1, k+1, ..., ncol have the same value, we create a single domain variable yℓ representing the value of g1(I^k_{σk⁻¹(ℓ)}), where σk⁻¹ denotes the inverse of permutation σk.
(b) [Stating the row constraints for finding the coefficients of g3, and the value of g1 for each row] For each entry j of a maximal interval of consecutive rows [ℓ, u] of the table tab↗k[1..nrow, 1..ncol+1] for which columns 1, ..., k−1, k+1, ..., ncol have the same value, we create the constraint yℓ ⊕ g3(tab[σk⁻¹(j), k]) = tab[σk⁻¹(j), ncol+1].
(c) [Solving the row constraints] We solve the conjunction of constraints stated in Step 1b: we find the values of the coefficients of the unary function g3, and the values of the "value variables" of g1, while minimising the sum of the absolute values of the coefficients of g3 using a CP solver.
Among all triples for which Step 1c found a solution, we keep those triples ⟨k, ⊕, g3⟩ which minimise the sum of absolute values of the coefficients of g3.
2. [Identifying function g1] As for the decomposition of Type (6), we search for function g1 by employing the existing CP solvers associated with biases (i)–(iii) or by applying recursively one of the 4 decompositions proposed in this paper. For this purpose we reuse the values of the "value variables" found for g1 in Step 1c.

Example 3 (Illustrating the Search for a Decomposition of Type (7) for Conj. (2)) Part (A) of Table 2 provides 9 entries of the table tab previously introduced, with input parameters s, c, s and the lower bound of the output parameter c. Assume we skip the first column of the table tab, i.e. k = 1, shown in grey in the table tab↗1, that is we ignore the column labelled by s. The rows of Parts (B)–(C) of Table 2 can be partitioned into three maximal intervals, depicted in dark and light grey, resp. corresponding to the pairs of values (1, 1), (3, 2), and (9, 3) for the input parameters c and s.
• Assuming we look for a function g3 of type A·s² + B·s + C, and for a binary operator ⊕ of the form ‘⌈⌉’, the three columns in Part (C) resp. show, for each row of tab↗1, (p1) the unary function g3, (p2) the "value variables" for g1, and (p3) the corresponding constraints.
• Part (D) gives the derived table used to acquire g1(c, s).

  (A) tab                 (B) tab↗1
   j  s  c  s  c           j  s  c  s  c   σ1⁻¹(j)
   1  1  1  1  1           1  1  1  1  1   1
   2  2  1  1  2           2  2  1  1  2   2
   3  2  3  2  2           3  3  1  1  3   5
   4  2  9  3  1           4  2  3  2  2   3
   5  3  1  1  3           5  3  3  2  3   6
   6  3  3  2  3           6  4  3  2  4   8
   7  3  9  3  1           7  2  9  3  1   4
   8  4  3  2  4           8  3  9  3  1   7
   9  4  9  3  2           9  4  9  3  2   9

  (C) g3(s) = A·s² + B·s + C    g1(c, s)   ⌈g3(s)/g1(c, s)⌉ = c
      A + B + C                 y1         ⌈(A + B + C)/y1⌉ = 1
      4·A + 2·B + C             y1         ⌈(4·A + 2·B + C)/y1⌉ = 2
      9·A + 3·B + C             y1         ⌈(9·A + 3·B + C)/y1⌉ = 3
      4·A + 2·B + C             y4         ⌈(4·A + 2·B + C)/y4⌉ = 2
      9·A + 3·B + C             y4         ⌈(9·A + 3·B + C)/y4⌉ = 3
      16·A + 4·B + C            y4         ⌈(16·A + 4·B + C)/y4⌉ = 4
      4·A + 2·B + C             y7         ⌈(4·A + 2·B + C)/y7⌉ = 1
      9·A + 3·B + C             y7         ⌈(9·A + 3·B + C)/y7⌉ = 1
      16·A + 4·B + C            y7         ⌈(16·A + 4·B + C)/y7⌉ = 2

  (D)  j  c  s  g1(c, s)
       1  1  1  1 (y1)
       2  1  1  1 (y1)
       3  3  2  1 (y4)
       4  9  3  3 (y7)
       5  1  1  1 (y1)
       6  3  2  1 (y4)
       7  9  3  3 (y7)
       8  3  2  1 (y4)
       9  9  3  3 (y7)

Table 2: (A), (B), (D) Tables, and (C) Constraints used for finding a decomposition of Type (7) for Conj. (2); variables y1, y4, and y7 in Tables (C) and (D) correspond to the "value variables" for g1.
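The constraint solve of Step 1c can be mimicked on this tiny instance by brute force. Below is a hedged Python sketch (our own code, standing in for the CP model): it fixes the quadratic form of g3 and the operator ⊕ = ‘⌈⌉’ used in Example 3, groups rows as in Step 1a, and searches small integer coefficients A, B, C and group values y while minimising |A| + |B| + |C|.

```python
import itertools

def ceil_div(a, b):
    return -(-a // b)

def find_type7(rows, k, coef_range=range(-3, 4), y_range=range(1, 10)):
    """Search for g3(s) = A*s^2 + B*s + C and one value y of g1 per group
    such that ceil(g3(row[k]) / y) equals the output of every row; groups
    collect rows agreeing on all inputs except column k (0-based). Brute
    force replaces the CP solve; positivity of g3 is assumed for ceil."""
    ncol = len(rows[0]) - 1
    groups = {}
    for row in rows:
        key = tuple(v for i, v in enumerate(row[:ncol]) if i != k)
        groups.setdefault(key, []).append(row)
    best = None
    for A, B, C in itertools.product(coef_range, repeat=3):
        ys, ok = {}, True
        for key, grp in groups.items():
            y = next((y for y in y_range
                      if all(A*r[k]**2 + B*r[k] + C > 0 and
                             ceil_div(A*r[k]**2 + B*r[k] + C, y) == r[ncol]
                             for r in grp)), None)
            if y is None:
                ok = False
                break
            ys[key] = y
        cost = abs(A) + abs(B) + abs(C)
        if ok and (best is None or cost < best[0]):
            best = (cost, (A, B, C), ys)
    return best

tab2 = [(1,1,1,1),(2,1,1,2),(2,3,2,2),(2,9,3,1),(3,1,1,3),
        (3,3,2,3),(3,9,3,1),(4,3,2,4),(4,9,3,2)]
cost, (A, B, C), ys = find_type7(tab2, k=0)
assert (A, B, C) == (0, 1, 0)      # g3(s) = s, as found in Example 3
```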
3.3 Decomposition of Type (9) (conditional)
Type (9) decomposition combines a simple condition and a simple function g4 for the 'then' part of the condition with a function g1 corresponding to biases (i)–(iii), or obtained by one of the first three decompositions described in this paper; i.e. the conditional decomposition is not applied recursively, as this has proven to be very time-consuming. We use the following steps to search for a decomposition of Type (9); a small sketch of the row-level test follows the steps.
1. [Selecting cond and g4] We successively consider the 6 × 7 combinations of pairs ⟨cond, g4⟩; see Item 5 of Def. 1. To determine whether or not a combination of pairs can be used, we create this constraint model:
(a) The variables c1, c2, ..., cn (resp. d1, d2, ..., dm) denote the indices of the columns of the table tab[1..nrow, 1..ncol+1] used in the condition cond (resp. in function g4 of the 'then' part). These variables are in [1, ncol] as they correspond to input parameters, i.e. we state the constraints ∀i ∈ [1, n] : ci ∈ [1, ncol], alldifferent([c1, c2, ..., cn]), ∀i ∈ [1, m] : di ∈ [1, ncol], and alldifferent([d1, d2, ..., dm]).
(b) For each entry j (with j ∈ [1, nrow]) of the table tab[1..nrow, 1..ncol+1], we state the constraints:
i. ∀k ∈ [1, n] : element(ck, tab[j, 1..ncol], vj,k), cond(vj,1, vj,2, ..., vj,n) ⇔ rj, rj ∈ [0, 1],
ii. ∀k ∈ [1, m] : element(dk, tab[j, 1..ncol], wj,k),
iii. rj = 1 ⇒ g4(wj,1, ..., wj,m) = tab[j, ncol+1].
(c) By maximising the number of rows in the table tab[1..nrow, 1..ncol+1] for which condition 'cond' is met, we try to create a smaller subproblem for acquiring the 'else' part. This is done by stating the constraints cost = Σ_{j ∈ [1, nrow]} rj, cost > 0, cost < nrow. The last two constraints require the 'then' (resp. 'else') part to contain at least one row for which condition 'cond' is true (resp. false), as we want to obtain a non-simplifiable conditional formula. We maximise cost wrt the posted constraints.
2. [Identifying function g1] As for decompositions (6)–(8), using Item 5 of Def. 1, we search for function g1 using the CP solvers related to biases (i)–(iii), or by recursively applying decompositions (6)–(8). To do this, we focus only on the j-th rows of the table tab[1..nrow, 1..ncol+1] for which condition cond(vj,1, vj,2, ..., vj,n) does not hold, i.e. the 'else' part of the conditional.
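For intuition, here is a small Python sketch of the row-level test behind a Type (9) split. The paper's actual method builds a constraint model with element constraints and maximises the number of covered rows over all column choices; this sketch only evaluates one fixed, hypothetical candidate pair ⟨cond, g4⟩ on the Table 1 data.

```python
def try_conditional(rows, cond, g4):
    """Test one candidate <cond, g4> pair for a Type (9) split: every row
    where cond holds must satisfy output == g4(inputs); the remaining rows
    form the smaller 'else' subproblem handed to the other acquisition
    methods. Returns the 'else' rows, or None if the pair is unusable."""
    ncol = len(rows[0]) - 1
    then_rows = [r for r in rows if cond(r[:ncol])]
    # Non-simplifiable conditional: both branches must be non-empty.
    if not then_rows or len(then_rows) == len(rows):
        return None
    if any(g4(r[:ncol]) != r[ncol] for r in then_rows):
        return None
    return [r for r in rows if not cond(r[:ncol])]

# Hypothetical candidate on the Table 1 data: "if c = 2 then output 5".
tab = [(9,2,1,5),(9,3,1,3),(9,3,2,4),(9,3,3,3),(9,4,1,3),
       (9,4,2,3),(9,5,1,2),(9,5,2,2),(9,5,4,2)]
rest = try_conditional(tab, cond=lambda i: i[1] == 2, g4=lambda i: 5)
assert rest is not None and len(rest) == 8   # one row covered by the 'then' part
```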
3.4 Integrating Decompositions and Biases (i)–(iii)
Trying simple formulae first, the decompositions have been integrated with biases (i)–(iii) in the following order.
1. Boolean-arithmetic formulae: bias (iii), when the output column of the table tab has only two values.
2. Simplest polynomial formulae: bias (i) with 1 monome.
3. Simple conditional formulae: bias (ii).
4. Simple polynomial formulae: bias (i), 2 or 3 monomes.
5. The decompositions (6), (7), (8), and (9), in this order.
6. Complex polynomial formulae: bias (i), 4 to 6 monomes.

4 Evaluation
Description of the Used Combinatorial Objects We evaluate the decomposition methods on the combinatorial objects DIGRAPH, ROOTED TREE, ROOTED FOREST, ROOTED FOREST2, PARTITION, PARTITION0, STRETCH, and CYCLIC STRETCH introduced in (Gindullin et al. 2023).

Definition 2 For DIGRAPH, ROOTED TREE, ROOTED FOREST, and ROOTED FOREST2, size denotes the number of vertices. For PARTITION and PARTITION0, size is the number of elements of the set we partition, and for STRETCH and CYCLIC STRETCH, size is the sequence length.

Description of Input Tables The data set (Gindullin et al. 2023) consists of a collection of tables giving, for any combinatorial object of size at most size, for any combination of at most 3 input parameters, for any feasible combination of values of these input parameters, the sharp lower or the sharp upper bound of a given output parameter; e.g. Part (A) of Table 1 is an excerpt of such an input table. In addition, an input table may contain auxiliary parameters, called secondary parameters, all functionally determined by the input parameters. The tables represent 12 GB of data.

Conjectures We Are Looking For We search for (I) conjectures expressing a secondary parameter wrt input parameters, (II) conjectures expressing a sharp bound on an output parameter wrt input parameters, (III) conjectures expressing a secondary parameter wrt both input and secondary parameters, and (IV) conjectures expressing a sharp
bound on an output parameter wrt both input and secondary parameters. We prefer conjectures using input parameters only, as it allows one to express sharp bounds directly wrt input parameters, i.e. without using secondary parameters. Focusing only on sharp bounds limits the number of conjectures learned, which now depends only on the number of characteristics considered for the combinatorial objects.

combinatorial          |        1st version              |                    2nd version
object           no    |    nt    na    nb    ni   ne    |    nt    na    nb    ni   ne    nd   n6   n7   n8    n9  n>1
DIGRAPH          2861  |  3270  2637   434  1789    2    |  3412  2702   447  1940    6   328   89   71   52   156   66
ROOTED TREE       185  |   225   138    67   119    0    |   240   152    77   133    1    39   10   12    6    18    8
ROOTED FOREST    2088  |  2343  1577   562  1250    4    |  2672  1697   613  1428    6   433  122  131  112   168  121
ROOTED FOREST2   2861  |  2404  1639   569  1372    1    |  2563  1700   607  1459    9   361   71  108   94   103   65
PARTITION         562  |   572   436    78   279    0    |   586   453    78   303    0    89   11   27    7    34    9
PARTITION0        235  |   238   189    37   134    0    |   282   209    38   162    0    65   14   12   10    20   13
STRETCH          6416  |  6481  4978   556  2157    4    |  6981  5237   582  2473    0   660  146  220  118   357  161
CYCLIC STRETCH   6589  |  5964  4484   521  2041   15    |  7011  5105   561  2486   32   882  103  299  131   636  309
total           21797  | 21497 16078  2824  9141   26    | 23747 17255  3003 10384   54  2857  566  880  530  1492  752

Table 3: Detailed experimental results for the 1st and the 2nd versions of the acquisition tool, where no is the number of secondary and output parameters across all tables, nt is the number of conjectures acquired by the 1st or the 2nd version, na is the number of secondary and output parameters for which the 1st or the 2nd version could acquire at least one conjecture, nb is the number of output parameters for which the 1st or the 2nd version could acquire at least one conjecture, ni is the number of secondary and output parameters for which the 1st or the 2nd version could acquire at least one conjecture using input parameters only, ne is the number of conjectures invalidated on the largest available size of a combinatorial object, nd is the number of output parameters for which the 2nd version could acquire at least one conjecture using decompositions (6)–(9), n6, n7, n8, n9 are the numbers of output parameters for which we could resp. acquire at least one conjecture using (6), (7), (8), (9), and n>1 is the number of output parameters for which the 2nd version found at least one conjecture using more than one decomposition.

Experimental Setting We compare 2 versions of the acquisition tool using SICStus 4.7.1 on a cluster with Intel processors such as Silver 4216 Cascade Lake @ 2.1 GHz and E7-4809 v4 Broadwell @ 2.1 GHz. The source code for the decompositions consists of 1882 commented lines of SICStus Prolog, available from https://github.com/cquimper/MapSeekerAAAI24. The 1st version uses biases (i)–(iii), while the 2nd version uses biases (i)–(iii) and the 4 decompositions (6)–(9). If one of these versions took more than 96 hours to complete the acquisition for an input table, that table was excluded from the result evaluation, unless otherwise stated. We acquire conjectures on tables of smaller sizes and test them on the largest tables, using the method described in (Beldiceanu et al. 2022) for selecting table sizes. We exclude invalidated conjectures from our evaluation.

Experimental Results Out of a total of 4469 tables, the 1st version had timeouts on 44 tables and the 2nd version on 49 tables, with almost no overlap. We remove these tables and use the remaining 4378 tables to compare the 2 versions. The 4378 tables have 21797 secondary and output parameters. As cluster node performance varies and we cannot control the allocation of tables over CPUs, we only compare the aggregated full acquisition time for both versions. The 1st version took 5888 hours in total, while the 2nd version took 25053 hours to complete. A total of 26 (resp. 54) conjectures acquired by the 1st (resp. 2nd) version were not validated. Table 3 shows the detailed results of the experiment. The 1st and the 2nd versions resp. found conjectures for 16078 and 17255 secondary or output parameters. The 2nd version found conjectures of types I–IV (resp. I–II) for 5% more (resp. 14% more) secondary and output parameters compared to the 1st version. The 14% increase reflects the fact that the 2nd version expresses more conjectures with input parameters only, which is one of our goals. Including all 4469 tables, the 2nd version found conjectures for 7% more secondary and output parameters than the 1st version. The 2nd version found 6% more conjectures of types II and IV, i.e. sharp bounds. In the 2nd version, 2857 secondary or output parameters (16.5% of the 17255 parameters) have conjectures that use decompositions (6)–(9); 26% of them use several decompositions in one conjecture. In (Cheukam-Ngouonou et al. 2023), we proved Conjectures (1)–(4) to show that decompositions find non-trivial sharp bounds, as well as a non-obvious conjecture for ROOTED TREE.

5 Conclusion
Although simple, the proposed method may seem counterintuitive.
It is based on the idea of identifying sub-terms of the formula being searched for before actually knowing the formula itself, by combining data analysis and CP wrt minimal functional dependencies. The method helps to find formulae whose sub-terms correspond to various biases, e.g. polynomial, conditional or Boolean expressions, as shown during the search for conjectures on sharp bounds. For our benchmark, the decomposition methods found 14% more conjectures expressed directly wrt input parameters. It also found non-obvious conjectures that we proved. Future work may use acquired conjectures to synthesise efficient filtering algorithms, as a lack of sharp bounds is a weakness of CP.

References
Alur, R.; Singh, R.; Fisman, D.; and Solar-Lezama, A. 2018. Search-based program synthesis. Commun. ACM, 61(12): 84–93.
Aouchiche, M.; Caporossi, G.; Hansen, P.; and Laffay, M. 2005. AutoGraphiX: a survey. Electron. Notes Discret. Math., 22: 515–520.
Beldiceanu, N.; Cheukam-Ngouonou, J.; Douence, R.; Gindullin, R.; and Quimper, C. 2022. Acquiring Maps of Interrelated Conjectures on Sharp Bounds. In Solnon, C., ed., 28th International Conference on Principles and Practice of Constraint Programming, CP 2022, July 31 to August 8, 2022, Haifa, Israel, volume 235 of LIPIcs, 6:1–6:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Brence, J.; Todorovski, L.; and Džeroski, S. 2021. Probabilistic grammars for equation discovery. Knowledge-Based Systems, 224: 107077.
Cheukam-Ngouonou, J.; Gindullin, R.; Beldiceanu, N.; Douence, R.; and Quimper, C.-G. 2023. Proving Conjectures Acquired by Composing Multiple Biases. arXiv:2312.08990.
Gindullin, R.; Beldiceanu, N.; Cheukam-Ngouonou, J.; Douence, R.; and Quimper, C. 2023. Boolean-Arithmetic Equations: Acquisition and Uses. In Andre, C., ed., Integration of AI and OR Techniques in Constraint Programming - 20th International Conference, CPAIOR 2023, Nice, France, May 29-June 1, 2023, Proceedings, Lecture Notes in Computer Science. Springer.
Gulwani, S. 2011. Automating string processing in spreadsheets using input-output examples. In Ball, T.; and Sagiv, M., eds., Proceedings of the 38th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2011, Austin, TX, USA, January 26-28, 2011, 317–330. ACM.
Gulwani, S.; Harris, W. R.; and Singh, R. 2012. Spreadsheet data manipulation using examples. Commun. ACM, 55(8): 97–105.
Hansen, P.; and Caporossi, G. 2000. AutoGraphiX: An Automated System for Finding Conjectures in Graph Theory. Electron. Notes Discret. Math., 5: 158–161.
Larson, C. E.; and Cleemput, N. V. 2016. Automated conjecturing I: Fajtlowicz's Dalmatian heuristic revisited. Artif. Intell., 231: 17–38.
Ligeza, A.; Jemiolo, P.; Adrian, W. T.; Slazynski, M.; Adrian, M.; Jobczyk, K.; Kluza, K.; Stachura-Terlecka, B.; and Wisniewski, P. 2020. Explainable Artificial Intelligence. Model Discovery with Constraint Programming. In Stettinger, M.; Leitner, G.; Felfernig, A.; and Ras, Z. W., eds., Intelligent Systems in Industrial Applications, 25th International Symposium, ISMIS 2020, Graz, Austria, September 23-25, 2020, Selected Papers from the Industrial Part, volume 949 of Studies in Computational Intelligence, 171–191. Springer.
Paramonov, S.; Kolb, S.; Guns, T.; and Raedt, L. D. 2017. TaCLe: Learning Constraints in Tabular Data. In Lim, E.; Winslett, M.; Sanderson, M.; Fu, A. W.; Sun, J.; Culpepper, J. S.; Lo, E.; Ho, J.
C.; Donato, D.; Agrawal, R.; Zheng, Y.; Castillo, C.; Sun, A.; Tseng, V. S.; and Li, C., eds., Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06-10, 2017, 2511–2514. ACM.
Srivastava, S.; Gulwani, S.; and Foster, J. S. 2013. Template-based program verification and program synthesis. Int. J. Softw. Tools Technol. Transf., 15(5-6): 497–518.
Udrescu, S.; Tan, A. K.; Feng, J.; Neto, O.; Wu, T.; and Tegmark, M. 2020. AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Wikimedia Commons. 2003. Inductive Bias. https://en.wikipedia.org/wiki/Inductive_bias. Accessed: 2023-08-04.
End-to-End Verification for Subgraph Solving

Stephan Gocht1,2, Ciaran McCreesh3, Magnus O. Myreen4, Jakob Nordström2,1, Andy Oertel1,2, Yong Kiam Tan5
1Lund University, Lund, Sweden
2University of Copenhagen, Copenhagen, Denmark
3University of Glasgow, Glasgow, Scotland
4Chalmers University of Technology, Gothenburg, Sweden
5Institute for Infocomm Research (I2R), A*STAR, Singapore
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Modern subgraph-finding algorithm implementations consist of thousands of lines of highly optimized code, and this complexity raises questions about their trustworthiness. Recently, some state-of-the-art subgraph solvers have been enhanced to output machine-verifiable proofs that their results are correct. While this significantly improves reliability, it is not a fully satisfactory solution, since end-users have to trust both the proof checking algorithms and the translation of the high-level graph problem into a low-level 0–1 integer linear program (ILP) used for the proofs. In this work, we present the first formally verified toolchain capable of full end-to-end verification for subgraph solving, which closes both of these trust gaps. We have built encoder frontends for various graph problems together with a 0–1 ILP (a.k.a. pseudo-Boolean) proof checker, all implemented and formally verified in the CAKEML ecosystem. This toolchain is flexible and extensible, and we use it to build verified proof checkers for both decision and optimization graph problems, namely, subgraph isomorphism, maximum clique, and maximum common (connected) induced subgraph.  Our experimental evaluation shows that end-to-end formal verification is now feasible for a wide range of hard graph problems.

[Figure 1 diagram: graph file(s) feed both an untrusted encoding used by the graph solver, whose augmented proof is checked and elaborated by VERIPB into a kernel proof, and the verified encoder of CAKEPBGRAPH, whose verified encoding is checked against the kernel proof by the verified checker CAKEPB to yield a trusted conclusion.]
Figure 1: The full verification workflow. Without verified proof checking, only the left-hand part of the diagram is used. Our current work enables the additional shaded parts, where the thick dashed box is the formally verified program and thick arrows show its key input-output interfaces.

Introduction
Combinatorial optimization algorithms have improved immensely since the turn of the millennium, and are now routinely used to solve large-scale real-world problems, through both general-purpose solving paradigms (Biere et al. 2021; Bixby and Rothberg 2007; Garcia de la Banda et al. 2014) and dedicated algorithms for more specialised problems such as subgraph finding (McCreesh, Prosser, and Trimble 2020). Since these combinatorial solvers are used for an increasingly wide range of applications, it becomes crucial that the results they compute can be trusted. Sadly, this is currently not the case (Cook et al. 2013; Akgün et al. 2018; Gillard, Schaus, and Deville 2019; Bogaerts, McCreesh, and Nordström 2022). Extensive testing, though beneficial, has not been able to resolve the problem of solvers occasionally producing faulty answers, and attempts to build correct-by-construction software using formal verification run into the obstacle that current techniques cannot scale to the level of complexity of modern solvers.
Instead, the most promising way to achieve verifiably correct combinatorial solving seems to be proof logging, meaning that solvers produce efficiently verifiable certificates of correctness that can be corroborated by an independent proof checking program (McConnell et al. 2011). This approach has been successfully used in the SAT community (Heule, Hunt Jr., and Wetzler 2013a,b; Wetzler, Heule, and Hunt Jr. 2014), which raises the question of whether similar techniques could be employed in other settings such as subgraph finding. For this it would seem that the proof checker would need to understand graph concepts such as vertices, edges, neighbourhoods, et cetera. Surprisingly, this turns out not to be the case—instead, the solver can encode the graph problem using 0–1 linear inequalities (also referred to as pseudo-Boolean constraints), and then justify its complex high-level reasoning in terms of this low-level representation. This approach has been used to add proof logging with the VERIPB tool to state-of-the-art solvers for subgraph isomorphism, clique, and maximum common (connected) induced subgraph (Gocht, McCreesh, and Nordström 2020; Gocht et al. 2020), as illustrated in the left-hand part of Figure 1. We emphasize that although this approach uses reasoning with pseudo-Boolean constraints for the proof logging, it is not limited to pseudo-Boolean solving. Rather, it can be used to certify the output of any untrusted solver—such as tools that operate natively on graph representations—as long as the solver's relevant reasoning steps can be expressed with pseudo-Boolean proofs.
While this approach has been successful for debugging solvers and providing convincing demonstrations that the fixed solvers are producing correct answers, it is important to observe that it crucially hinges on the assumption that three components are correct: (1) the low-level encoding of the problem, (2) the proof checker, and (3) the interpretation of the final output. For example, if the maximum clique solver in Gocht et al. (2020) produces a proof accepted by the VERIPB checker, then one can conclude that if the 0–1 ILP encoding of clique is implemented correctly, and if VERIPB does not contain bugs, and if (say) a 200-vertex graph having a maximum clique size of 13 corresponds to the optimal objective value for the low-level encoding being 187 (because it minimises the number of vertices not in the clique), then the maximum clique size is indeed 13. Such assumptions are not unreasonable—encodings have been chosen to be as simple as possible and the code can be subjected to extensive testing; the proof format is designed so that proof checking should be easy; and verifying that proof outputs correspond to solver outputs is not too cumbersome. Compared to having to trust an extremely complex solver, this is a vast improvement. However, if provably correct results are the end goal, then this still leaves much to be desired.
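To make the objective-value arithmetic above concrete, here is a minimal Python sketch of one standard 0–1 ILP view of maximum clique (our illustration, not necessarily the exact constraints emitted by the solvers discussed here): one variable per vertex, one at-most-one constraint per non-adjacent pair, written as OPB-like strings, and an objective that minimises the number of unselected vertices, so that the optimum equals n minus the maximum clique size.

```python
def clique_pb_encoding(n, edges):
    """Sketch of a pseudo-Boolean encoding of maximum clique:
    x_u + x_v <= 1 for every non-edge (in normalized form ~x_u + ~x_v >= 1),
    minimising sum of ~x_v, so optimum = n - (max clique size)."""
    E = {frozenset(e) for e in edges}
    constraints = [f"1 ~x{u} 1 ~x{v} >= 1 ;"
                   for u in range(n) for v in range(u + 1, n)
                   if frozenset((u, v)) not in E]
    objective = "min: " + " ".join(f"1 ~x{v}" for v in range(n)) + " ;"
    return objective, constraints

# A triangle plus a pendant vertex: max clique 3, so optimum = 4 - 3 = 1,
# mirroring the 200 - 187 = 13 arithmetic in the text.
obj, cs = clique_pb_encoding(4, [(0, 1), (1, 2), (0, 2), (2, 3)])
```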
Our Contribution
In this work, we resolve all the concerns discussed above by presenting the first toolchain capable of end-to-end formal verification for state-of-the-art algorithms for maximum clique, subgraph isomorphism, and maximum common (connected) induced subgraph problems. Although the implementations of modern solvers for these problems are far too complicated to be formally verified by current techniques, we can still use formal verification to certify the correctness of the proof logging and proof checking process. We do so by defining a solver-friendly augmented VERIPB proof format; enhancing the VERIPB tool with a proof elaborator that can translate such augmented proofs to a more explicit kernel format; and designing a formally verified proof checker for the kernel format. This formally verified checker is also capable of providing its own formally verified encodings from graph problems to 0–1 ILPs. Finally, the output provided by the formally verified proof checker is in terms of the original problem, not the low-level encoding. This means that using the process illustrated in the right-hand part of Figure 1, if the checking process outputs (say) "s VERIFIED MAX CLIQUE SIZE |CLIQUE| = 13", then we can be absolutely sure that the maximum clique size for our graph is 13, if we trust the formal verification tool(s) and if the formal higher-order logic (HOL) specifications (as shown in Figure 2) accurately reflect what it means to be a clique. The toolchain we provide is also flexible and extensible, in that it can be readily adapted to other combinatorial problems, including problems not involving graphs.

is clique vs (v,e) def= vs ⊆ { 0,1,...,v−1 } ∧
  ∀x y. x ∈ vs ∧ y ∈ vs ∧ x ≠ y ⇒ is edge e x y
max clique size g def= maxset { card vs | is clique vs g }
has subgraph iso (vp,ep) (vt,et) def=
  ∃f. inj f { 0,1,...,vp−1 } { 0,1,...,vt−1 } ∧
    ∀a b. is edge ep a b ⇒ is edge et (f a) (f b)

Figure 2: HOL definitions for maximum clique size of a graph with v vertices and edge set e (top), and existence of a subgraph isomorphism from a pattern graph (vp, ep) to a target graph (vt, et) (bottom).

Comparison to Related Work
Formally verified proof checkers have previously played an important role in SAT solving (Cruz-Filipe, Marques-Silva, and Schneider-Kamp 2017; Cruz-Filipe et al. 2017; Lammich 2020) and are vital for widespread acceptance of SAT-solver-generated mathematical proofs (Heule and Kullmann 2017). However, such proof checkers have worked only for conjunctive normal form (CNF), and only to establish that decision problems encoded in CNF are infeasible: verification that the encoding accurately reflects the problem to be solved has either been ignored or has been handled separately (e.g., Cruz-Filipe, Marques-Silva, and Schneider-Kamp 2019; Shi et al. 2021; Codel, Avigad, and Heule 2023). For graph problems, previous attempts at verified proof checking have been tied to one specific problem, or even one specific algorithm (e.g., Bankovic, Drecun, and Maric 2023). In contrast, we provide formal verification for optimization problems and with much more expressive formats than CNF, and we do so in a unified way with a single pseudo-Boolean proof logging format for 0–1 linear inequalities together with a general-purpose toolchain, rather than having to design proof logging from scratch for each new combinatorial problem considered. In this way, we demonstrate that end-to-end formally verified combinatorial solving is now eminently within reach, by combining pseudo-Boolean proof logging with formally verified tools for 0–1 ILP encodings and pseudo-Boolean proof checking.
Outline of This Paper
After reviewing preliminaries, we describe the formally verified proof checker, and how solver proofs in a user-friendly proof format can be converted to a more restricted format accepted by this proof checker. We then report results from an experimental evaluation, and conclude with a discussion of future research directions.

Preliminaries
Our discussion of pseudo-Boolean proof logging will be brief, since the main thrust of this work is how to formally verify proof logging rather than to design it. See Gocht and Nordström (2021) and Bogaerts et al. (2023a) for more on the VERIPB system and Buss and Nordström (2021) for background on the cutting planes reasoning method used.
A literal ℓ over a variable x is x itself or its negation x̄, taking values 0 (false) or 1 (true), so that x̄ = 1 − x. A pseudo-Boolean (PB) constraint C is a 0–1 integer linear inequality Σᵢ aᵢℓᵢ ≥ A, which without loss of generality we can always assume to be in normalized form; i.e., all literals ℓᵢ are over distinct variables and the coefficients aᵢ and the degree (of falsity) A are non-negative. The negation ¬C of C is Σᵢ aᵢℓ̄ᵢ ≥ Σᵢ aᵢ − A + 1 (saying that the sum of the coefficients of falsified literals is so large that the satisfied literals can contribute at most A − 1). A pseudo-Boolean formula is a conjunction F = ⋀ⱼ Cⱼ of PB constraints.
Cutting planes (Cook, Coullard, and Turán 1987) is a method for iteratively deriving new constraints logically implied by a PB formula by taking positive linear combinations or dividing a constraint and rounding up. We say that C unit propagates the literal ℓ if under the current partial assignment C cannot be satisfied unless ℓ is set to true, and that C is implied by F by reverse unit propagation (RUP) if adding ¬C to F and then unit propagating until saturation leads to contradiction in the form of a violated constraint. VERIPB allows adding constraints by RUP, which is a convenient way of avoiding having to write out explicit syntactic derivations.
In addition to deriving constraints C that are implied by F, VERIPB also has strengthening rules for inferring redundant constraints D having the property that F and F ∧ D are equisatisfiable. If there is a partial mapping ω of variables to literals and/or truth values such that

F ∪ {¬D} ⊢ (F ∪ {D})↾ω        (1)

holds, meaning that after applying ω to F ∪ {D} all of the resulting constraints can be derived by cutting planes from F ∪ {¬D}, then D can be added by redundance-based strengthening. There is also a similar but slightly different dominance-based strengthening rule. Importantly, the proof has to specify ω and also contain explicit subderivations for all proof goals in (F ∪ {D})↾ω in eq. (1) unless they are obvious enough that VERIPB can automatically figure them out (e.g., by using RUP). Finally, for optimization problems there are rules to deal with objective functions and incumbent solutions, and the strengthening rules also need to be slightly adapted for this setting.
The formalization of our proof checking toolchain is carried out in the HOL4 proof assistant for classical higher-order logic (Slind and Norrish 2008). We make particular use of the CAKEML tools for production and optimization of verified CAKEML source code (Myreen and Owens 2014; Guéneau et al. 2017) as well as for formally verified compilation (Tan et al. 2019), allowing us to transfer guarantees of source-code-level correctness down to executable machine code.
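To make the pseudo-Boolean notions above concrete, here is a small Python sketch of normalization and negation for PB constraints, using our own data representation (a dictionary from literals to coefficients) rather than anything from the actual toolchain.

```python
def normalize(lin, degree):
    """lin: dict literal -> coefficient, where a literal is (var, is_positive).
    Returns the normalized form: distinct variables, all coefficients and
    the degree non-negative, using a*l = a - a*(~l) to flip signs."""
    coef = {}                          # var -> signed coefficient on x
    for (var, pos), a in lin.items():
        coef[var] = coef.get(var, 0) + (a if pos else -a)
        if not pos:
            degree -= a                # a*~x = a - a*x shifts the degree
    norm = {}
    for var, a in coef.items():
        if a > 0:
            norm[(var, True)] = a
        elif a < 0:                    # a*x = a - (-a)*~x
            norm[(var, False)] = -a
            degree -= a
    return norm, max(degree, 0)

def negate(norm, degree):
    """Negation as in the text: not(sum a_i l_i >= A) is
    sum a_i ~l_i >= sum a_i - A + 1."""
    flipped = {(var, not pos): a for (var, pos), a in norm.items()}
    return flipped, sum(norm.values()) - degree + 1

# 2x + 3~x >= 2 normalizes to ~x >= 0 (trivially true, degree 0):
assert normalize({("x", True): 2, ("x", False): 3}, 2) == ({("x", False): 1}, 0)
# not(x + y >= 1) is ~x + ~y >= 2, i.e. both literals false:
assert negate({("x", True): 1, ("y", True): 1}, 1)[1] == 2
```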
Where applicable, formal code snippets are pretty-printed for illustration, e.g., as shown in Figure 2. The set and first-order logic notation is standard (e.g., ⇒ denotes logical implication); other HOL notation is explained where appropriate. Formally verified results are preceded by a turnstile ⊩. All code is available in the supplementary material (Gocht et al. 2023).

Formally Verified Graph Proof Checkers
This section details the formal verification of our pseudo-Boolean proof checker CAKEPB and its various graph frontends, focusing on the key architectural decisions and reusable insights behind the verification effort. An overview of the tool is shown in Figure 3. We first present the different components, and then plug them together to obtain end-to-end verified graph proof checkers.

[Figure 3 diagram: graph file(s) and the frontend encoders of CAKEPBGRAPH — subgraph isomorphism, max clique, max CIS, max CCIS, plus other domains and encoders — produce a PB encoding for the common CAKEPB backend (PB normalizer, PB proof checker, conclusion translator), which checks an externally generated kernel proof and emits a trusted conclusion.]
Figure 3: Architecture of the end-to-end verified proof checkers for various graph problems.

Verified Pseudo-Boolean Proof Checking
A key design objective for CAKEPB is to make it a general yet effective pseudo-Boolean proof checking backend. To this end, CAKEPB supports a kernel subset of the VERIPB proof format with cutting planes, strengthening, and optimization rules as discussed in the previous section. The implementation and verification of all of this within a single proof checker backend presents several new challenges compared to prior tools for efficient verified CNF proof checking (Cruz-Filipe et al. 2017; Lammich 2020; Tan, Heule, and Myreen 2023). Firstly, the pseudo-Boolean proof system features a much richer set of rules, each of which needs a formal soundness justification. Secondly, there is an intricate interplay between different proof rules, especially concerning how they preserve optimal solutions (or satisfiability for decision problems). This necessitates careful maintenance of state invariants within the proof checker implementation. And thirdly, all of the above needs to be adequately optimized for practical use, whilst being formally verified. We use a refinement-based approach to tackle each challenge in order and at the appropriate level of abstraction.
1. The verification process starts by defining an abstract, mathematical, pseudo-Boolean semantics, with respect to which the soundness of each rule is justified. For example, we prove lemmas that justify the soundness of adding two constraints and dividing a constraint by a non-zero natural number in a cutting planes proof step:

⊩ satisfies npbc w C1 ∧ satisfies npbc w C2 ⇒ satisfies npbc w (add C1 C2)
⊩ satisfies npbc w C ∧ k ≠ 0 ⇒ satisfies npbc w (divide C k)

Here, satisfies npbc w C says that the pseudo-Boolean constraint C is satisfied by the Boolean assignment w. We verify similar lemmas for all supported reasoning principles, the most involved of which is dominance-based strengthening. Specifically, this rule requires making a well-founded induction argument over an arbitrary user-specified order for Boolean assignments, for which we largely follow the proof from Bogaerts et al. (2023a, Proposition 4).
2. Next, we implement a prototype proof checker that ensures that every application of a proof rule is valid, e.g.,
that divide is never applied with k = 0, throwing an error otherwise. The proof checker is verified to maintain key invariants on the proof state, especially the ones needed for dominance and optimization reasoning. Soundness of the checker is proved by induction over the sequence of proof steps. The main idea is illustrated by the following abridged lemma snippet.

⊩ ... ∧ valid conf ord obj fml ⇒
  check step step ord obj fml ... = Some (ord′, obj′, fml′, ...) ⇒
  ... ∧ valid conf ord′ obj′ fml′

Here, valid conf ord obj fml says that for any satisfying assignment w to the core constraints in formula fml, there exists another satisfying assignment w′ ≼ w which satisfies all constraints in fml, where ≼ is the order on assignments induced by ord and obj. The lemma fragment says that, whenever checking a single proof step (check step) succeeds and returns a new proof checker state (result Some), the valid conf invariant is maintained for the state. Other key properties verified for check step include showing that fml′ and fml are equisatisfiable by assignments that improve the best known objective value.
3. The final phase involves refining the prototype into an optimized proof checker implementation using the CAKEML tools for profiling and source code verification (Myreen and Owens 2014; Guéneau et al. 2017). We manually optimize several hotspots encountered in the pseudo-Boolean proofs generated in our experimental evaluation, e.g., using buffered I/O to stream large proof files, and swapping to constant-time array-based constraint lookups for cutting planes steps and hash-based proof goal coverage checks in application of the dominance-based strengthening rule.
The verified proof checker backend operates most naturally and efficiently with normalized pseudo-Boolean constraints where, in addition, variables are indexed by numbers. However, this is not the most convenient interface for frontend users. Accordingly, CAKEPB also includes a verified pseudo-Boolean normalizer. As shown in Figure 3, CAKEPB accepts any pseudo-Boolean formula as input (normalized or otherwise) together with an externally generated kernel proof. It produces an appropriate verified conclusion about the formula, such as satisfiability status or upper and lower bounds on the objective function, depending on the type of problem and on the claims made by the proof.

is cis vs (vp,ep) (vt,et) def= ∃f. vs ⊆ { 0,1,...,vp−1 } ∧ inj f vs { 0,1,...,vt−1 } ∧
  ∀a b. a ∈ vs ∧ b ∈ vs ⇒ (is edge ep a b ⇔ is edge et (f a) (f b))
connected subgraph vs e def= ∀a b. a ∈ vs ∧ b ∈ vs ⇒ (λx y. y ∈ vs ∧ is edge e x y)∗ a b
is ccis vs (vp,ep) (vt,et) def= is cis vs (vp,ep) (vt,et) ∧ connected subgraph vs ep
max ccis size gp gt def= maxset { card vs | is ccis vs gp gt }

⊩ good graph (vp,ep) ∧ good graph (vt,et) ∧ encode (vp,ep) (vt,et) = constraints ⇒
  ((∃vs. is ccis vs (vp,ep) (vt,et) ∧ card vs = k) ⇔
   ∃w. satisfies w (set constraints) ∧ eval obj (unmapped obj vp) w = vp − k)

Figure 4: HOL definition of the size of a maximum common connected induced subgraph (MCCIS) for a pattern graph gp and a target graph gt (top), and a correctness theorem for encoding the MCCIS problem using PB constraints (bottom).

Verified Graph Problem Encoders
Pseudo-Boolean formulas provide a convenient format for verified frontend encoders for graph problems, which we turn to next.
Graphs are represented in HOL as a pair (v,e), where v is the number of vertices, corresponding to the vertex set { 0,1,...,v−1 }, and e is an edge list representation such that is edge e a b is true iff there is an edge between vertices a and b. All graphs considered here are undirected; in practice, we apply a consistency check good graph for undirectedness and other syntactic properties when parsing input graphs, and graphs failing the check are rejected by the encoders. The graph encoders use a shared graph library which formalizes these basic graph notions and provides parsing functions for standard text formats such as LAD and DIMACS.
The HOL definitions of various graph problems formalized in this paper are shown in Figures 2 and 4; we use maximum common connected induced subgraph (MCCIS) as a representative example. Given a pattern graph gp and a target graph gt, a subset of vertices vs of gp is a common induced subgraph (is cis) iff there exists an injective mapping f from vs into the target graph vertices which preserves edges and non-edges. Additionally, vs is a connected subgraph of gp iff its vertices are pairwise connected in the reflexive transitive closure (denoted ∗) of the induced is edge relation. The MCCIS size is the size of the largest common connected induced subgraph between gp and gt (max ccis size).
The MCCIS pseudo-Boolean encoding from Gocht et al. (2020, Section 3.1) is implemented as a HOL function encode. The main subtlety is connected subgraph; briefly, connectedness is encoded using additional auxiliary variables that indicate whether a walk of length n, for some n < min(vp, vt), exists between each pair of vertices in the chosen subgraph. The correctness theorem for encode is shown in Figure 4 (bottom). It says that a CCIS of cardinality k exists iff a satisfying assignment to the encoding constraints exists with objective value vp − k. Therefore, minimising the objective (unmapped obj vp) yields the MCCIS size. Similar theorems are proved for encodings of subgraph isomorphism and maximum clique. The value of formal verification here is twofold: to gain confidence in the pen-and-paper justification of the encodings, and to ensure that the encodings are correctly implemented in code.
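The HOL definitions above translate almost line by line into executable code. The following is our own exponential Python reference implementation mirroring is cis, connected subgraph, and max ccis size; it is nothing like the verified toolchain, but it is handy for sanity-checking conclusions on tiny graphs.

```python
from itertools import combinations, permutations

def edge_set(edges):
    return {frozenset(e) for e in edges}

def is_cis(vs, ep, et, f):
    """f maps the chosen pattern vertices vs injectively into the target;
    edges and non-edges must be preserved, as in the HOL definition."""
    return all((frozenset((a, b)) in ep) == (frozenset((f[a], f[b])) in et)
               for a, b in combinations(vs, 2))

def is_connected(vs, ep):
    """vs is connected in the subgraph of the pattern induced by vs
    (the reflexive transitive closure is computed by a simple search)."""
    vs = set(vs)
    seen, todo = set(), list(vs)[:1]
    while todo:
        a = todo.pop()
        seen.add(a)
        todo += [b for b in vs - seen if frozenset((a, b)) in ep]
    return seen == vs or not vs

def max_ccis_size(gp, gt):
    """Brute-force reference for max_ccis_size; only for tiny instances."""
    (vp, ep), (vt, et) = gp, gt
    for size in range(min(vp, vt), 0, -1):
        for vs in combinations(range(vp), size):
            if not is_connected(vs, ep):
                continue
            for img in permutations(range(vt), size):
                if is_cis(vs, ep, et, dict(zip(vs, img))):
                    return size
    return 0

# K3 pattern vs path P3 target: the largest common connected induced
# subgraph is a single edge, so the answer is 2.
K3 = (3, edge_set([(0, 1), (1, 2), (0, 2)]))
P3 = (3, edge_set([(0, 1), (1, 2)]))
assert max_ccis_size(K3, P3) == 2
```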
End-to-End Verification
Feeding the output of each frontend encoder into CAKEPB yields a suite of formally verified graph proof checkers, collectively called CAKEPBGRAPH. Since we are working within the CAKEML ecosystem, we can further achieve end-to-end verification by running the CAKEML compiler on CAKEPBGRAPH to transfer the source-level correctness guarantees for the CAKEPBGRAPH checkers down to the level of their respective machine code implementations.
Let us illustrate this by briefly discussing the final correctness theorem for the maximum clique proof checker, as shown in Figure 5.

clique eq str n def= "s VERIFIED MAX CLIQUE SIZE |CLIQUE| = " ^ toString n ^ "\n"
clique bound str l u def= "s VERIFIED MAX CLIQUE SIZE BOUND " ^ toString l ^ " <= |CLIQUE| <= " ^ toString u ^ "\n"

1   ⊩ cake pb clique run cl fs mc ms ⇒
2       machine sem mc (basis ffi cl fs) ms ⊆
3         extend with resource limit { Terminate Success (cake pb clique io events cl fs) } ∧
4       ∃out err. extract fs fs (cake pb clique io events cl fs) =
5         Some (add stdout (add stderr fs err) out) ∧
6       (out ≠ "" ⇒
7         ∃g. get graph dimacs fs (el 1 cl) = Some g ∧
8           (length cl = 2 ∧ out = concat (print pbf (full encode g)) ∨
9            length cl = 3 ∧
10             (out = clique eq str (max clique size g) ∨
11              ∃l u. out = clique bound str l u ∧ (∀vs. is clique vs g ⇒ card vs ≤ u) ∧
12                ∃vs. is clique vs g ∧ l ≤ card vs)))

Figure 5: End-to-end correctness theorem for CAKEPB with a maximum clique pseudo-Boolean encoder frontend.

The assumption on Line 1 is standard for all programs written in CAKEML, and states that the compiled machine code is correctly loaded in the memory of an x64 machine and that the appropriate command line and file system foreign function interfaces (FFIs) are available to CakeML. The first correctness guarantee on Lines 2–3 says that the code will run without crashing and will terminate safely, possibly reporting an out-of-memory resource error. The second correctness guarantee starting on Lines 4–5 says there will be (possibly empty) strings out and err printed to standard output and error, respectively. The remaining lines now claim that if standard output is non-empty, then the input file was parsed in DIMACS format to a graph g (Lines 6–7), and the output is either:
• a pretty-printed pseudo-Boolean encoding of the maximum clique problem for g (Line 8), or
• a pretty-printed conclusion string which is either:
– a verified exact maximum clique size for g formatted using clique eq str (Line 10), or
– verified lower and upper bounds on clique sizes in g formatted using clique bound str (Lines 11–12).
Let us clarify what needs to be trusted, or at least carefully inspected, in order to claim that the conclusions by CAKEPBGRAPH checkers are formally verified:
• The HOL definitions of the graph input parsers and of various graph problems that appear in the final correctness theorems (e.g., Figure 5). We have kept these definitions as simple as possible. Notably, the internal definitions of pseudo-Boolean semantics and cutting planes used in the proof checker are not part of CAKEPBGRAPH's trusted base, because conversion into and out of pseudo-Boolean semantics is formally verified.
• The formal HOL model of the CAKEML execution environment and its correspondence with the real system on which CAKEPBGRAPH runs. CAKEML has been used in various other proof checkers, e.g., by Tan, Heule, and Myreen (2023), and its target architecture models have been validated extensively (Tan et al. 2019).
• The HOL4 theorem prover, including its logic, implementation and execution environment. The prover follows an LCF-style design (Slind and Norrish 2008) with a well-separated and trustworthy kernel responsible for checking every logical inference.
A trusted base for binary code extraction (Kumar et al. 2018) as above is of the highest assurance standard for formally verified software—correctness is proved within a single system down to the machine code that runs. This provides a gold standard of trustworthiness for subgraph solving, in contrast to prior unverified proof checking approaches.

Proof Elaboration
CAKEPBGRAPH verification helps solver users who wish to attain a high level of trust in solver conclusions. In this section, we discuss our new elaboration phase, which aids solver authors who wish to add trustworthy proof logging and checking to their tools. The convenience afforded by proof elaboration is illustrated in the workflow in Figure 1. First, solver authors can design their proof output with respect to their own (untrusted) pseudo-Boolean encodings, without following the verified encodings from CAKEPBGRAPH exactly; elaboration helps to automatically line up (where possible) untrusted and verified encodings. Second, elaboration supports an augmented proof format with syntactic sugar that makes proof logging much easier at runtime; elaboration then fills in the necessary details to convert the proof into the kernel format understood by CAKEPBGRAPH. The VERIPB proof elaborator also performs (unverified) proof checking during the translation process, helping solver authors to detect bugs in their proof logging or solver code even before the formal verification process starts.

Lining up Encodings
Many VERIPB proof rules refer to constraints by positive integer constraint IDs, assigned automatically in order of appearance in the proof. It would be quite a hassle for solver authors to keep track of the exact order in which constraints in the encoding are generated by CAKEPBGRAPH.
Fortunately, it is straightforward to instead recover an ID by rederiving the constraint, which provides it with a new, known ID, before it is used. This can either be done upfront, at the start of the proof, or lazily (which avoids a potentially large overhead for instances with very short proofs). A useful fact is that the two constraints do not need to match exactly—it is sufficient that they are close enough that VERIPB can automatically check and prove that one of them follows from the other. When it comes to variable names, the solver proof logging routines are required to agree exactly with the CAKEPBGRAPH encoding. This is an easier task, however, since VERIPB and CAKEPB both support expressive variable names. For example, for subgraph mapping problems, we use the protocol that the variable name x1_2 means that pattern vertex 1 will be mapped to target vertex 2.

Elaborating on Syntactic Sugar
The augmented proof format contains a number of rules designed to support the ease of proof logging. Chief among these is reverse unit propagation (RUP), which allows a constraint to be added when the VERIPB proof checker can easily verify that it is implied by applying unit propagation. Such RUP steps occur frequently in proofs in many applications, and so have to be dealt with efficiently by the proof checker, but implementing efficient formally verified unit propagation is a challenging task even for the simpler case of CNF (Fleury, Blanchette, and Lammich 2018). Instead, a RUP rule application deriving C from F is converted to an explicit cutting planes proof of contradiction from F ∪ {¬C}. This is possible since unit propagation on the latter set of constraints leads to a violation (by the definition of RUP), and this in turn means that pseudo-Boolean conflict analysis can be used to derive contradiction. This algorithm is more involved than the CNF-based conflict analysis used in SAT solvers, but we employ a procedure similar to the PB conflict analysis in Elffers and Nordström (2018) for this.
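For readers unfamiliar with pseudo-Boolean unit propagation, the RUP condition itself is easy to state in code. The following Python sketch (ours, not VERIPB's implementation, and using the same literal representation as the earlier normalization sketch) propagates normalized PB constraints via the standard slack argument and tests the RUP condition from the preliminaries.

```python
def negate_pb(lin, deg):
    # not(sum a_i l_i >= A)  ==  sum a_i ~l_i >= sum a_i - A + 1
    return {(v, not p): a for (v, p), a in lin.items()}, sum(lin.values()) - deg + 1

def lit_value(lit, asg):
    var, pos = lit
    return None if var not in asg else (asg[var] == pos)

def propagate(constraints, asg):
    """Unit propagation for normalized PB constraints (lin, degree):
    slack = (sum of coefficients of non-falsified literals) - degree.
    Negative slack means a violated constraint; an unassigned literal
    whose coefficient exceeds the slack is forced to true."""
    asg, changed = dict(asg), True
    while changed:
        changed = False
        for lin, deg in constraints:
            slack = sum(a for l, a in lin.items()
                        if lit_value(l, asg) is not False) - deg
            if slack < 0:
                return "conflict"
            for (var, pos), a in lin.items():
                if lit_value((var, pos), asg) is None and a > slack:
                    asg[var] = pos
                    changed = True
    return asg

def is_rup(constraints, c):
    # C follows from F by RUP iff F plus not(C) propagates to a violation.
    return propagate(constraints + [negate_pb(*c)], {}) == "conflict"

# x2 >= 1 follows by RUP from { x1 + x2 >= 1, ~x1 >= 1 }.
F = [({("x1", True): 1, ("x2", True): 1}, 1), ({("x1", False): 1}, 1)]
assert is_rup(F, ({("x2", True): 1}, 1))
```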
For optimization problems, the augmented format allows incumbent solutions to be partially specified, so long as the given assignment unit propagates to a full solution; the kernel format will always specify a full solution instead. This is illustrated in Figure 6. Another convenient rule is syntactic implication, where a constraint to be derived is implied by a single (unspecified) previous constraint by simple syntactic manipulations. This condition is again easy to check, but the elaborator converts this into an explicit derivation or explicitly annotates the kernel proof with IDs. Yet another important aspect that we are ignoring here, but which is crucial for efficient proof checking, is deletion of constraints no longer needed in the proof. Finally, applications of strengthening rules generate a separate proof goal for each constraint currently in use in the proof, which is a potentially huge overhead, but often most of these proof goals are obvious and can be skipped in the augmented format (e.g., if they can be obtained by RUP or syntactic implication). The proof elaborator fills in the necessary missing details for such proof goals.

Pattern: vertices 0–5 (K3,3); Target: vertices 0–9 (Petersen graph)

Verified Encoding:
min: 1 x0_n 1 x1_n 1 x2_n 1 x3_n 1 x4_n 1 x5_n ;
1 x0_n 1 x0_0 1 x0_1 1 x0_2 1 x0_3 1 x0_4 1 x0_5 1 x0_6 1 x0_7 1 x0_8 1 x0_9 = 1 ;
1 x1_n 1 x1_0 1 x1_1 1 x1_2 1 x1_3 1 x1_4 1 x1_5 1 x1_6 1 x1_7 1 x1_8 1 x1_9 = 1 ;
... 1172 omitted constraints ...

Augmented Proof:
pseudo-Boolean proof version 2.0
...
* Specifying a partial solution
soli x5_9 x2_7 ... (58 omitted literals) ...
* Unit propagation step
u 1 ~x4_0 >= 1 ;
...
conclusion BOUNDS 2 2
end pseudo-Boolean proof

Kernel Proof:
pseudo-Boolean proof version 2.0
...
* Specifying a full solution
soli x0_n x1_n ... (304 omitted literals) ...
* Derivation by cutting planes
red 1 ~x4_0 >= 1 ; ; begin
pol 8784 8778 + 8772 + 8766 + ... + 8133 13 * + 8085 13 * + end 8786
...
conclusion BOUNDS 2 : 8798 2
end pseudo-Boolean proof

Figure 6: (Top) MCCIS problem encoding for the pattern graph K3,3 and the target Petersen graph. (Bottom) An augmented proof generated by a solver, and a corresponding elaborated kernel proof; kernel annotations in bold in the original. When run on the kernel proof, CAKEPBGRAPH outputs: s VERIFIED MAX CCIS SIZE |CCIS| = 4. This corresponds to the conclusion in the proof, which claims that at least two of the six pattern vertices must be mapped to null.

Experiments
To validate our approach, we performed experiments on a cluster of machines with dual AMD EPYC 7643 processors, 2 TBytes of RAM, and a RAID array of solid state drives, running Ubuntu 22.04. We ran up to 40 jobs in parallel, and limited each individual process to 64 GBytes of RAM. Note that performance of the verification process is strongly affected by I/O and memory cache speeds, and so we do not expect running time measurements to be highly reproducible, but they should still be indicative of the feasibility of the approach and the slowdowns that one might encounter. We used the Glasgow Subgraph Solver (McCreesh, Prosser, and Trimble 2020) as the proof-producing solver for all experiments, and made small modifications so that it would lazily recover constraint IDs as required. The results are plotted on an instance-by-instance basis in Figure 7 and explained below.
For maximum clique, we took the 54 instances from the Second DIMACS Implementation Challenge (Johnson and Trick 1996) that Gocht et al. were able to check. We managed to produce proofs for and formally verify 50 of these instances; for the 4 instances that we could not verify, 3 were due to VERIPB taking over one week to check the proof files, and the final one to the 64 GByte memory limit for the verified checker.
Over the successfully checked instances, translating augmented proofs to kernel proofs took, on average, 18% longer than simply verifying the proofs, and produced proof files that were on average 2.26 times as large. However, verified checking of these kernel proofs was consistently faster than checking the original augmented proofs using VERIPB: the average running time was 3.8 times lower.
For subgraph isomorphism, we used the same subset of 1,226 small-to-medium-sized instances from the benchmark set in (Kotthoff, McCreesh, and Solnon 2016) as was studied by Gocht, McCreesh, and Nordström (2020). We were able to verify 417 satisfiable and 784 unsatisfiable instances; 13 instances failed due to memory limits on the verified checker, and 12 instances when the converted kernel proofs exceeded 500 GBytes in size. Performance-wise, running VERIPB and asking it to output a kernel proof was on average 27% slower than verification alone. Producing the verified encoding was never a significant cost in the process. Verifying kernel proofs was on average 2.4 times slower than verifying the original, augmented proofs; the former were on average 10.5 times larger than the latter.
For maximum common connected induced subgraph, we used a database of randomly generated instances (Conte, Foggia, and Vento 2007; De Santo et al. 2003), and ran the solver in clique reformulation mode. We were able to verify all 690 instances involving up to 20 vertices in each graph. Elaborating the proofs took on average 43% longer than verifying them using VERIPB, and the proofs were on average 14.7 times larger. However, verifying the kernel proofs using CAKEPB took on average only 9% longer than using VERIPB for the original, augmented proofs.
Across each problem family, producing formally verified encodings was always extremely cheap, and asking VERIPB to produce an elaborated kernel proof was never substantially more expensive than simply checking the augmented proof. This is to be expected: VERIPB already has to produce nearly all of the information needed for proof elaboration to check a proof anyway. Checking elaborated proofs was sometimes a little faster than checking the original, augmented proof, and sometimes a little slower, and we were able to formally check almost every proof that was amenable to unverified checking.

Conclusion
In this paper, we present the first efficient toolchain for formal end-to-end verification of state-of-the-art subgraph solving. Our design is easily adaptable, which opens up the possibility of bringing formal verification to other combinatorial problem domains where problem instances can be suitably represented using the expressivity of 0–1 integer linear programs. In fact, our formally verified CAKEPB proof checker equipped with a CNF frontend has also been used for SAT solving in the SAT Competition 2023 (Bogaerts et al.
2023b), supporting, also for the first time, efficient verified proof logging and checking for the full range of advanced techniques used in modern SAT solvers, such as cardinality reasoning, Gaussian elimination, and symmetry breaking. A future challenge of particular interest would be to provide a formally verified setting for the proof logging techniques for constraint programming developed in a sequence of papers by Elffers et al. (2020); Gocht, McCreesh, and Nordström (2022) and McIlree and McCreesh (2023). It would also be valuable to expand the reach of pseudo-Boolean proof logging to problems like (projected) model enumeration problems, which were dealt with in a somewhat ad-hoc fashion by Gocht et al. (2020). To further improve performance, it would be highly desirable to enhance the VERIPB elaborator with proof trimming, to be able to remove unnecessary proof steps before handing the kernel proof to CAKEPB. Currently, our system verifies all of the steps carried out by the solver to reach its conclusion. This is useful for detecting solver bugs, but for storing and distributing proofs a trimmed proof would suffice and could be much faster to verify. Another significant source of performance gains could come from switching from a text proof format to a binary format: although this would lose some human-readability, our experiments suggest that text parsing often forms a substantial portion of the elaboration and checking times.

[Figure 7: six log-log scatter plots; the top row compares kernel proof size (bytes) against augmented proof size (bytes), the bottom row compares CAKEPB time (s) against VERIPB time (s), for panels (a)–(c).]
Figure 7: Experiments using the Glasgow Subgraph Solver on (a) max clique, (b) subgraph isomorphism, and (c) max common connected induced subgraph problem instances. In the top row, comparisons of kernel and augmented proof sizes; in the bottom row, time comparisons for verified and unverified checking of kernel and augmented proofs, respectively. Crosses indicate failures due to space or memory limits.

Acknowledgments
Stephan Gocht and Jakob Nordström were supported by the Swedish Research Council grant 2016-00782, and Jakob Nordström also received funding from the Independent Research Fund Denmark grant 9040-00389B. Ciaran McCreesh was supported by a Royal Academy of Engineering research fellowship, and by the Engineering and Physical Sciences Research Council [grant number EP/X030032/1]. Magnus Myreen was supported by Swedish Research Council grant 2021-05165. Andy Oertel was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation. Yong Kiam Tan was supported by A*STAR, Singapore. Part of this work was carried out while taking part in the Dagstuhl workshops 22411 "Theory and Practice of SAT and Combinatorial Solving" and 23261 "SAT Encodings and Beyond", as well as in the extended reunion of the program "Satisfiability: Theory, Practice, and Beyond" in the spring of 2023 at the Simons Institute for the Theory of Computing at UC Berkeley.
References
Akgün, Ö.; Gent, I. P.; Jefferson, C.; Miguel, I.; and Nightingale, P. 2018. Metamorphic Testing of Constraint Solvers. In Proceedings of the 24th International Conference on Principles and Practice of Constraint Programming (CP '18), volume 11008 of Lecture Notes in Computer Science, 727-736. Springer.
Banković, M.; Drecun, I.; and Marić, F. 2023. A Proof System for Graph (Non)-Isomorphism Verification. Logical Methods in Computer Science, 19(1).
Biere, A.; Heule, M. J. H.; van Maaren, H.; and Walsh, T., eds. 2021. Handbook of Satisfiability, volume 336 of Frontiers in Artificial Intelligence and Applications. IOS Press, 2nd edition.
Bixby, R.; and Rothberg, E. 2007. Progress in Computational Mixed Integer Programming – A Look Back from the Other Side of the Tipping Point. Annals of Operations Research, 149(1): 37-41.
Bogaerts, B.; Gocht, S.; McCreesh, C.; and Nordström, J. 2023a. Certified Dominance and Symmetry Breaking for Combinatorial Optimisation. Journal of Artificial Intelligence Research, 77: 1539-1589. Preliminary version in AAAI '22.
Bogaerts, B.; McCreesh, C.; Myreen, M. O.; Nordström, J.; Oertel, A.; and Tan, Y. K. 2023b. Documentation of VeriPB and CakePB for the SAT Competition 2023. Available at https://satcompetition.github.io/2023/checkers.html.
Bogaerts, B.; McCreesh, C.; and Nordström, J. 2022. Solving with Provably Correct Results: Beyond Satisfiability, and Towards Constraint Programming. Tutorial at the 28th International Conference on Principles and Practice of Constraint Programming. Slides available at http://www.jakobnordstrom.se/presentations/.
Buss, S. R.; and Nordström, J. 2021. Proof Complexity and SAT Solving. In (Biere et al. 2021), chapter 7, 233-350.
Codel, C. R.; Avigad, J.; and Heule, M. J. H. 2023. Verified Encodings for SAT Solvers. In Proceedings of the 23rd Conference on Formal Methods in Computer-Aided Design (FMCAD '23), 141-151.
Conte, D.; Foggia, P.; and Vento, M. 2007. Challenging Complexity of Maximum Common Subgraph Detection Algorithms: A Performance Analysis of Three Algorithms on a Wide Database of Graphs. Journal of Graph Algorithms and Applications, 11(1): 99-143.
Cook, W.; Coullard, C. R.; and Turán, G. 1987. On the Complexity of Cutting-Plane Proofs. Discrete Applied Mathematics, 18(1): 25-38.
Cook, W.; Koch, T.; Steffy, D. E.; and Wolter, K. 2013. A Hybrid Branch-and-Bound Approach for Exact Rational Mixed-Integer Programming. Mathematical Programming Computation, 5(3): 305-344.
Cruz-Filipe, L.; Heule, M. J. H.; Hunt Jr., W. A.; Kaufmann, M.; and Schneider-Kamp, P. 2017. Efficient Certified RAT Verification. In Proceedings of the 26th International Conference on Automated Deduction (CADE-26), volume 10395 of Lecture Notes in Computer Science, 220-236. Springer.
Cruz-Filipe, L.; Marques-Silva, J. P.; and Schneider-Kamp, P. 2017. Efficient Certified Resolution Proof Checking. In Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS '17), volume 10205 of Lecture Notes in Computer Science, 118-135. Springer.
Cruz-Filipe, L.; Marques-Silva, J. P.; and Schneider-Kamp, P. 2019. Formally Verifying the Solution to the Boolean Pythagorean Triples Problem. Journal of Automated Reasoning, 63(3): 695-722.
De Santo, M.; Foggia, P.; Sansone, C.; and Vento, M. 2003. A Large Database of Graphs and Its Use for Benchmarking Graph Isomorphism Algorithms. Pattern Recognition Letters, 24(8): 1067-1079.
Elffers, J.; Gocht, S.; McCreesh, C.; and Nordström, J. 2020. Justifying All Differences Using Pseudo-Boolean Reasoning. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI '20), 1486-1494.
Elffers, J.; and Nordström, J. 2018. Divide and Conquer: Towards Faster Pseudo-Boolean Solving. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI '18), 1291-1299.
Fleury, M.; Blanchette, J. C.; and Lammich, P. 2018. A Verified SAT Solver with Watched Literals Using Imperative HOL. In Proceedings of the 7th ACM SIGPLAN International Conference on Certified Programs and Proofs (CPP '18), 158-171.
Garcia de la Banda, M.; Stuckey, P. J.; Van Hentenryck, P.; and Wallace, M. 2014. The Future of Optimization Technology. Constraints, 19(2): 126-138.
Gillard, X.; Schaus, P.; and Deville, Y. 2019. SolverCheck: Declarative Testing of Constraints. In Proceedings of the 25th International Conference on Principles and Practice of Constraint Programming (CP '19), volume 11802 of Lecture Notes in Computer Science, 565-582. Springer.
Gocht, S.; McBride, R.; McCreesh, C.; Nordström, J.; Prosser, P.; and Trimble, J. 2020. Certifying Solvers for Clique and Maximum Common (Connected) Subgraph Problems. In Proceedings of the 26th International Conference on Principles and Practice of Constraint Programming (CP '20), volume 12333 of Lecture Notes in Computer Science, 338-357. Springer.
Gocht, S.; McCreesh, C.; Myreen, M. O.; Nordström, J.; Oertel, A.; and Tan, Y. K. 2023. End-to-End Verification for Subgraph Solving: Supplementary Material. https://doi.org/10.5281/zenodo.10369401.
Gocht, S.; McCreesh, C.; and Nordström, J. 2020. Subgraph Isomorphism Meets Cutting Planes: Solving With Certified Solutions. In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI '20), 1134-1140.
Gocht, S.; McCreesh, C.; and Nordström, J. 2022. An Auditable Constraint Programming Solver. In Proceedings of the 28th International Conference on Principles and Practice of Constraint Programming (CP '22), volume 235 of Leibniz International Proceedings in Informatics (LIPIcs), 25:1-25:18.
Gocht, S.; and Nordström, J. 2021. Certifying Parity Reasoning Efficiently Using Pseudo-Boolean Proofs. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI '21), 3768-3777.
Guéneau, A.; Myreen, M. O.; Kumar, R.; and Norrish, M. 2017. Verified Characteristic Formulae for CakeML. In Proceedings of the 26th European Symposium on Programming (ESOP '17), volume 10201 of Lecture Notes in Computer Science, 584-610. Springer.
Heule, M. J. H.; Hunt Jr., W. A.; and Wetzler, N. 2013a. Trimming While Checking Clausal Proofs. In Proceedings of the 13th International Conference on Formal Methods in Computer-Aided Design (FMCAD '13), 181-188.
Heule, M. J. H.; Hunt Jr., W. A.; and Wetzler, N. 2013b. Verifying Refutations with Extended Resolution. In Proceedings of the 24th International Conference on Automated Deduction (CADE-24), volume 7898 of Lecture Notes in Computer Science, 345-359. Springer.
Heule, M. J. H.; and Kullmann, O. 2017. The Science of Brute Force. Communications of the ACM, 60(8): 70-79.
Johnson, D. S.; and Trick, M. A. 1996. Introduction to the Second DIMACS Challenge: Cliques, Coloring, and Satisfiability. In Cliques, Coloring and Satisfiability: Second DIMACS Implementation Challenge, volume 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1-10. American Mathematical Society.
Kotthoff, L.; McCreesh, C.; and Solnon, C. 2016. Portfolios of Subgraph Isomorphism Algorithms. In 10th International Conference on Learning and Intelligent Optimization (LION '16), Selected Revised Papers, volume 10079 of Lecture Notes in Computer Science, 107-122. Springer.
Kumar, R.; Mullen, E.; Tatlock, Z.; and Myreen, M. O. 2018. Software Verification with ITPs Should Use Binary Code Extraction to Reduce the TCB. In Proceedings of the 9th International Conference on Interactive Theorem Proving (ITP '18), volume 10895 of Lecture Notes in Computer Science, 362-369. Springer.
Lammich, P. 2020. Efficient Verified (UN)SAT Certificate Checking. Journal of Automated Reasoning, 64(3): 513-532. Extended version of paper in CADE 2017.
McConnell, R. M.; Mehlhorn, K.; Näher, S.; and Schweitzer, P. 2011. Certifying Algorithms. Computer Science Review, 5(2): 119-161.
McCreesh, C.; Prosser, P.; and Trimble, J. 2020. The Glasgow Subgraph Solver: Using Constraint Programming to Tackle Hard Subgraph Isomorphism Problem Variants. In Proceedings of the 13th International Conference on Graph Transformation (ICGT '20), volume 12150 of Lecture Notes in Computer Science, 316-324. Springer.
McIlree, M.; and McCreesh, C. 2023. Proof Logging for Smart Extensional Constraints. In Proceedings of the 29th International Conference on Principles and Practice of Constraint Programming (CP '23), volume 280 of Leibniz International Proceedings in Informatics (LIPIcs), 26:1-26:17.
Myreen, M. O.; and Owens, S. 2014. Proof-Producing Translation of Higher-Order Logic into Pure and Stateful ML. Journal of Functional Programming, 24(2-3): 284-315.
Shi, X.; Fu, Y.; Liu, J.; Tsai, M.; Wang, B.; and Yang, B. 2021. CoqQFBV: A Scalable Certified SMT Quantifier-Free Bit-Vector Solver. In Proceedings of the 33rd International Conference on Computer Aided Verification (CAV '21), volume 12760 of Lecture Notes in Computer Science, 149-171. Springer.
Slind, K.; and Norrish, M. 2008. A Brief Overview of HOL4. In Proceedings of the 21st International Conference on Theorem Proving in Higher Order Logics (TPHOLs '08), volume 5170 of Lecture Notes in Computer Science, 28-32. Springer.
Tan, Y. K.; Heule, M. J. H.; and Myreen, M. O. 2023. Verified Propagation Redundancy and Compositional UNSAT Checking in CakeML. International Journal on Software Tools for Technology Transfer, 25: 167-184. Preliminary version in TACAS '21.
Tan, Y. K.; Myreen, M. O.; Kumar, R.; Fox, A. C. J.; Owens, S.; and Norrish, M. 2019. The Verified CakeML Compiler Backend. Journal of Functional Programming, 29: e2:1-e2:57.
Wetzler, N.; Heule, M. J. H.; and Hunt Jr., W. A. 2014. DRAT-trim: Efficient Checking and Trimming Using Expressive Clausal Proofs. In Proceedings of the 17th International Conference on Theory and Applications of Satisfiability Testing (SAT '14), volume 8561 of Lecture Notes in Computer Science, 422-429. Springer.
2024
894
18,733
SAT-Based Techniques for Lexicographically Smallest Finite Models

Mikoláš Janota1, Choiwah Chow1, João Araújo2,3, Michael Codish4, Petr Vojtěchovský5
1Czech Technical University in Prague, Czechia
2Center for Mathematics and Applications (NOVA Math), Portugal
3Department of Mathematics, NOVA FCT, Portugal
4Ben-Gurion University of the Negev, Beer-Sheva, Israel
5University of Denver, USA
[email protected]

Abstract

This paper proposes SAT-based techniques to calculate a specific normal form of a given finite mathematical structure (model). The normal form is obtained by permuting the domain elements so that the representation of the structure is lexicographically smallest possible. Such a normal form is of interest to mathematicians as it enables easy cataloging of algebraic structures. In particular, two structures are isomorphic precisely when their normal forms are the same. This form is also natural to inspect as mathematicians have been using it routinely for many decades. We develop a novel approach where a SAT solver is used in a black-box fashion to compute the smallest representative. The approach constructs the representative gradually and searches the space of possible isomorphisms, requiring a small number of variables. However, the approach may lead to a large number of SAT calls and therefore we devise propagation techniques to reduce this number. The paper focuses on finite structures with a single binary operation (encompassing groups, semigroups, etc.). However, the approach is generalizable to arbitrary finite structures. We provide an implementation of the proposed algorithm and evaluate it on a variety of algebraic structures.

Introduction

Finite model finding of first-order or higher-order logic has a long-standing tradition in automated reasoning. A number of techniques have been researched in SAT (Claessen and Sörensson 2003), constraint programming (Audemard, Benhamou, and Henocque 2006; Zhang 1996; Zhang and Zhang 1995), and SMT (Reynolds et al. 2013a,b). In theorem proving and software verification, finite models are typically used to identify incorrectly stated theorems. In computational algebra, mathematicians use finite model finding to study fundamental algebraic structures.

This paper does not focus on calculating specific models but on providing a normal form for a given model. This is one of the most prevalent problems in mathematics, i.e., assigning a canonical representative to an equivalence class. For example, the canonical form of a rational fraction is the quotient with the common prime factors removed (reduced fraction); Jordan's canonical form for matrices assigns a matrix to an equivalence class of similar matrices; there are ways of assigning a canonical form to a graph so that any two graphs are isomorphic if and only if their canonical forms are the same; etc. But helping with decision problems is just one of the applications of canonical forms. When we want to enumerate all structures of a given type (e.g., all triangulated 3-manifolds) up to some size (e.g., on 11 vertices (Lutz 2008, 2009)), it suffices to generate the canonical forms and ignore all the rest. These are just a few examples, as the applications of canonical forms are countless, including applications to topics as far afield as chemistry (Weininger, Weininger, and Weininger 1989; Schneider, Sayle, and Landrum 2015).
The key feature of canonical systems of representatives is that two objects belong to the same equivalence class if and only if their canonical forms are equal. A widespread technique to assign canonical forms to mathematical objects is to associate each object in the class with a vector and then order all the vectors lexicographically: the canonical object is the object with the smallest vector. We will call this the lexicographically smallest representative, lexmin for short. In the constraint programming literature, the related term lex-leader is defined, cf. Walsh (2012); Peter et al. (2014). Lexmin for graphs is also extensively studied in the literature, cf. Babai and Luks (1983); Crawford et al. (1996).

In computational algebra, this idea naturally translates to concatenating the rows of a multiplication table into a single vector. This canonical form was used as early as 1955 to calculate all the distinct¹ semigroups of order 4. More recently, Jipsen maintains an online database of a variety of mathematical structures stored as lexmin (Jipsen 2016), and the GAP package Smallsemi enables calculating lexmin semigroups (Distler and Mitchell 2022).

Figure 1 shows a motivating example of a possible multiplication table for an operation ∗ together with its lexicographically smallest representative ⋄. It is relatively easy for a human to detect that ∗ is a quasigroup (aka Latin square); however, further properties are harder to see. In contrast, the multiplication table of ⋄ is much easier to comprehend: we see that the operation corresponds to addition modulo 7, which is in fact the unique group of order 7 (the cyclic group Z7); a mechanical check of this correspondence is sketched at the end of this section.

¹Two semigroups are distinct if they cannot be mapped to one another by an isomorphism or by an anti-isomorphism.

  ∗ | 1 2 3 4 5 6 7        ⋄ | 1 2 3 4 5 6 7
  --+--------------        --+--------------
  1 | 7 5 6 1 4 2 3        1 | 1 2 3 4 5 6 7
  2 | 5 3 1 2 6 7 4        2 | 2 3 4 5 6 7 1
  3 | 6 1 5 3 7 4 2        3 | 3 4 5 6 7 1 2
  4 | 1 2 3 4 5 6 7        4 | 4 5 6 7 1 2 3
  5 | 4 6 7 5 2 3 1        5 | 5 6 7 1 2 3 4
  6 | 2 7 4 6 3 1 5        6 | 6 7 1 2 3 4 5
  7 | 3 4 2 7 1 5 6        7 | 7 1 2 3 4 5 6

Figure 1: (D, ∗) and its lexmin (D, ⋄) for D = {1..7}.

Developing efficient algorithms for calculating the lexmin form is paramount in the field of computational algebra:
• It enables presenting a concrete algebra in a familiar way to researchers.
• Computational algebra systems, such as GAP (GAP4), contain a large number of packages for handling algebras in specific forms, and lexmin provides a uniform exchange format between these packages.
• Lexmin provides a uniform way of storing and recalling algebras. The form is especially interesting for prefix trees (tries) since, inherently, many algebras will share the same prefix in the lexmin form.

This paper presents the following contributions.
• We develop a SAT-based algorithm that enables calculating the normal form on the fly, rather than working with an explicit representation of the target normal form.
• We design a variety of propagation techniques that enable avoiding SAT calls in a large number of cases, which has proven indispensable in many real-world problems.
• We provide a prototype implementation of the proposed algorithm, using state-of-the-art SAT solvers in a black-box fashion. This prototype is evaluated on a number of algebras that mathematicians deal with on a daily basis.
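Returning to the motivating example of Figure 1: the claim that ⋄ is addition modulo 7 can be checked mechanically. The short Python sketch below is our own illustration (the function name z7 is not from the paper); it regenerates the ⋄ table of Figure 1 row by row.

    def z7(a, b):
        # addition modulo 7 on the domain {1..7}: shift to {0..6}, add, shift back
        return (a + b - 2) % 7 + 1

    for a in range(1, 8):
        print(*(z7(a, b) for b in range(1, 8)))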
Preliminaries

Throughout the paper we focus on finite mathematical structures with a single binary operation, hereafter referred to as magmas (the term groupoid is also used in the literature). For instance, any finite group or semigroup is a magma. Magmas are denoted by a pair (D, ◦), where D is the domain and ◦ a binary operation on D. We rely on the well-established term of isomorphism.

Definition 1 (isomorphism). A bijection f : D1 → D2 is an isomorphism from a magma (D1, ∗) to (D2, ⋄) if f(a ∗ b) = f(a) ⋄ f(b), for all a, b ∈ D1. Two magmas are isomorphic iff there exists at least one isomorphism between them.

Throughout the paper, we consider a finite domain D = {1, . . . , n} for n ∈ N+. The goal is to obtain the lexicographically smallest (D, ⋄) isomorphic to the given (D, ∗).

Definition 2 (⪯). Define a total order ⪯ on magmas on domain D as follows. For magmas A = (D, ∗) and B = (D, ⋄), we have A ⪯ B iff 1∗1, 1∗2, . . . , 1∗n, 2∗1, . . . , n∗n is lexicographically smaller than or equal to 1⋄1, 1⋄2, . . . , 1⋄n, 2⋄1, . . . , n⋄n.

Definition 3 (LEXMIN). For a magma A = (D, ∗), the magma B = (D, ⋄) is the lexicographically smallest representative (lexmin) of A iff B is the ⪯-least element among all magmas (D, ⋄′) isomorphic to A. The LEXMIN problem is finding the lexicographically smallest representative of A.

In several cases we rely on the notion of an idempotent, which is invariant under isomorphism.

Definition 4 (idempotent). For a magma (D, ∗), an element a ∈ D is an idempotent iff a ∗ a = a.

Observation 5. Let A = (D1, ∗) and B = (D2, ⋄) be isomorphic magmas under some isomorphism f, and let a be an idempotent of A; then f(a) is an idempotent of B.

Example 6. This example shows the multiplication table for a small magma (D, ∗) with D = {1, 2}, together with an extensive representation as a set of assignments. On the right-hand side, we see its lexicographically smallest representative ⋄. The corresponding isomorphism swaps 1 and 2, i.e., f(1) = 2, f(2) = 1, alternatively represented as a permutation in the cyclic notation (1 2).

  ∗ | 1 2      1 ∗ 1 = 1    2 ⋄ 2 = 2      ⋄ | 1 2
  --+----      1 ∗ 2 = 2    2 ⋄ 1 = 1      --+----
  1 | 1 2      2 ∗ 1 = 2    1 ⋄ 2 = 1      1 | 1 1
  2 | 2 2      2 ∗ 2 = 2    1 ⋄ 1 = 1      2 | 1 2

Note that the isomorphism not only changes the contents of the table but also permutes rows and columns. In this example, obtaining ⋄ from ∗ means swapping rows 1 and 2, columns 1 and 2, and values 1 and 2 in the table. Example 6 also illustrates that properties based on equality are preserved: both tables contain a row with all elements distinct, have 2 idempotents, etc. This is a more general property, which we state here informally.²

Observation 7. Any property of A = (D, ⋄) that does not rely on the names of elements of D is preserved in all isomorphic copies of A.

Note that in the small Example 6 there is a unique isomorphism from the input magma to its lexmin, but in general there may be many, despite the fact that the lexmin is unique. We conclude the preliminaries by relating isomorphism to lexicographic representatives.

Observation 8. Magmas A = (D, ∗) and B = (D, ⋄) are isomorphic iff their lexicographically smallest representatives are equal.

The isomorphism problem for finite magmas is graph-isomorphism-hard (GI-hard) even if we consider only semigroups (Zemljachenko, Korneenko, and Tyshkevich 1982). Further, deciding whether an incidence matrix of a graph is lexmin is NP-hard (Babai and Luks 1983), despite the fact that GI is believed to be easier than NP. Therefore, we do not expect the LEXMIN problem for general magmas to be computationally easy.

²More precisely, a set S defined by an FOL formula in a magma A corresponds to the set f(S) in B for an isomorphism f from A to B, cf. Theorem 1.1.10 in (Marker 2002).
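Before turning to SAT, it may help to see Definitions 2 and 3 in executable form. The Python sketch below is ours, not part of the paper's tool: it computes the lexmin of a tiny magma by enumerating all |D|! bijections f and taking the ⪯-least transported table a ⋄ b = f(f⁻¹(a) ∗ f⁻¹(b)). It is hopelessly slow beyond very small domains, which is precisely why the SAT-based construction below is needed. Tables are 0-indexed lists of rows.

    from itertools import permutations

    def isomorphic_copy(table, f):
        # Transport * along the bijection f: a <> b = f(f^-1(a) * f^-1(b)).
        n = len(table)
        inv = [0] * n
        for i, fi in enumerate(f):
            inv[fi] = i
        return [[f[table[inv[a]][inv[b]]] for b in range(n)] for a in range(n)]

    def lexmin_bruteforce(table):
        # Definition 3 taken literally: the least copy over all |D|! bijections.
        n = len(table)
        best = None
        for f in permutations(range(n)):
            copy = isomorphic_copy(table, f)
            key = [v for row in copy for v in row]   # row-major vector of Definition 2
            if best is None or key < best[0]:
                best = (key, copy)
        return best[1]

    # Example 6 with elements renamed {1,2} -> {0,1}:
    print(lexmin_bruteforce([[0, 1], [1, 1]]))       # -> [[0, 0], [0, 1]]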
Explicit Encoding

A straightforward approach to the lexmin problem is to encode into SAT that a target (unknown) magma (D, ⋄) is isomorphic to a given magma (D, ∗). Then, we can apply standard algorithms for finding the lexicographically smallest magma (D, ⋄), cf. (Nadel and Ryvchin 2016; Trentin 2019; Petkovska et al. 2016; Marques-Silva et al. 2011). Effectively, (D, ⋄) is represented in 1-hot encoding. First, represent an isomorphism f : D → D by introducing Boolean variables x_{i→j} meaning that f(i) = j, for i, j ∈ D. Second, introduce additional Boolean variables x_{i,j:v} meaning that i ⋄ j = v. To ensure that the x_{i→j} variables represent a bijection, generate cardinality constraints (converted to CNF by standard means (Roussel and Manquinho 2021)):

  X(D) := { Σ_{j∈D} x_{j→i} = Σ_{j∈D} x_{i→j} = 1 | i ∈ D }   (1)

To ensure that the x_{i,j:v} represent an isomorphic (D, ⋄), generate implications covering possible mappings between rows, columns, and values:

  (x_{r→r′} ∧ x_{c→c′} ∧ x_{r∗c→v′}) ⇒ x_{r′,c′:v′},   for r, r′, c, c′, v′ ∈ D   (2)

Note that for row r and column c, the value r ∗ c is given. An advantage is that we can easily apply any bit-level lexicographic optimization algorithm over the vector of variables representing the magma (D, ⋄), in the following order: x_{1,1:n}, x_{1,1:n−1}, . . . , x_{1,1:1}, x_{1,2:n}, . . . , x_{n,n:1}. A significant disadvantage is the sheer size of the encoding, which involves Θ(|D|⁵) clauses. Therefore we propose a solution where the explicit representation of (D, ⋄) is not necessary.

Gradual Construction

Instead of introducing variables for the unknown (D, ⋄), we construct it gradually, starting from its top-left corner, continuing by filling the first row, then the second, and so on. Here we avail of the concept of an isomorphic copy, which is a magma induced by an isomorphism.

Definition 9 (isomorphic copy). Consider a magma (D1, ∗) and a bijection f : D1 → D2; then the isomorphic copy (D2, ⋄)_f is defined by a ⋄ b = f(f⁻¹(a) ∗ f⁻¹(b)). In the remainder of the paper, we omit the subscript f from (D2, ⋄)_f whenever it is clear from the context that f is present.

The intuition behind an isomorphic copy is that to obtain the value a ⋄ b, we first obtain the pre-images of a and b, then apply the (known) operation ∗ to the pre-images in the context of (D1, ∗), and finally map the result back to (D2, ⋄). This is well-defined because f is a bijection.

Observation 10. Magmas A = (D1, ∗) and B = (D2, ⋄) are isomorphic iff there exists a bijection f : D1 → D2 such that B is an isomorphic copy of A by f.

To construct (D2, ⋄), we will need to encode constraints of the shape r ⋄ c = v; e.g., 1 ⋄ 1 = 1 means placing 1 in the top-left corner of the multiplication table. Since (D, ⋄) must be an isomorphic copy of (D, ∗), the constraint r ⋄ c = v can be written as follows:

  f(f⁻¹(r) ∗ f⁻¹(c)) = v   (3)

where f is an unknown permutation of D. As in the previous encoding, we encode f as Boolean variables x_{i→j} coupled with the appropriate cardinality constraints (see (1)).

Algorithm 1: Calculate lexmin (D, ⋄) for a given (D, ∗) by gradual construction.
  A ← ∅                                        // empty set of assignments
  for r, c ∈ 1..|D|, 1..|D| do
      v ← 1
      while ¬SAT(X(D) ∪ enc(A ∪ {r ⋄ c = v})) do
          v ← v + 1
      A ← A ∪ {r ⋄ c = v}                      // extend A
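A runnable sketch of Algorithm 1 follows. To keep it self-contained, the SAT oracle is replaced by a brute-force feasibility check over all permutations (viable only for tiny |D|); a real implementation would instead encode (1) together with enc, defined in (4) below, into CNF and query an incremental SAT solver. All names are illustrative and not taken from the paper's implementation; values start at 0 because the tables are 0-indexed.

    from itertools import permutations

    def feasible(table, assignments):
        # Is there a bijection f whose isomorphic copy satisfies every r <> c = v?
        # This plays the role of the SAT(X(D) u enc(...)) call in Algorithm 1.
        n = len(table)
        for f in permutations(range(n)):
            inv = [0] * n
            for i, fi in enumerate(f):
                inv[fi] = i
            if all(f[table[inv[r]][inv[c]]] == v for (r, c, v) in assignments):
                return True
        return False

    def lexmin_gradual(table):
        n = len(table)
        A = []                                   # fixed cells, in row-major order
        for r in range(n):
            for c in range(n):
                v = 0
                while not feasible(table, A + [(r, c, v)]):
                    v += 1                       # each failed test is one UNSAT call
                A.append((r, c, v))              # the cell's value is now fixed
        out = [[0] * n for _ in range(n)]
        for r, c, v in A:
            out[r][c] = v
        return out

    print(lexmin_gradual([[0, 1], [1, 1]]))      # -> [[0, 0], [0, 1]], as in Example 6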
The equality (3) yields a set of implications covering all possible values of f:³

  enc(r ⋄ c = v) := { (x_{i→r} ∧ x_{j→c}) ⇒ x_{i∗j→v} | i, j ∈ D }   (4)

Algorithm 1 shows how the lexmin representative is calculated by maintaining a set A of equalities of the form r ⋄ c = v for which we already know that they must hold in the multiplication table of (D, ⋄) (this is a loop invariant of the outer loop). The inner loop attempts to extend the set of assignments A for the next cell of the multiplication table, going from 1 to higher values. The call to the function enc conjoins the encoding of the assignments according to equation (4) along with the bijection constraints (1). The algorithm first tries placing 1 in the top-left corner, and if that is possible it moves on to the next column. Otherwise, it tries placing 2 in the top-left corner, and so forth. Once it succeeds in placing a value in a cell, the value is fixed.

The algorithm leads to O(|D|³) SAT calls. The permutation f, represented by the Boolean variables x_{i→j}, spans all permutations and therefore enables the creation of any isomorphic copy of (D, ∗) on the domain D. This also justifies termination of the inner loop, because one of the SAT calls is bound to succeed since the set of isomorphic copies is always nonempty; it contains, for instance, the input magma itself. Since |A| ∈ O(|D|²) and (4) requires O(|D|²) clauses, Algorithm 1 requires space for O(|D|⁴) clauses.

³The implementation avoids repeated and tautologous clauses.
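For concreteness, here is a sketch of how (1) and (4) can be turned into explicit CNF clauses in DIMACS-style integer form. The variable numbering and the pairwise at-most-one encoding are our own choices for illustration; the paper leaves the cardinality encoding to standard techniques (Roussel and Manquinho 2021).

    def var(i, j, n):
        # Boolean variable x_{i->j} (meaning f(i) = j) as a positive DIMACS index.
        return i * n + j + 1

    def bijection_clauses(n):
        # CNF for (1): every i has exactly one image and exactly one preimage.
        clauses = []
        for i in range(n):
            clauses.append([var(i, j, n) for j in range(n)])        # >= 1 image
            clauses.append([var(j, i, n) for j in range(n)])        # >= 1 preimage
            for j in range(n):
                for k in range(j + 1, n):
                    clauses.append([-var(i, j, n), -var(i, k, n)])  # <= 1 image
                    clauses.append([-var(j, i, n), -var(k, i, n)])  # <= 1 preimage
        return clauses

    def enc_cell(table, r, c, v):
        # CNF for (4): (x_{i->r} & x_{j->c}) => x_{(i*j)->v}, for all i, j.
        n = len(table)
        return [[-var(i, r, n), -var(j, c, n), var(table[i][j], v, n)]
                for i in range(n) for j in range(n)]

Feeding bijection_clauses(n) plus the enc_cell clauses for all fixed cells to any incremental SAT solver then realizes one iteration of the inner loop of Algorithm 1.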
Efficiency Improvements

Algorithm 1 faces two major pitfalls: a high number of SAT calls, and hard individual SAT calls. The upper bound of O(|D|³) on SAT calls in Algorithm 1 is tight; for instance, for quasigroups (aka Latin squares) it is also Ω(|D|³).⁴ The second issue, where an individual SAT call might be too hard, is potentially even more worrisome. Indeed, we have reason to believe that some SAT calls will be hard due to an underlying pigeonhole principle. For instance, if the original magma (D, ∗) does not contain any element more than k times on any given row, the same must hold for the target magma (D, ⋄). Then, for the SAT solver, proving that it cannot place an element for the (k + 1)-th time on the same row is indeed reminiscent of the pigeonhole principle formulas, which are well known to be difficult for SAT (and resolution in general) (Haken 1985; de Rezende et al. 2020). Such hard SAT calls could get Algorithm 1 simply stuck on a single cell.

Here we focus on designing new propagation techniques that let us bypass calls to the SAT solver in specific scenarios. We focus mainly on techniques that rely on counting, because counting is a famous Achilles' heel for modern SAT solvers. We begin with a technique that enables, in some cases, identifying the first row.

⁴Each row of a quasigroup contains all the elements of D; therefore each row requires n(n−1)/2 SAT calls, as v calls are needed for a cell containing the value v.

Identification of the First Row

Recall that any row r of the original magma (D, ∗) must be projected to some row r′ = f(r) in the target magma. Here we show that in certain cases it is possible to identify the candidates that might be mapped to the first row, i.e., we construct a set C1 ⊆ D such that f(a) = 1 only if a ∈ C1. This is encoded into the SAT solver as a set of unit clauses {{¬x_{a→1}} | a ∉ C1} before Algorithm 1 starts.

Suppose that 4 ∗ x = 4 for all x ∈ D, i.e., the 4th row is entirely filled with 4's. If 4 is renamed to 1, i.e., we pick an isomorphic copy with f(4) = 1, the first row of ⋄ becomes all 1's, i.e., the lexicographically smallest first row possible. We generalize this idea to find candidates for the first row of ⋄.

Definition 11. Let A = (D, ∗) be a magma with some idempotents. The idempotent apex of A is the largest value of |{x ∈ D | e ∗ x = e}| over the idempotents e ∈ D of A.

Possible rows that can be mapped to the first row are obtained by calculating, for each row r of ∗ that contains an idempotent, how many times r appears in it, i.e., o_r := |{c ∈ D | r ∗ c = r}| if r ∗ r = r. We claim that only a row that maximizes this number can become the first row in the smallest representative ⋄, i.e., f(r) = 1 implies that o_r is the apex of the input magma. If the input magma does not contain any idempotents, this technique is not applied. Note that in the example of Figure 1 only row 4 contains an idempotent, and therefore it necessarily must become the first one. We proceed with the correctness proof of this statement.

For succinctness we introduce the following notation. We write [(D, ∗)] for the set of isomorphic copies (D, ⋄) isomorphic to (D, ∗). We write ↓(D, ∗) for the lexicographically smallest representative according to the (by-rows) ordering ⪯. We write 1 ⋄ {1, . . . , k} = {1} as a shorthand for 1 ⋄ i = 1, for i ∈ 1..k, which effectively means that the first k columns of the first row of ⋄ are equal to 1.

Proposition 12. Let A = (D, ∗) be a magma with idempotents and idempotent apex k. Let Mk := {(D, ⋄) ∈ [(D, ∗)] | 1 ⋄ {1, . . . , k} = {1}}. Then
1. Mk ≠ ∅;
2. ↓(D, ∗) ∈ Mk.

Proof. Let e ∈ D be such that e ∗ e = e and D0 := {x ∈ D | e ∗ x = e} has size k. Pick g, a permutation of D, such that g(D0) = {1, . . . , k} and g(e) = 1. Define on D the following operation: x ⋄ y := g(g⁻¹(x) ∗ g⁻¹(y)), for all x, y ∈ D. For all x ∈ {1, . . . , k}, we have 1 ⋄ x = g(g⁻¹(1) ∗ g⁻¹(x)) = g(e ∗ g⁻¹(x)) = g(e), because g⁻¹(x) ∈ D0 and e ∗ a = e for all a ∈ D0. This proves that 1 ⋄ x = g(e) = 1 for all x ∈ {1, . . . , k}. In addition, x ⋄ y := g(g⁻¹(x) ∗ g⁻¹(y)) implies (replacing x with g(x) and y with g(y)) that g(x) ⋄ g(y) = g(g⁻¹(g(x)) ∗ g⁻¹(g(y))) = g(x ∗ y). This proves that g is an isomorphism of the magmas (D, ⋄) and (D, ∗). Therefore (D, ⋄) ∈ Mk, and the first claim follows. Regarding the second claim, suppose that (D, ×) is a lexmin of (D, ∗). Since Mk is not empty, we must have 1 × j = 1 for all j in 1, . . . , i and some i ≥ k. Since the idempotent apex is preserved by isomorphism (see Observation 7), we have i ≤ k. Hence i = k and (D, ×) is in Mk.
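In code, the idempotent apex of Definition 11 and the candidate set C1 are straightforward to compute. The sketch below uses our own naming and 0-indexed tables (so "row 1" is row 0); it returns the rows allowed to map to the first row.

    def first_row_candidates(table):
        # o_r = |{c : r*c = r}| for rows r containing an idempotent (r*r = r);
        # only rows attaining the maximum (the idempotent apex) may map to row 0.
        n = len(table)
        scores = {r: sum(1 for c in range(n) if table[r][c] == r)
                  for r in range(n) if table[r][r] == r}
        if not scores:
            return set(range(n))        # no idempotents: the technique is skipped
        apex = max(scores.values())
        return {r for r, o in scores.items() if o == apex}

For the ∗ table of Figure 1, only row 4 survives, so the corresponding unit clauses fix f(4) = 1 before the search starts.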
Budgeting

Next, we describe a technique that is invoked for every SAT call of Algorithm 1. Roughly speaking, each element a ∈ D is assigned a budget, which is decremented whenever a is placed in the target table. SAT calls r ⋄ c = v with values v that have 0 budget are not invoked (and are deemed unsatisfiable). We consider budgets per row/column or for the whole table. In the context of constraint programming, similar propagation techniques are abundantly used for global constraints (Peter et al. 2014, Chapter 3).

For intuition, consider a situation where each row of the multiplication table of ∗ contains at most one occurrence of any given element (as in the example of Figure 1). Then the same property will hold in the rows of ⋄, by Observation 7. This means that if Algorithm 1 has placed an element a in a certain row, it does not need to try placing it in the same row again. This enables the algorithm to skip SAT calls on values that are no longer possible (in that row).

This idea is readily generalized to an arbitrary number of occurrences. Define o∗(r, a) = |{c | r ∗ c = a, c ∈ D}| and calculate max{o∗(r, a) | r, a ∈ D} to give a budget for an arbitrary element in an arbitrary row of (D, ⋄). The same can be applied to columns and to the total number of occurrences in the table. This is especially useful for quasigroups, where each element appears precisely once in each row/column.

The budget calculated as described above is an upper bound, which can sometimes be improved. Consider the case when the first row was uniquely identified by the technique outlined in the previous section. Then we have established that f(k) = 1 for some k ∈ D, for any f yielding the lexmin copy. This enables splitting the budgets for the element 1 and for the rest of the elements according to the following equalities:

  max{o∗(r, k) | r ∈ D} = max{o⋄(r, 1) | r ∈ D}   (5)
  max{o∗(r, a) | a ≠ k ∧ r, a ∈ D} = max{o⋄(r, a) | a ≠ 1 ∧ r, a ∈ D}   (6)

Row Invariants

As shown above, the budgeting technique can benefit from knowing which element has been mapped to the first row. More generally, once it is established that f(k) = j for some k, j ∈ D, it must hold that the number of occurrences of k in (D, ∗) equals the number of occurrences of j in the copy (D, ⋄). But how to establish such a correspondence? Note that the variables x_{i→j} determine the permutation on the elements of D, but this permutation may change over the course of the algorithm.

From the definition of an isomorphic copy (Definition 9), the contents of a row of the original table of (D, ∗) must correspond to the contents of some row of the table of (D, ⋄). More precisely, the bag of elements [r ∗ c | c ∈ D] is equal to the bag of elements [f(r) ⋄ c | c ∈ D]. In some cases, this lets us unequivocally identify that a row r in the original magma maps to a row r′ in the isomorphic copy. This is done by calculating invariants (properties invariant under isomorphism) and matching pairs of rows with unique invariants. Currently, we use the following invariants, bundled into a single one. Similar invariants have been used before for isomorphism testing (Araújo, Chow, and Janota 2021, 2022; Nagy and Vojtěchovský 2018).
• |{r ◦ c = c | c ∈ D}|, for fixed r ∈ D and ◦ ∈ {∗, ⋄}
• |{r ◦ c = r | c ∈ D}|, for fixed r ∈ D and ◦ ∈ {∗, ⋄}
• |{r ◦ r = r}|, for fixed r ∈ D and ◦ ∈ {∗, ⋄}
• define g_r(a) = r ◦ a and m_r(a) as the minimal k such that g_r^k(a) = g_r^j(a) for some j < k; take the bag [m_r(c) | c ∈ D] as the invariant, for fixed r and ◦ ∈ {∗, ⋄}

For the example in Figure 1, only row 4 has 7 columns c such that 4 ∗ c = c and m_4(c) = 1. The invariants are used in Algorithm 1 as follows (see also the sketch after this subsection). Each time a row r of (D, ⋄) is entirely filled, its invariant is calculated, and if there is a unique row r′ in the input table (D, ∗) with the same invariant, we set f(r′) = r, add the corresponding unit clause {x_{r′→r}}, and recalculate the budgets.

We also exploit invariants even if they do not give us a unique correspondence of rows. In the case that an invariant is shared by k rows in ∗ and it already appears k times in the partially filled copy ⋄, subsequent rows will never be mapped to the ones that gave rise to the invariant in question. More concretely, if there is a set of rows R ⊆ D with |R| = k that corresponds to a certain invariant I, and the invariant I already appears k times in the first r rows of ⋄, then f(r′) ≠ j for j ∈ R and r′ > r. In the implementation, the corresponding unit clauses are inserted into the SAT encoding once that takes place. We remark that the same technique could be applied to columns, but it would not be useful since columns are never complete until the very end.
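The following sketch shows one way to compute a global row budget and the bundled row invariant described above; the code and the tuple layout of the invariant are our own illustrative choices, not the paper's implementation.

    from collections import Counter

    def global_row_budget(table):
        # max occurrences of any single element within any row of (D,*)
        return max(max(Counter(row).values()) for row in table)

    def row_invariant(table, r):
        n = len(table)
        fixed = sum(1 for c in range(n) if table[r][c] == c)   # |{c : r o c = c}|
        const = sum(1 for c in range(n) if table[r][c] == r)   # |{c : r o c = r}|
        idem = int(table[r][r] == r)                           # is r o r = r ?
        def m(a):
            # minimal k with g^k(a) = g^j(a) for some j < k, where g(a) = r o a
            seen, x, k = {}, a, 0
            while x not in seen:
                seen[x] = k
                x = table[r][x]
                k += 1
            return k
        return (fixed, const, idem, tuple(sorted(m(c) for c in range(n))))

Matching a freshly completed row of ⋄ against the rows of ∗ then reduces to comparing these tuples and, on a unique hit, asserting the corresponding unit clause.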
Mid-Row Budgeting Refinement

The techniques described in the previous section enable refining budgets after a row of the target table is filled. Here we show that this can also be done mid-row. We propose a cheap technique that is easy to implement, where we split the rows of ∗ into rows containing an idempotent and rows that do not. Note that row r contains an idempotent iff r ∗ r = r. This lets us calculate three types of budgets: (1) for all rows of ∗; (2) for rows of ∗ containing an idempotent; (3) for rows of ∗ not containing an idempotent. When Algorithm 1 starts filling a row r, it does not know into which group the row falls and therefore starts with the global budget. Once the r-th position is filled, the budget can be refined accordingly. In the row-based traversal of the first row, the refinement happens once the top-left corner has been filled (the first column of the first row).

Upper Bound by Last Value

A simple improvement is obtained by inspecting the model obtained from satisfiable SAT calls. Even though Algorithm 1 only imposes assignments on the table ⋄ for those cells that have been traversed so far, any SAT model represents a permutation of all the elements in the domain D, from which one can infer the rest of the table of ⋄. The remainder (the untraversed part) of the table is not necessarily lexicographically smallest, but it gives us an upper bound. This means that for each cell (r, c) ∈ D × D there is always a tentative value v_u for which we already have a witnessing permutation. This lets us avoid the SAT call for the query r ⋄ c = v for any v ≥ v_u. This upper bound is also used in the different search strategies described in the upcoming section. We remark that an analogous technique has also been used for explicit representation-based calculation of the lexicographically smallest SAT assignment (Knuth 2015).

Search Strategies

Algorithm 1 performs |D| tests for a single cell of the table ⋄ in the worst case. It is tempting to apply standard techniques for minimization, such as binary search. However, these are not directly applicable because the behavior is not monotone; e.g., it might be possible to place 3 and 7 in a specific cell, but not 5. Nevertheless, monotone behavior can be obtained by constructing SAT queries over a disjunction of values. Hence, instead of querying r ⋄ c = v, we query ⋁_{v∈V} r ⋄ c = v over some set V ⊆ D. In terms of the SAT encoding, one could calculate a disjunction over the encoding for a single value (equation (4)), but we are able to avail of the common part, and r ⋄ c ∈ V is encoded as follows:

  { (x_{i→r} ∧ x_{j→c}) ⇒ ⋁_{v∈V} x_{i∗j→v} | i, j ∈ D }   (7)

This approach has monotone behavior in the sense that if r ⋄ c ∈ V is satisfiable, then r ⋄ c ∈ V′ is also satisfiable for any V ⊆ V′. This enables us to use standard MaxSAT iterative techniques, where the basic Algorithm 1 is in fact a linear UNSAT-SAT strategy. Additionally, taking into account the values obtained from satisfiable calls enables improving the upper bound for linear SAT-UNSAT or binary search. In our experiments, standard binary search did not perform well, because it still requires Ω(log₂ |D|) SAT calls to prove an optimum. Therefore we apply a modified binary search where we first test whether the optimum has already been reached. In the case that the optimum has not been reached, the upper bound is updated. If the upper bound reduced the search space by a factor of 2, we simply recur. If the upper bound falls into the top half of the possible values, another SAT call is issued.
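As a simplified illustration (not the exact mlex strategy, which additionally refreshes the upper bound from each SAT model), the following sketch finds the smallest feasible value for a cell given a monotone disjunctive oracle sat_in(V), standing for a query of the form (7), preceded by an initial "optimum already reached?" test.

    def find_min_value(sat_in, lb, ub):
        # Smallest v in [lb, ub] with r <> c = v feasible, assuming some feasible
        # value exists in [lb, ub] and sat_in(V) realizes the monotone query (7).
        while lb < ub:
            if sat_in([lb]):            # cheap test: is the optimum already lb?
                return lb
            lb += 1                     # lb was infeasible
            mid = (lb + ub) // 2
            if sat_in(list(range(lb, mid + 1))):
                ub = mid                # some feasible value exists in [lb, mid]
            else:
                lb = mid + 1            # all of [lb, mid] is infeasible
        return lb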
Table 1: FOL definitions of the used algebraic structures.

  Structure   | Definition in FOL
  Groups      | x ∗ (y ∗ z) = (x ∗ y) ∗ z,  x ∗ e = x,  e ∗ x = x,  x ∗ x′ = e,  x′ ∗ x = e
  Loops       | x ∗ y = x ∗ z → y = z,  y ∗ x = z ∗ x → y = z,  e ∗ x = x,  x ∗ e = x
  Quasigroups | x ∗ y = x ∗ z → y = z,  y ∗ x = z ∗ x → y = z
  Semigroups  | x ∗ (y ∗ z) = (x ∗ y) ∗ z
  Magmas      | no requirement

Experiments

The experiments are run on an Intel Xeon CPU E5-2630 v2 2.6 GHz ×24 computer with 64 GB RAM. We call our tool mlex, and it supports two SAT solvers, minisat (Eén and Sörensson 2003) and cadical (Biere 2017). Unless otherwise stated, minisat is used in our experiments. Both SAT solvers are used incrementally, and cadical is used via the IPASIR interface (Balyo et al. 2016). We excluded the Explicit Encoding from the evaluation since it led to unwieldy memory consumption (dozens of gigabytes even for small problems). The GAP package Smallsemi (Distler and Mitchell 2022) provides a function to calculate the lexmin semigroup; it is not included in the evaluation because the ordering used traverses the table by the diagonal first, and the implementation suffers from timeouts and large memory consumption even on small problem instances (order 20). Hence, the evaluation is based on Algorithm 1 and its extensions described in the Efficiency Improvements section.

The tool was evaluated on several popular algebraic structures (algebras), defined in Table 1. In this table, "e" is a constant, "∗" is a binary operation, and "′" a unary one; all clauses are implicitly universally quantified. Even though mlex currently supports only a single binary operation, it can handle all these algebras. This is because in many finite algebraic structures, such as those listed here, the constant and the unary function are uniquely determined by the binary operation. Hence, they can be removed from the inputs to mlex.

The evaluation was performed on randomly generated samples from five algebraic structures: groups, loops, general magmas, quasigroups, and semigroups. For groups, we randomly pick the groups given by the AllSmallGroups function in GAP. For magmas and semigroups, we generate them with the help of GAP functions such as Random. For quasigroups and loops, we use the RandomQuasigroup and RandomLoop functions in the LOOPS package in GAP. We make sure the models in each structure do not belong to a sub-structure in the list above; for example, the magmas we use are not semigroups or quasigroups. We consider a total of 210 random samples of the five algebraic structures listed in Table 1, of orders 16 to 128 in increments of 16. In addition, we include random samples of 5 magmas of each of the orders 192 and 256. Finally, a timeout of 30 minutes is used for calculating the lexmin copy of each model.

[Figure 2: Performance of mlex with different options. Cactus plot of instances solved (0-175) against CPU time in seconds (0-1750), with one curve per configuration: all enhancements, w/o first row identification, w/o invariants, w/o mid-row budgeting, w/ strategy linear-unsat-sat, w/o budgeting, and the basic algorithm.]

Ablation Study of Techniques

We test the introduced techniques in an ablation study.
We consider the basic Algorithm 1, a version with all improvements turned on, and the effect of turning off each one of them individually. For search strategies, we compare linear-unsat-sat (lus) and modified binary search (bin2). Figure 2 shows a cactus plot for the ablation study. Although all the techniques lead to an improvement in the tool, the most significant is the use of budgeting, which confirms our suspicion that hard SAT calls might occur due to counting arguments. Interestingly, the binary search technique also has a significant impact. Turning off the other techniques does not have a significant impact on the number of solved instances; however, there are specific classes of problems that cannot be solved without using all the techniques. Also, the "all enhancements" version of the solver appears to be the fastest and the most robust version.

It is well known that minisat is simple and fast and that for more complex problems cadical usually performs much better (Dutertre 2020). This pattern is also observed with mlex. As shown in the cactus diagram of Figure 3, when the enhancement features are turned on, then for simpler problems that take a shorter time, minisat usually solves more problems in the same time, but for more complex problems the opposite is true. However, as also shown in the same diagram, the choice of the other input options to mlex has a much more pronounced impact on the speed of mlex than the underlying SAT solver, as the curves corresponding to both SAT solvers are very close for the same set of input options. Surprisingly, cadical performs poorly compared to minisat when all improvements are turned off.

[Figure 3: Comparison of minisat and cadical in mlex. Cactus plot of instances solved (0-175) against CPU time in seconds (0-1750), with curves for cadical and minisat, each with all enhancements and with the basic algorithm.]

Related Work

Finite model finding is ubiquitous in automated reasoning. Sometimes, users are interested in models rather than in proving a theorem (McCune 1994). In theorem proving, models serve as counterexamples to invalid conjectures (Blanchette 2010), which also appear in software verification (Torlak and Jackson 2007). Finite models have also been used as a semantic feature for lemma selection learning (Urban et al. 2008). In certain fragments, finite model finding provides a complete decision procedure, e.g., the Bernays-Schönfinkel fragment (EPR). Throughout the years, CP, SAT, and SMT tools have been used in finite model finders (Audemard, Benhamou, and Henocque 2006; Claessen and Sörensson 2003; Reynolds et al. 2013a,b; Zhang 1996; Zhang and Zhang 1995; Araújo, Chow, and Janota 2023). SAT and CP are routinely used to solve algebraic problems (Heule 2018; Distler et al. 2012; Janota, Morgado, and Vojtěchovský 2023).

It is important to note that finite models are also constructed by dedicated approaches based on deep domain knowledge. Notably, the algebraic system GAP (GAP4) contains a number of packages for specific types of algebraic structures. The Small Groups library (Besche, Eick, and O'Brien 2002) contains all (≈4 × 10⁸) non-isomorphic groups up to order 2000 (except for order 1024). Similarly, Smallsemi (Distler and Mitchell 2022) catalogues semigroups, and the LOOPS package catalogues loops (Nagy and Vojtěchovský 2018). However, these packages do not currently provide the lexicographically smallest representative.
Adding our tool to GAP is a subject of future work.

Normal forms are ubiquitous in computer science and mathematics. Here we highlight the canonical labeling algorithms implemented in the nauty system (McKay and Piperno 2014). The system has been developed since the 1980s and is considered state-of-the-art for graph isomorphism (and more). It is possible to construct a canonical form of a magma by using nauty: for a magma A, construct a special graph G′_A and find its canonical graph G_A, cf. (Khan 2020). This form is canonical in the sense that two isomorphic magmas will give the same canonical graph, but the resulting graph is opaque to the user. Hence, it cannot be used for solving the problem tackled in this paper.

A large body of research exists on symmetry breaking in SAT and CP (Peter et al. 2014; Sakallah 2021). In general, however, the objective of symmetry breaking is different from ours: it is a means of speeding up search by avoiding symmetric parts of the search space. In contrast, in our case, the normal form is the objective. Typically, symmetry breaking is meant to be fast, when used dynamically, or should add a small number of constraints, when used statically (Codish et al. 2018; Itzhakov and Codish 2020). Therefore, symmetry breaking is often incomplete; nevertheless, Heule investigates optimal complete symmetry breaking for small graphs (≈5 vertices) (Heule 2019). Kirchweger and Szeider (2021) develop a specific form of symmetry breaking, called SAT Modulo Symmetries, where a SAT solver is enhanced to look for the lexicographically smallest graph (similarly to lazy SMT); there, the objective is to enumerate non-isomorphic graphs with certain properties. More broadly, this paper fits into the SAT+CAS paradigm, where SAT is combined with computer algebra systems, cf. Bright, Kotsireas, and Ganesh (2022).

Conclusions and Future Work

This paper tackles the problem of calculating the lexicographically smallest representative of a given algebraic structure. This is a fundamental problem in computational algebra, where the user, a mathematician, requires a specific canonical form. A prominent feature of this canonical form is that it enables a "common language" between different mathematical libraries, and it enables mathematicians to identify familiar patterns and structures. Our prototype of the proposed algorithms shows that SAT technology is up to the task. The proposed encoding enables tackling large problem instances by avoiding an explicit representation of the target structure. The SAT solver is used in a black-box fashion with repeated SAT calls, which gradually construct the targeted structure (the lexicographically minimal representative). We further design a number of dedicated techniques that enable simplifying, or completely avoiding, certain SAT calls. The experimental evaluation shows that the approach decidedly benefits from this additional propagation (done outside of the SAT solver).

This work opens a number of avenues for further research. More powerful propagation techniques could still be considered, such as different invariants and more aggressive, fine-grained propagation. A tighter integration with the SAT solver and application to structures with several multiplication tables is more of an engineering effort but would further increase the practicality of the implemented tool. Rather than invoking the approach on a given structure, it would also be interesting to integrate it into the calculation of non-isomorphic structures under constraints.
Acknowledgements

We thank Brendan D. McKay for helpful comments. The results were supported by the Ministry of Education, Youth and Sports within the dedicated program ERC CZ under the project POSTMAN no. LL1902, and by national funds through the FCT, Fundação para a Ciência e a Tecnologia, I.P., under the scope of the project UIDB/00297/2020 (doi.org/10.54499/UIDB/00297/2020) and the project UIDP/00297/2020 (doi.org/10.54499/UIDP/00297/2020) (Center for Mathematics and Applications), and co-funded by the European Union under the project ROBOPROX (reg. no. CZ.02.01.01/00/22 008/0004590). This article is part of the RICAIP project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 857306. P. Vojtěchovský was supported by the Simons Foundation Mathematics and Physical Sciences Collaboration Grant for Mathematicians no. 855097.

References
Araújo, J.; Chow, C.; and Janota, M. 2023. Symmetries for Cube-And-Conquer in Finite Model Finding. In Yap, R. H. C., ed., 29th International Conference on Principles and Practice of Constraint Programming, CP 2023, August 27-31, 2023, Toronto, Canada, volume 280 of LIPIcs, 8:1-8:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Araújo, J.; Chow, C.; and Janota, M. 2021. Filtering Isomorphic Models by Invariants. In Michel, L. D., ed., 27th International Conference on Principles and Practice of Constraint Programming (CP), volume 210 of Leibniz International Proceedings in Informatics (LIPIcs), 4:1-4:9. Dagstuhl, Germany: Schloss Dagstuhl - Leibniz-Zentrum für Informatik. ISBN 978-3-95977-211-2.
Araújo, J.; Chow, C.; and Janota, M. 2022. Boosting isomorphic model filtering with invariants. Constraints, 27: 1-20.
Audemard, G.; Benhamou, B.; and Henocque, L. 2006. Predicting and Detecting Symmetries in FOL Finite Model Search. J. Autom. Reason., 36(3): 177-212.
Babai, L.; and Luks, E. M. 1983. Canonical Labeling of Graphs. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, STOC '83, 171-183. New York, NY, USA: Association for Computing Machinery. ISBN 0897910990.
Balyo, T.; Biere, A.; Iser, M.; and Sinz, C. 2016. SAT Race 2015. Artificial Intelligence, 241: 45-65.
Besche, H. U.; Eick, B.; and O'Brien, E. A. 2002. A Millennium Project: Constructing Small Groups. Int. J. Algebra Comput., 12(5): 623-644.
Biere, A. 2017. CaDiCaL, Lingeling, PLingeling, Treengeling and YalSAT Entering the SAT Competition 2017.
Blanchette, J. C. 2010. Nitpick: A Counterexample Generator for Isabelle/HOL Based on the Relational Model Finder Kodkod. In Voronkov, A.; Sutcliffe, G.; Baaz, M.; and Fermüller, C. G., eds., Short papers for the 17th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning, LPAR-17-short, volume 13 of EPiC Series in Computing, 20-25. EasyChair.
Bright, C.; Kotsireas, I. S.; and Ganesh, V. 2022. When satisfiability solving meets symbolic computation. Commun. ACM, 65(7): 64-72.
Claessen, K.; and Sörensson, N. 2003. New Techniques that Improve MACE-style Finite Model Finding. In Proceedings of the CADE-19 Workshop: Model Computation - Principles, Algorithms, Applications.
Codish, M.; Miller, A.; Prosser, P.; and Stuckey, P. J. 2018. Constraints for symmetry breaking in graph representation. Constraints, 24(1): 1-24.
Crawford, J. M.; Ginsberg, M. L.; Luks, E. M.; and Roy, A. 1996. Symmetry-Breaking Predicates for Search Problems. In Aiello, L. C.; Doyle, J.; and Shapiro, S. C., eds., Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning, 148-159.
de Rezende, S. F.; Nordström, J.; Risse, K.; and Sokolov, D. 2020. Exponential Resolution Lower Bounds for Weak Pigeonhole Principle and Perfect Matching Formulas over Sparse Graphs. In Saraf, S., ed., 35th Computational Complexity Conference, CCC, volume 169 of LIPIcs, 28:1-28:24. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Distler, A.; Jefferson, C.; Kelsey, T.; and Kotthoff, L. 2012. The Semigroups of Order 10. In Milano, M., ed., Principles and Practice of Constraint Programming - 18th International Conference, CP 2012, Québec City, QC, Canada, October 8-12, 2012. Proceedings, volume 7514 of Lecture Notes in Computer Science, 883-899. Springer.
Distler, A.; and Mitchell, J. 2022. Smallsemi - A library of small semigroups, Version 0.6.13. https://www.gap-system.org/Packages/smallsemi.html. GAP package.
Dutertre, B. 2020. An Empirical Evaluation of SAT Solvers on Bit-vector Problems. In Bobot, F.; and Weber, T., eds., Proceedings of the 18th International Workshop on Satisfiability Modulo Theories co-located with the 10th International Joint Conference on Automated Reasoning, volume 2854 of CEUR Workshop Proceedings, 15-25. CEUR-WS.org.
Eén, N.; and Sörensson, N. 2003. An Extensible SAT-solver. In Giunchiglia, E.; and Tacchella, A., eds., Theory and Applications of Satisfiability Testing, 6th International Conference, SAT, volume 2919 of Lecture Notes in Computer Science, 502-518. Springer.
GAP4. 2021. GAP – Groups, Algorithms, and Programming, Version 4.11.1. The GAP Group.
Haken, A. 1985. The intractability of resolution. Theoretical Computer Science, 39: 297-308.
Heule, M. J. H. 2018. Schur Number Five. In McIlraith, S. A.; and Weinberger, K. Q., eds., Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), 6598-6606. AAAI Press.
Heule, M. J. H. 2019. Optimal Symmetry Breaking for Graph Problems. Math. Comput. Sci., 13(4): 533-548.
Itzhakov, A.; and Codish, M. 2020. Incremental Symmetry Breaking Constraints for Graph Search Problems. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI, 1536-1543. AAAI Press.
Janota, M.; Morgado, A.; and Vojtěchovský, P. 2023. Computing generating sets of minimal size in finite algebras. J. Symb. Comput., 119: 50-63.
Jipsen, P. 2016. Mathematical Structures. https://math.chapman.edu/~jipsen/uajs/. Accessed: 2024-02-13.
Khan, M. A. 2020. Efficient Enumeration of Higher Order Algebraic Structures. IEEE Access, 8: 41309-41324.
Kirchweger, M.; and Szeider, S. 2021. SAT Modulo Symmetries for Graph Generation. In Michel, L. D., ed., 27th International Conference on Principles and Practice of Constraint Programming, CP, volume 210 of LIPIcs, 34:1-34:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Knuth, D. E. 2015. The Art of Computer Programming: Satisfiability, Volume 4, Fascicle 6. Addison-Wesley Professional. ISBN 9780134394572.
Lutz, F. 2008. Enumeration and Random Realization of Triangulated Surfaces. In Bobenko, A.; Sullivan, J.; Schröder, P.; and Ziegler, G., eds., Discrete Differential Geometry, volume 38 of Oberwolfach Seminars. Birkhäuser Basel.
Lutz, F. H. 2009. Isomorphism-free lexicographic enumeration of triangulated surfaces and 3-manifolds. European Journal of Combinatorics, 30(8): 1965-1979.
Marker, D. 2002. Model Theory: An Introduction. New York, NY: Springer.
Marques-Silva, J.; Argelich, J.; Graça, A.; and Lynce, I. 2011. Boolean lexicographic optimization: algorithms & applications. Ann. Math. Artif. Intell., 62(3-4): 317-343.
McCune, W. 1994. A Davis-Putnam program and its application to finite first-order model search: Quasigroup existence problems. Technical report, Argonne National Laboratory.
McKay, B. D.; and Piperno, A. 2014. Practical graph isomorphism, II. J. Symb. Comput., 60: 94-112.
Nadel, A.; and Ryvchin, V. 2016. Bit-Vector Optimization. In Chechik, M.; and Raskin, J., eds., Tools and Algorithms for the Construction and Analysis of Systems - 22nd International Conference, TACAS 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS, volume 9636 of Lecture Notes in Computer Science, 851-867. Springer.
Nagy, G.; and Vojtěchovský, P. 2018. LOOPS, Computing with quasigroups and loops in GAP, Version 3.4.1. https://gap-packages.github.io/loops/. Refereed GAP package.
Peter, V. B.; Rossi, F.; Van Beek, P.; and Walsh, T., eds. 2014. Handbook of constraint programming. Foundations of Artificial Intelligence. Elsevier Science & Technology.
Petkovska, A.; Mishchenko, A.; Soeken, M.; Micheli, G. D.; Brayton, R. K.; and Ienne, P. 2016. Fast generation of lexicographic satisfiable assignments: enabling canonicity in SAT-based applications. In Liu, F., ed., Proceedings of the 35th International Conference on Computer-Aided Design, ICCAD 2016, Austin, TX, USA, November 7-10, 2016, 4. ACM.
Reynolds, A.; Tinelli, C.; Goel, A.; and Krstić, S. 2013a. Finite Model Finding in SMT. In Computer Aided Verification - 25th International Conference, CAV, 640-655.
Reynolds, A.; Tinelli, C.; Goel, A.; Krstić, S.; Deters, M.; and Barrett, C. 2013b. Quantifier Instantiation Techniques for Finite Model Finding in SMT. In Automated Deduction - CADE-24 - 24th International Conference on Automated Deduction, Lake Placid, NY, USA, June 9-14, 2013. Proceedings, 377-391.
Roussel, O.; and Manquinho, V. M. 2021. Pseudo-Boolean and Cardinality Constraints. In Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds., Handbook of Satisfiability, Second Edition, volume 336 of Frontiers in Artificial Intelligence and Applications, 1087-1129. IOS Press.
Sakallah, K. A. 2021. Symmetry and Satisfiability. In Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds., Handbook of Satisfiability, Second Edition, volume 336 of Frontiers in Artificial Intelligence and Applications, 509-570. IOS Press.
Schneider, N.; Sayle, R. A.; and Landrum, G. A. 2015. Get Your Atoms in Order – An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm. Journal of Chemical Information and Modeling, 55(10): 2111-2120.
Torlak, E.; and Jackson, D. 2007. Kodkod: A Relational Model Finder. In Grumberg, O.; and Huth, M., eds., Tools and Algorithms for the Construction and Analysis of Systems, 13th International Conference, TACAS 2007, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS, volume 4424 of Lecture Notes in Computer Science, 632-647. Springer.
Trentin, P. 2019. Optimization Modulo Theories with OptiMathSAT. Ph.D. thesis, University of Trento, Italy.
Ph.D. thesis, University of Trento, Italy. Urban, J.; Sutcliffe, G.; Pudl´ak, P.; and Vyskoˇcil, J. 2008. MaLARea SG1 – Machine Learner for Automated Reasoning with Semantic Guidance. In Armando, A.; Baumgartner, P.; and Dowek, G., eds., International Joint Conference on Automated Reasoning (IJCAR), volume 5195 of Lecture Notes in Computer Science, 441–456. Springer. ISBN 9783-540-71069-1. Walsh, T. 2012. Symmetry Breaking Constraints: Recent Results. In Hoffmann, J.; and Selman, B., eds., Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence. AAAI Press. Weininger, D.; Weininger, A.; and Weininger, J. L. 1989. SMILES. 2. Algorithm for generation of unique SMILES notation. Journal of Chemical Information and Modeling, 29(2): 97–101. Zemljachenko, V. N.; Korneenko, N. M.; and Tyshkevich, R. I. 1982. Problema Izomorfizma Grafov. Zapiski nauchnyh seminarov POMI, 118(0): 83–158. Zhang, J. 1996. Constructing finite algebras with FALCON. Journal of Automated Reasoning, 17: 1–22. Zhang, J.; and Zhang, H. 1995. SEM: a System for Enumerating Models. In IJCAI, 298–303. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8056
Theoretical and Empirical Analysis of Cost-Function Merging for Implicit Hitting Set WCSP Solving

Javier Larrosa, Conrado Martínez, Emma Rollon
Computer Science Department, Universitat Politècnica de Catalunya
{larrosa, conrado, erollon}@cs.upc.edu

Abstract

The Implicit Hitting Set (HS) approach has proven very effective for MaxSAT solving. However, only preliminary promising results have been obtained for the very similar Weighted CSP framework. In this paper we contribute towards both a better theoretical understanding of the HS approach and more effective HS-based solvers for WCSP. First, we bound the minimum number of iterations of HS thanks to what we call distinguished cores. Then, we expose a source of inefficiency by introducing two simple problems where HS is infeasible. Next, we propose two reformulation methods that merge cost functions to overcome the problem. We provide a theoretical analysis that quantifies the magnitude of the improvement of each method with respect to the number of iterations of the algorithm. In particular, we show that the reformulations can bring an exponential number of iterations down to a constant number in our working examples. Finally, we complement our theoretical analysis with two sets of experiments. First, we show that our results are aligned with real executions. Second, and most importantly, we conduct experiments on typical benchmark problems and show that cost-function merging can be applied heuristically and may accelerate HS algorithms by several orders of magnitude. In some cases, it even outperforms state-of-the-art solvers.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

In the Weighted Constraint Satisfaction Problem (WCSP) framework the goal is to optimize the combined cost of local cost functions. It includes important problems such as Most Probable Explanation in probabilistic networks (Dechter 2003; Koller and Friedman 2009) and it has many applications in resource allocation (Cabon et al. 1999), bioinformatics (Viricel et al. 2018), scheduling (Bensana, Lemaître, and Verfaillie 1999), etc. State-of-the-art WCSP solvers in the last two decades follow a branch-and-bound strategy, enforcing at each expanded node local consistency properties that prune infeasible nodes and provide effective bounds (Cooper et al. 2010; Larrosa and Schiex 2004; Allouche et al. 2015).

The MaxSAT problem has much in common with the WCSP framework. A surprisingly effective technique for MaxSAT solving is the so-called Implicit Hitting Set (HS) approach (Davies and Bacchus 2013; Berg, Bacchus, and Poole 2020; Bacchus et al. 2018). The HS approach was adapted from MaxSAT to WCSP in (Delisle and Bacchus 2013), and a simple algorithm, HS-WCSP, showed some promising preliminary results. Given the similarity between MaxSAT and WCSP, we believe that it is worth advancing further towards efficient HS-based WCSP solvers. In particular, we believe that efficient HS algorithms for WCSP must take advantage of the more structured language of WCSP (using cost functions) in contrast to the flatter language of MaxSAT (using clauses).

What makes WCSPs difficult in practice is that different cost functions often disagree on which assignments are likely to be good. For example, there are many optimization problems where the objective function is made of two conflicting components (e.g., risk vs. benefit in finance, performance vs. robustness in design, ...),
so optimal solutions correspond to the best trade-off. Thus, one can think of WCSP solutions in general as those assignments that make complex trade-offs among many conflicting cost functions. In HS algorithms, this phenomenon causes the discovery of many cores that are identical except for a portion that is different but represents (roughly) a different version of the same trade-off. When this happens, it translates into a large number of iterations.

In this paper, and for the sake of a theoretical analysis, we consider the extreme case of this phenomenon, which we call core interchangeability. We define core interchangeability at two levels, symbolic and numeric, and show that both types can cause HS-WCSP to iterate an exponential number of times. Motivated by that observation, we propose a method to overcome the problem. Our approach is to reformulate the problem by conveniently merging cost functions, aiming at making the space of cores more compact. We propose two different types of merging: i) symbolic merging, which treats weights as symbols without being aware of their numerical semantics, and ii) numeric merging, which treats weights as numbers. We show that symbolic merging is sufficient for dealing with symbolic interchangeability, but insufficient for dealing with numeric interchangeability, where only numeric merging renders HS-WCSP feasible. These results are summarized in Table 1.

           GreaterThan problem    AllDiff problem
Orig.      Ω(4^n / √n)            Ω(2^n)
Symb.      Ω(c^√n / n)            Ω(n)
Num.       Ω(1)                   Ω(1)

where c = 13.0019...

Table 1: Number of iterations required by HS-WCSP for the GreaterThan and AllDiff problems with three different encodings (original (Orig.), symbolic (Symb.) and numeric (Num.)). n denotes the number of variables. The bound is tight (there are actual implementations of HS-WCSP for which the Ω(·) can be replaced by Θ(·)).

We complement our theoretical work empirically. First, we confirm the asymptotic bounds obtained for our two case study problems and evaluate how reasonable it was to restrict our attention to the number of iterations as the main performance measure. Second, we explore how useful our theoretical analysis on our two artificial case study problems may be in practice. For that purpose, we propose a heuristic method to automatically identify pieces of real WCSPs which are likely to have core interchangeability (up to a certain level) and show that both symbolic and numeric merging are useful techniques in practice.

Preliminaries

CSP. A Constraint Satisfaction Problem (CSP) is a tuple (X, D, C) where X is a set of variables, D is a set of finite domains (dp ∈ D is the domain of variable xp ∈ X) and C is a set of constraints. Each constraint ci ∈ C depends on a subset of variables Yi ⊆ X, called the scope. The set DYi denotes the Cartesian product of the domains of the variables in Yi. Thus, t ∈ DYi is a tuple over Yi, and t[S] with S ⊆ Yi denotes the projection of t over the variables of S. Constraints are Boolean functions ci : DYi → {true, false}. A solution to the CSP is a tuple t ∈ DX that satisfies all the constraints (ci(t[Yi]) = true for all ci ∈ C). A CSP is satisfiable if it has at least one solution.

WCSP. A Weighted CSP (WCSP) is a CSP augmented with a set of cost functions F (i.e., (X, D, C, F)). A cost function fi ∈ F is a mapping fi : DYi → ℕ. A solution is a tuple t ∈ DX that satisfies all the constraints in C, and we will assume that there is at least one such tuple.
The cost of a solution t is Σ_i fi(t[Yi]). The WCSP problem consists of computing a minimum-cost solution opt(P). The following two WCSPs will be used in our analysis.

GreaterThan Problem. The GreaterThan problem (noted PGT(n)) is a WCSP with n variables X = {x1, ..., xn} with di = {0, 1, ..., n−1}. It has a cost function fi(xi) = xi for each 1 ≤ i ≤ n and a single constraint Σ_{i=1}^n xi ≥ n. Clearly, an optimal solution is any assignment such that Σ_{i=1}^n xi = n, and the optimal cost is opt(PGT) = n.

AllDiff Problem. The AllDiff problem (noted PAD(n)) is a WCSP with n variables X = {x1, ..., xn} with di = {0, 1, ..., n−1}. It has a cost function fi(xi) = xi for each 1 ≤ i ≤ n and one constraint xi ≠ xj for every pair of variables. Clearly, all solutions are optimal and correspond to assignments that assign a different value to each variable (e.g., (x1 ← 0, x2 ← 1, ..., xn ← n−1)), and the optimal cost is opt(PAD) = n(n−1)/2. (Both optimal costs can be confirmed by brute force; see the sketch below.)
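The following toy Python check (ours, not part of the paper's implementation, and only feasible for very small n) confirms the two optimal costs just stated by brute-force enumeration:

from itertools import product

def opt_greater_than(n):
    # minimise sum(x) subject to sum(x) >= n, with each x_i in {0, ..., n-1}
    return min(sum(t) for t in product(range(n), repeat=n) if sum(t) >= n)

def opt_all_diff(n):
    # every solution assigns pairwise-distinct values, so all are optimal
    return min(sum(t) for t in product(range(n), repeat=n) if len(set(t)) == n)

print(opt_greater_than(4))   # 4  (= n)
print(opt_all_diff(4))       # 6  (= n(n-1)/2)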
HS for Weighted CSPs

Next we review (and largely rephrase in an arguably simplified way) the HS approach for WCSP introduced in (Delisle and Bacchus 2013). The following definitions assume an arbitrary WCSP P = (X, D, C, F) with m cost functions F = {f1, f2, ..., fm}.

Weight Vector. A weight vector (or, simply, a vector) is v = (v1, v2, ..., vm), with each component vi being associated to cost function fi. The value of vi must be a weight occurring in fi (i.e., vi ∈ {fi(t) | t ∈ DYi}). The cost of a vector v is cost(v) = Σ_{i=1}^m vi.

Partial Order. Recall that the usual (partial) order among vectors, v ≤ u, holds if for each component i we have that vi ≤ ui. When v ≤ u we say that u dominates v. When v ≰ u we say that v hits u.

Induced CSP. A vector v induces a CSP P(v) where every cost function fi is replaced by the constraint (fi ≤ vi).

Core. A vector k is a core if its induced CSP P(k) is unsatisfiable. C will denote the set of cores. A core is maximal if it is not dominated by any other core or, in other words, if increasing any of its non-maximal components produces a satisfiable induced CSP. MC will denote the set of maximal cores. Note that this definition of core is similar but not equivalent to the usual concept of core in constraint programming and SAT (Gupta, Genc, and O'Sullivan 2021), since in our definition a core requires an underlying WCSP.

Minimum Cost Hitting Vector (MHV). A vector h hits a set of cores K if h hits each vector k ∈ K (or, in other words, h is not dominated by any k ∈ K). The minimum cost hitting vector of K, noted MHV(K), is a hitting vector h with minimum cost. The MHV problem is a generalization of the Hitting Set Problem, so it is easy to see that it is NP-hard.

Core Extracting Solver (CES). A core-extracting solver, noted CES, is a function that receives as input a WCSP P and a vector h. If h is a core, it returns a core k such that h ≤ k. Otherwise (i.e., if h is not a core), it returns NULL. Since a CES needs to solve a CSP and there is no special requirement about the core (so h itself could do the job), it is easy to see that core extraction is no easier (and can be made no harder) than solving an NP-complete problem.

Algorithm HS-WCSP. The HS approach for WCSPs is based on the following. Let K be a set of cores of a WCSP P. Let vector h be an MHV of K. Then,
• cost(h) is a lower bound of opt(P).
• If the induced CSP P(h) is satisfiable, then any solution of P(h) is an optimal solution of P with optimal cost cost(h).
• If the induced CSP P(h) is unsatisfiable, then there is at least one core k ∉ K such that h ≤ k.

Algorithm 1: HS-WCSP receives as input a WCSP P and returns its optimal cost.
Input: WCSP P
Output: opt(P)
 1: K := ∅; h := (0, ..., 0);
 2: while true do
 3:   k := CES(P, h)
 4:   if k = NULL then
 5:     return cost(h)
 6:   else
 7:     K := K ∪ {k}
 8:     h := MHV(K)
 9:   end if
10: end while

Algorithm 1 shows HS-WCSP, a simple HS algorithm for WCSPs. It maintains a growing set of cores K and a minimum hitting vector h of K. At each iteration, a core-extraction solver is invoked with the induced CSP P(h) (line 3). If it is satisfiable, the cost of h is the optimal solution of the WCSP (line 5). Otherwise, core k is added to K (line 7), the MHV vector h is updated (line 8) and the algorithm continues. Note that the cost of h is a non-decreasing lower bound of the optimum solution.

In (Delisle and Bacchus 2013) the MHV problem is modeled as an Integer Program and CES is modeled as a SAT formula with assumptions, so both problems are solved with off-the-shelf solvers. They also propose some practical improvements of HS-WCSP. For the sake of simplicity in our analysis, we will not consider them until the empirical evaluation section.

Bounding Below the Number of Iterations

In this section we study the performance of HS-WCSP in terms of the number of iterations. Note that iterations correspond to the number of cores that have to be extracted before obtaining the optimal value. The number of iterations of an arbitrary implementation of HS-WCSP is non-deterministic, since it depends on the cores computed by the CES function and the minimum hitting vectors computed by MHV, and these may vary from implementation to implementation. Thus, we give a lower bound on that number which is independent of the particular implementation. For that purpose, we first identify a subset of maximal cores that are associated to cores which are necessary for obtaining the optimum cost. Consider a maximal core k ∈ MC and let U(k) be the set containing those cores dominated by k and not dominated by any other maximal core. That is,

U(k) = {k′ ∈ C | k′ ≤ k, and k′ ≰ k″ for all k″ ∈ MC s.t. k′ ≠ k″}

Definition 1 (distinguished core). A maximal core k ∈ MC is distinguished if there is some k′ ∈ U(k) such that cost(k′) < opt(P).

Lemma 1. Let k ∈ MC be a distinguished maximal core, and let h be MHV(C − U(k)). Then, cost(h) < opt(P).

The following theorem follows directly from the lemma.

Theorem 1 (lower bound). Let DC be the set of distinguished maximal cores. Then, HS-WCSP(P) iterates at least |DC| times.

Definition 2 (HS-WCSP-max). HS-WCSP-max is the specific version of HS-WCSP such that the CES function always returns a maximal core when the induced CSP is unsatisfiable (i.e., k ∈ MC in line 3 of Algorithm 1).

The following theorem gives an upper bound on the number of iterations of HS-WCSP-max. This bound is useful because it shows that the previous lower bound is sometimes tight; that is, it is not an underestimation, since there are actual implementations of HS-WCSP for which it is attained.

Theorem 2 (upper bound). Let MC be the set of maximal cores. Then, HS-WCSP-max(P) iterates at most |MC| times.
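To make Algorithm 1 and the preceding definitions concrete, the sketch below runs the HS loop on PGT(n) by brute force (a toy illustration of ours, not the paper's C++ implementation; for PGT(n), a vector is a core exactly when its cost is below n, which lets us inline CES):

from itertools import product

def hs_wcsp_pgt(n):
    # HS-WCSP (Algorithm 1) on the GreaterThan problem PGT(n), brute force
    vectors = list(product(range(n), repeat=n))
    cores, iters, h = [], 0, (0,) * n
    while True:
        iters += 1
        if sum(h) >= n:                # CES returns NULL: P(h) is satisfiable
            return sum(h), iters       # cost(h) = opt(P)
        cores.append(h)                # CES returns a core (h itself suffices)
        # MHV(K): a minimum-cost vector not dominated by any core in K
        h = min((v for v in vectors
                 if all(any(vi > ki for vi, ki in zip(v, k)) for k in cores)),
                key=sum)

print(hs_wcsp_pgt(3))   # (3, #iterations); the iteration count grows quickly with n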
Now, we use the previous two results to show that HS-WCSP is infeasible for the GreaterThan (PGT(n)) and AllDiff (PAD(n)) problems. Note that both of them have the same set of cost functions. Therefore, their set of vectors is {(v1, v2, ..., vn) | 0 ≤ vi < n}.

We start by showing that GreaterThan requires at least an exponential number of iterations. Moreover, this best-case bound is tight in the sense that HS-WCSP-max requires exactly that number of iterations.

Lemma 2. Consider PGT(n):
• MC is the set of vectors with cost n−1.
• |MC| = C(2n−2, n−1) ∼ 4^(n−1)/√(πn).
• All maximal cores are distinguished.

Theorem 3.
• HS-WCSP(PGT(n)) iterates Ω(4^(n−1)/√(πn)) times.
• HS-WCSP-max(PGT(n)) iterates Θ(4^(n−1)/√(πn)) times.

Proof. Follows directly from Theorem 1, Theorem 2 and Lemma 2.

Next we do a similar analysis for the AllDiff problem.

Lemma 3. Consider PAD(n):
• MC is the set of permutations of
  {(0, 0, n−1, n−1, ..., n−1),
   (1, 1, 1, n−1, n−1, ..., n−1),
   (2, 2, 2, 2, n−1, n−1, ..., n−1),
   ...,
   (n−2, ..., n−2)}
• |MC| = 2^n − n.
• All maximal cores are distinguished.

Theorem 4.
• HS-WCSP(PAD(n)) iterates Ω(2^n) times.
• HS-WCSP-max(PAD(n)) iterates exactly 2^n − n times.

Proof. Follows directly from Theorem 1, Theorem 2 and Lemma 3.

Core Interchangeability

The PGT(n) and PAD(n) problems are infeasible for HS-WCSP because both of them have a symmetrical structure that produces an exponentially large number of necessary cores. In the following, we characterize the two forms of symmetry exhibited by these two problems. Consider two m-vectors v and u and a subset of their components I ⊆ {1, 2, ..., m}. Then,
• v and u are symbolic-equivalent in I iff they are a permutation of each other in the components in I and identical in the components not in I.
• v and u are numeric-equivalent in I iff cost(u) = cost(v) and they are identical in the components not in I.

For example, consider vectors v = (3, 7, 4), u = (3, 4, 7) and w = (3, 8, 3), and I = {2, 3}. They are all numeric-equivalent in I, because they are identical outside of I and their cost in I is 11. Further, only v and u are symbolic-equivalent, because (7, 4) is a permutation of (4, 7).

Definition 3 (symbolic core interchangeability). A WCSP P is s-interchangeable in I if for every pair of symbolic-equivalent vectors {v, u} in I, v is a core iff u is a core.

Definition 4 (numeric core interchangeability). A WCSP P is n-interchangeable in I if for every pair of numeric-equivalent vectors {v, u} in I, v is a core iff u is a core.

For example, the set of cores of PGT(n) is the set of vectors with cost less than n. Since this condition only depends on the cost, PGT(n) is both s-interchangeable and n-interchangeable in its full set of cost functions I = {1, ..., n}. On the other hand, the set of cores of PAD(n) is a set of permutations of n−2 canonical vectors. Since this condition is preserved under any permutation, PAD(n) is s-interchangeable but it is not n-interchangeable in I = {1, ..., n} (for instance, v = (0, 1, 2, 3, 4, ..., n−1) is not a core but u = (0, 0, 3, 3, 4, ..., n−1) has the same cost and is a core).

Note that n-interchangeability is stronger than s-interchangeability. Consequently, it causes a larger amount of core redundancy and therefore requires a more complex solving approach.
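These two equivalence notions are straightforward to state in code. The following sketch (with our own helper names and zero-based indices) checks the example vectors above:

def symbolic_equivalent(v, u, I):
    out = [i for i in range(len(v)) if i not in I]
    return (sorted(v[i] for i in I) == sorted(u[i] for i in I)
            and all(v[i] == u[i] for i in out))

def numeric_equivalent(v, u, I):
    out = [i for i in range(len(v)) if i not in I]
    return (sum(v[i] for i in I) == sum(u[i] for i in I)
            and all(v[i] == u[i] for i in out))

v, u, w = (3, 7, 4), (3, 4, 7), (3, 8, 3)
I = [1, 2]   # components 2 and 3 of the example, zero-based
print(symbolic_equivalent(v, u, I), symbolic_equivalent(v, w, I))  # True False
print(numeric_equivalent(v, u, I), numeric_equivalent(v, w, I))    # True True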
Cost Function Merging

s-interchangeability and n-interchangeability are well-defined examples of cost functions disagreeing with each other, as we discussed in the Introduction. They may be too strong to happen in practice, but we will use them to motivate and quantify the theoretical advantage of our approaches.

In the following, we present two methods aiming at reducing interchangeable cores. The first method, called symbolic merging, may be enough for problems with symbolic interchangeability. However, problems with numeric interchangeability need a stronger form of merging, which we call numeric merging.

Symbolic Merging

The idea behind symbolic merging is the following. Let k = (3, 2, 7, 5) be a core of some problem P. If the set of cores of P is symbolic-equivalent in I = {1, 2, 3}, it means that vectors (2, 3, 7, 5), (2, 7, 3, 5), ... are also cores. These permutations of cores can be represented more compactly by just counting the number of times each value appears in the indexes in I. That is, vectors having a 2, a 3 and a 7 in the components in I are cores.

Formally, let P = (X, D, C, F) be a WCSP and G ⊆ F a subset of its functions. The symbolic merging of G is a new WCSP P^S = (X ∪ Y, D ∪ DY, C ∪ CY, F ∪ HY − G), where Y is a set of auxiliary variables taking values in {0, ..., |G|}. There is a variable yw ∈ Y for every weight w > 0 appearing in any of the functions in G. Variable yw counts the number of cost functions in G assigning weight w, which is enforced by the constraint yw = Σ_{fi∈G} [fi(Yi) = w] in CY. Finally, the set of functions G is replaced by the set HY, which contains a cost function fw(yw) = w · yw for every auxiliary variable yw. It is easy to see that the reformulation preserves the optimal cost.

Let us now analyse the effect of symbolic merging on our two case studies. Let G = F. Since PGT(n) and PAD(n) have the same set of cost functions F, their reformulations will be identical (except for their original sets of constraints C). For every weight 0 < w < n there is one auxiliary variable yw, one constraint yw = Σ_{i=1}^n [xi = w], and one cost function fw(yw) = w · yw. The original cost functions are removed. With this new encoding, the set of vectors in both problems is {(v1, v2, ..., vn−1) | vi = i · αi, 0 ≤ αi ≤ n}.

Let P^S_GT(n) denote the symbolic merging reformulation of PGT(n). We show next that symbolic merging reduces the number of iterations of HS-WCSP from exponential to sub-exponential (albeit super-polynomial). Although the reduction is significant, the algorithm remains infeasible in practice for this problem, because the minimum number of iterations cannot be bounded by any polynomial.

Lemma 4. Consider P^S_GT(n):
• MC is the set of vectors with cost n−1.
• |MC| ∼ (1/(4n√3)) exp(π √(2n/3)) = Θ((13.0019...)^√n / n)
• All maximal cores are distinguished.

Theorem 5.
• HS-WCSP(P^S_GT(n)) iterates Ω((13.0019...)^√n / n) times.
• HS-WCSP-max(P^S_GT(n)) iterates Θ((13.0019...)^√n / n) times.

Proof. Follows directly from Theorem 1, Theorem 2 and Lemma 4.

Let P^S_AD(n) denote the symbolic merging reformulation of PAD(n). We show next that symbolic merging reduces the number of iterations of HS-WCSP from exponential to linear. Consequently, symbolic merging is enough for this problem.

Lemma 5. Consider P^S_AD(n):
• MC is {(0, 2n, 3n, ..., (n−2)n, (n−1)n), (n, 0, 3n, ..., (n−2)n, (n−1)n), ..., (n, 2n, 3n, ..., 0, (n−1)n), (n, 2n, 3n, ..., (n−2)n, 0)}
• |MC| = n.
• All maximal cores are distinguished.

Theorem 6.
• HS-WCSP(P^S_AD(n)) iterates Ω(n) times.
• HS-WCSP-max(P^S_AD(n)) iterates n times.

Proof. Follows directly from Theorem 1, Theorem 2 and Lemma 5.
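The collapsing effect of symbolic merging can be observed directly on the cores of PGT(n). The brute-force sketch below (illustrative only, using our own representation) maps each core to the multiset of weights it contains, which is exactly the information the counting variables yw retain:

from itertools import product
from collections import Counter

n = 4
cores = [v for v in product(range(n), repeat=n) if sum(v) < n]   # cores of PGT(n)
merged = {frozenset(Counter(v).items()) for v in cores}          # weight multisets
print(len(cores), len(merged))   # 35 7: permutation-equivalent cores collapse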
Numeric Merging

Numeric merging is the strongest form of merging. The idea is as follows. Let k = (3, 2, 7, 5) be a core of some problem P. If the set of cores of P is numeric-equivalent in the indexes I = {1, 2, 3}, it means that vectors (2, 2, 8, 5), (2, 3, 7, 5), (2, 7, 3, 5), ... are also cores. This set of cores can be represented more compactly by just recording the sum of the components in I. That is, vectors having a total cost of 12 in the components in I are cores.

Formally, let P = (X, D, C, F) be a WCSP and G ⊆ F be a subset of its functions. The numeric merging of G is a new WCSP P^N = (X, D, C, F ∪ {g} − G), where G is replaced by the cost function g(X) = Σ_{fi∈G} fi(Yi). The reformulation preserves the optimal cost.

Let us now analyse the effect of numeric merging on our two case studies. Let G = F. Since PGT(n) and PAD(n) have the same set of cost functions F, their reformulations will be identical (except for their original sets of constraints C). All their unary cost functions are replaced by a single n-ary cost function g(X) = Σ_{i=1}^n xi. Consequently, with this new encoding, vectors become one-dimensional and the set of vectors corresponds to v = (v1) with 0 ≤ v1 ≤ (n−1)^2.

Let P^N_GT(n) and P^N_AD(n) denote the numeric merging reformulations of PGT(n) and PAD(n), respectively. Then,

Lemma 6. Consider P^N_GT(n):
• MC is {(n−1)}.
• Vector (n−1) is distinguished.

Theorem 7.
• HS-WCSP(P^N_GT(n)) iterates at least once.
• HS-WCSP-max(P^N_GT(n)) iterates once.

Proof. Follows directly from Theorem 1, Theorem 2 and Lemma 6.

The numeric merging of all functions in the GreaterThan problem reduces the number of iterations from exponential to a constant. Of course, it begs the question of whether the single call to the CES function with the reformulated problem pays off in practice. Although symbolic merging was already a feasible approach for the AllDiff problem, for completeness we also report the effect of numeric merging on this problem.

Lemma 7. Consider P^N_AD(n):
• MC is {(n(n−1)/2 − 1)}.
• Vector (n(n−1)/2 − 1) is distinguished.

Theorem 8. HS-WCSP(P^N_AD(n)) iterates exactly once.

Proof. Follows directly from Theorem 1, Theorem 2 and Lemma 7.
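Analogously, numeric merging collapses every core into its total cost. Rerunning the previous brute-force experiment (again, purely illustrative) with sums instead of multisets leaves a single dimension and a single maximal core:

from itertools import product

n = 4
cores = [v for v in product(range(n), repeat=n) if sum(v) < n]   # cores of PGT(n)
merged = sorted({sum(v) for v in cores})                          # total costs only
print(len(cores), merged)   # 35 [0, 1, 2, 3]: one maximal core, (n-1)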
Experimental Results

We implemented HS-WCSP in C++ as follows (the code is available at https://github.com/erollon/MHS-WCSP). The core-extraction solver (CES, line 3 in Algorithm 1) uses the assumption-based SAT solver CaDiCaL (Biere et al. 2020), and the minimum hitting vector solver (MHV, line 8 in Algorithm 1) is implemented as a 0-1 integer program solved to optimality with CPLEX. Our encodings are similar to (Delisle and Bacchus 2013). The merged cost functions for both symbolic and numeric merging are encoded into SAT using the totalizer encoding (Joshi, Martins, and Manquinho 2015). All experiments were run on a Linux machine with a 2.90 GHz CPU, 128 GB of RAM and 16 cores.

GreaterThan and AllDiff Problems

We have provided tight bounds on the number of iterations for two combinatorial problems with three different encodings when the CES function computes maximal cores (i.e., HS-WCSP-max). However, our theoretical analysis only gives partial information about the cost of executing HS-WCSP-max on them. In particular, some limitations of our analysis are: i) it is asymptotic, so it may not capture what happens with small instances; ii) it focuses on the number of iterations but disregards the cost of each iteration, which may change significantly with the different encodings. To assess the effect of these limitations, we report information from real executions of HS-WCSP-max on the two problems with the three different formulations.

We report the number of iterations, to see the accuracy of our asymptotics; the number of calls to the SAT solver, because the CES implementation obtains maximal cores as a sequence of non-maximal core extractions; and the CPU time, because the encodings are increasingly more sophisticated and it is worth seeing how this affects CPU time.

[Figure 1: Number of iterations (first column), number of SAT calls (second column) and CPU time in seconds (third column) as a function of the number of variables on the GreaterThan problem (top row) and AllDiff problem (bottom row), using the original encoding (orig.), symbolic merging (symb.) and numeric merging (num.). Time limit is 900 seconds.]

Figure 1 reports the results for the GreaterThan (top row) and AllDiff (bottom row) problems. The first column shows the actual number of iterations for the three encodings. As expected, it follows an exponential, sub-exponential (albeit super-polynomial) and constant growing pattern for GreaterThan; and an exponential, linear and constant growing pattern for AllDiff. We also include the asymptotic bounds of GreaterThan (dashed lines), which turn out to be very precise for the original encoding (the two lines overlap) and quite accurate for the symbolic merging. The second column shows the number of calls to the SAT solver. In both problems, the number of calls seems to be just a constant factor away from the number of iterations (around 10 times). Finally, the third column shows the CPU time. It can be seen that merging clearly pays off in both problems and, consistently with the theoretical results, numeric merging is the best option for GreaterThan and symbolic merging is the best option for AllDiff. Moreover, we see that if we compare across encodings, the number of iterations is not always a good measure, at least with the totalizer encoding for summations. The experiments show that the cost of the CES component (which corresponds to the aggregated cost of SAT solving) seems to grow exponentially with the size of the instances. There are two lessons to be taken from this: merging can only be done in a limited way (i.e., with respect to small subsets of cost functions), and the totalizer encoding has to be replaced by a more sophisticated alternative to augment the applicability of our approach.
Benchmark Instances

In the Introduction we argued that difficult real problems have cost functions that disagree with each other, and that this disagreement turns into an approximation of our notions of core interchangeability. To test our hypothesis, and considering what we have learned in the previous experiments, we need a method to identify clusters of cost functions with that pattern. We conjectured that such clusters would correspond to groups of functions that share many variables in their scopes. Accordingly, we implemented a heuristic to partition the set of cost functions of arbitrary WCSPs into clusters. For that, we used the well-known concept of tree-decomposition (Kask et al. 2005).

Tree-decompositions are frequently used in WCSP solvers to identify and exploit structural properties. Intuitively, a tree-decomposition (TD) of a WCSP is an arrangement of its cost functions into clusters such that the cluster structure is acyclic. The width of a TD is the size of the largest combined scope among its clusters. There are many heuristics for obtaining a TD with small width (Bodlaender and Koster 2010). Therefore, we computed a TD and merged all the cost functions that are placed in the same cluster as a pre-process. (A rough sketch of the clustering idea is given after the results below.)

Table 2 reports the results on the Spot5 benchmark, merging functions according to a min-fill TD (Bodlaender and Koster 2010). All instances were made virtual arc consistent (VAC) before the execution. For this experiment, we used a more realistic HS-WCSP implementation that does not guarantee that the CES function returns a maximal core. Instead, the algorithm includes two of the improvements proposed in (Delisle and Bacchus 2013): (i) the CES function computes several disjoint cores at each iteration, and (ii) it improves each of them by greedily increasing the lowest ki value until the induced CSP becomes satisfiable; when that happens, it undoes the last increment and returns the result as a core.

Our version of HS-WCSP produces results similar to what was reported in (Delisle and Bacchus 2013). For reference with respect to state-of-the-art WCSP solvers, we also report results obtained with Toulbar2 v.1.1.1 with its default options (Hurley et al. 2016). As can be seen, both forms of cost-function merging clearly outperform the original formulation, solving more instances and more efficiently. Moreover, in this particular benchmark, both forms of cost-function merging also outperform Toulbar2 on all instances (except for 2 instances solved in under a second by all formulations).

       Original            Symbolic             Numerical            Toulbar2
Inst.  lb      t      #its lb      t       #its lb      t       #its lb      t
54     37      0.43   35   37      0.03    6    37      0.04    9    37      0.03
29     8059    1.23   56   8059    0.31    29   8059    0.14    16   8059    0.03
404    114     2.87   77   114     0.36    28   114     0.25    24   114     30.97
503    11113   1.01   32   11113   0.12    15   11113   0.11    13   11079   -
408    6225    -      131  6228    8.04    83   6228    7.05    96   4165    -
412    29345   -      69   32381   46.15   148  32381   39.18   146  22185   -
414    35979   -      108  38478   170.95  228  38478   114.05  318  27687   -
42     147050  -      207  155050  38.28   590  155050  75.66   985  96049   -
505    21245   -      100  21253   6.19    74   21253   4.80    107  14128   -
507    25870   -      82   27390   72.62   160  27390   127.87  329  17246   -
509    33429   -      78   36446   298.09  265  36446   91.35   274  28184   -
28     206103  -      35   245107  -       197  247104  -       259  150558  -

Table 2: Results on Spot5 instances. All instances are pre-processed to be VAC. CPU time in seconds. Time limit 1800 seconds (indicated as "-"). When the time limit is not reached, lb reports the optimum value; otherwise, lb reports the obtained lower bound. In the original table, the best CPU time (or, when all solvers reach the time limit, the best lower bound) is highlighted.

These results suggest that partitioning the set of functions into clusters guided by tree-decompositions may pay off. Comparing the two merging alternatives, we see that there is no clear winner. Numerical merging outperforms symbolic merging on almost all instances, but the difference is not large. Note that on instance 28, where both approaches timed out, numerical merging also obtains a better lower bound.
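To make the clustering intuition tangible, here is a simple scope-overlap stand-in (our own illustration; the paper actually clusters via a min-fill tree-decomposition, not this heuristic):

def cluster_by_overlap(scopes, threshold=0.5):
    # greedily place each function in the first cluster whose combined scope
    # covers at least `threshold` of the function's own scope
    clusters = []
    for i, S in enumerate(scopes):
        for cl in clusters:
            union = set().union(*(scopes[j] for j in cl))
            if len(S & union) / len(S) >= threshold:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters

print(cluster_by_overlap([{'x', 'y'}, {'y', 'z'}, {'a', 'b'}]))   # [[0, 1], [2]]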
Related Work

It is well known that many combinatorial problems contain different forms of symmetries and that not addressing them may render solvers highly inefficient (Gent, Petrie, and Puget 2006). To overcome this problem, several works propose methods to eliminate them. Clearly, our notion of core interchangeability is just another form of problem symmetry, and our two merging approaches can be seen as methods to eliminate it in the particular case of HS algorithms. However, it is worth emphasizing that our approach does not require interchangeability. As we showed in the experiments, it is likely to be effective even on clusters of conflicting (i.e., disagreeing) cost functions.

Our two merging methods are closely related to the so-called abstract cores framework (Berg, Bacchus, and Poole 2020) proposed in the MaxSAT context. The idea behind abstract cores is to group equally weighted soft clauses (the so-called abstraction sets) and to extract cores over new variables that correspond to sums over the original blocking variables (the so-called abstract cores). Each sum counts how many of the original blocking variables are falsified and, as a consequence, it is modelled as a cardinality constraint. Essentially, this is our symbolic merging. Our numeric merging goes one step further and drops the restriction of only clustering equally weighted clauses. Now each sum constraint counts the total sum of the coefficients (weights) of the original blocking variables and, as a consequence, it is modelled as a pseudo-Boolean constraint. In this sense, our work advances the theoretical understanding of the differences arising from the usage of pseudo-Boolean constraints in contrast to cardinality constraints in the abstract cores framework.

Many heuristics to determine how to cluster clauses/functions may be appropriate. In (Berg, Bacchus, and Poole 2020), the heuristic dynamically finds meaningful clusters based on the core structure (which is not available up front). In contrast, we propose to use the tree-decomposition structure (which is available a priori). Since the tree-decomposition does not change, our clustering is static, and so our symbolic/numeric reformulation is done as a pre-process.

Conclusion and Future Work

Our long-term research aims at making the Hitting Set approach competitive with state-of-the-art alternatives for WCSP solving, as happens in the very similar MaxSAT problem. In this paper we have shown that a naive application may be infeasible even for very simple problems. We have characterized two forms of symmetry where HS-WCSP fails and proposed a method based on cost-function merging to overcome each one of them. Our theoretical analysis allowed us to clearly identify the limitations of HS-WCSP and quantify the potential advantage of our approach. We claim that these forms of symmetry happen, up to a certain level, in real problems. Thus, we have introduced a heuristic method to identify those parts of a problem where cost-function merging is likely to be useful. Our method, which uses the notion of tree-decomposition, clearly improves over the basic hitting set approach and, in some cases, may even be competitive with state-of-the-art solvers.

Our work leaves many open lines of work. Just to name a few, we need to explore better implementations of the CES component, and this can be done in different ways: improving the SAT encodings of the mergings, or moving from SAT to other solving languages.
We also need to explore alternatives to the use of tree-decomposition to identify appropriate clusters of cost functions and, of course, extend our experiments to a wider set of benchmark instances.

Acknowledgements

The work of Javier Larrosa and Emma Rollon was supported by grant PID2021-122830OB-C43, funded by MCIN/AEI/10.13039/501100011033 and by "ERDF: A way of making Europe". The work of Conrado Martínez was supported by grant PID2020-112581GB-C21 (MOTION Project), funded by MCIN/AEI/10.13039/501100011033.

References

Allouche, D.; de Givry, S.; Katsirelos, G.; Schiex, T.; and Zytnicki, M. 2015. Anytime Hybrid Best-First Search with Tree Decomposition for Weighted CSP. In Pesant, G., ed., Principles and Practice of Constraint Programming - 21st International Conference, CP 2015, Cork, Ireland, August 31 - September 4, 2015, Proceedings, volume 9255 of Lecture Notes in Computer Science, 12–29. Springer.

Bacchus, F.; Hyttinen, A.; Järvisalo, M.; and Saikko, P. 2018. Reduced Cost Fixing for Maximum Satisfiability. In Lang, J., ed., Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, 5209–5213. ijcai.org.

Bensana, E.; Lemaître, M.; and Verfaillie, G. 1999. Earth Observation Satellite Management. Constraints An Int. J., 4(3): 293–299.

Berg, J.; Bacchus, F.; and Poole, A. 2020. Abstract Cores in Implicit Hitting Set MaxSat Solving. In Pulina, L.; and Seidl, M., eds., Theory and Applications of Satisfiability Testing - SAT 2020 - 23rd International Conference, Alghero, Italy, July 3-10, 2020, Proceedings, volume 12178 of Lecture Notes in Computer Science, 277–294. Springer.

Biere, A.; Fazekas, K.; Fleury, M.; and Heisinger, M. 2020. CaDiCaL, Kissat, Paracooba, Plingeling and Treengeling Entering the SAT Competition 2020. In Balyo, T.; Froleyks, N.; Heule, M.; Iser, M.; Järvisalo, M.; and Suda, M., eds., Proc. of SAT Competition 2020 – Solver and Benchmark Descriptions, volume B-2020-1 of Department of Computer Science Report Series B, 51–53. University of Helsinki.

Bodlaender, H. L.; and Koster, A. M. C. A. 2010. Treewidth computations I. Upper bounds. Inf. Comput., 208(3): 259–275.

Cabon, B.; de Givry, S.; Lobjois, L.; Schiex, T.; and Warners, J. P. 1999. Radio Link Frequency Assignment. Constraints An Int. J., 4(1): 79–89.

Cooper, M. C.; de Givry, S.; Sánchez-Fibla, M.; Schiex, T.; Zytnicki, M.; and Werner, T. 2010. Soft arc consistency revisited. Artif. Intell., 174(7-8): 449–478.

Davies, J.; and Bacchus, F. 2013. Postponing Optimization to Speed Up MAXSAT Solving. In Schulte, C., ed., Principles and Practice of Constraint Programming - 19th International Conference, CP 2013, Uppsala, Sweden, September 16-20, 2013. Proceedings, volume 8124 of Lecture Notes in Computer Science, 247–262. Springer.

Dechter, R. 2003. Constraint processing. Elsevier Morgan Kaufmann. ISBN 978-1-55860-890-0.

Delisle, E.; and Bacchus, F. 2013. Solving Weighted CSPs by Successive Relaxations. In Schulte, C., ed., Principles and Practice of Constraint Programming - 19th International Conference, CP 2013, Uppsala, Sweden, September 16-20, 2013. Proceedings, volume 8124 of Lecture Notes in Computer Science, 273–281. Springer.

Gent, I. P.; Petrie, K. E.; and Puget, J. 2006. Symmetry in Constraint Programming.
In Rossi, F.; van Beek, P.; and Walsh, T., eds., Handbook of Constraint Programming, volume 2 of Foundations of Artificial Intelligence, 329–376. Elsevier.

Gupta, S. D.; Genc, B.; and O'Sullivan, B. 2021. Explanation in Constraint Satisfaction: A Survey. In Zhou, Z., ed., Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, 4400–4407. ijcai.org.

Hurley, B.; O'Sullivan, B.; Allouche, D.; Katsirelos, G.; Schiex, T.; Zytnicki, M.; and de Givry, S. 2016. Multi-language evaluation of exact solvers in graphical model discrete optimization. Constraints An Int. J., 21(3): 413–434.

Joshi, S.; Martins, R.; and Manquinho, V. M. 2015. Generalized Totalizer Encoding for Pseudo-Boolean Constraints. In Pesant, G., ed., Principles and Practice of Constraint Programming - 21st International Conference, CP 2015, Cork, Ireland, August 31 - September 4, 2015, Proceedings, volume 9255 of Lecture Notes in Computer Science, 200–209. Springer.

Kask, K.; Dechter, R.; Larrosa, J.; and Dechter, A. 2005. Unifying tree decompositions for reasoning in graphical models. Artif. Intell., 166(1-2): 165–193.

Koller, D.; and Friedman, N. 2009. Probabilistic Graphical Models - Principles and Techniques. MIT Press. ISBN 978-0-262-01319-2.

Larrosa, J.; and Schiex, T. 2004. Solving weighted CSP by maintaining arc consistency. Artif. Intell., 159(1-2): 1–26.

Viricel, C.; de Givry, S.; Schiex, T.; and Barbe, S. 2018. Cost function network-based design of protein-protein interactions: predicting changes in binding affinity. Bioinform., 34(15): 2581–2589.
Automatic Core-Guided Reformulation via Constraint Explanation and Condition Learning

Kevin Leo1, Graeme Gange1, Maria Garcia de la Banda1,2, Mark Wallace1
1 Department of Data Science & AI (DSAI), Monash University, Australia
2 ARC Training Centre in Optimisation Technologies, Integrated Methodologies, and Applications (OPTIMA), Australia
[email protected], [email protected], [email protected], [email protected]

Abstract

SAT and propagation solvers often underperform for optimisation models whose objective sums many single-variable terms. MaxSAT solvers avoid this by detecting and exploiting cores: subsets of these terms that cannot jointly take their lower bounds. Previous work demonstrated that manual analysis of cores can help define model reformulations likely to speed up solving for many model instances. This paper presents a method to automate this process. For each selected core, the method identifies the instance constraints that caused it; infers the model constraints and parameters that explain how these instance constraints were formed; and learns the conditions that made those model constraints generate cores while others did not. It then uses this information to reformulate the objective. The empirical evaluation shows this method can produce useful reformulations. Importantly, the method can be useful in other situations that require explaining a set of constraints.

1 Introduction

Combinatorial problems are often tackled using a modelling+solving approach, whose first step is to model the problem's parameters, variables, constraints and objective function (if any) using a modelling language such as AMPL (Fourer, Gay, and Kernighan 1987), OPL (Van Hentenryck 1999), Essence (Frisch et al. 2007) or MiniZinc (Nethercote et al. 2007). Each instantiation of the model parameters with input data yields a model instance, which is then compiled to the format required by the selected solver to find its solutions. This compilation step uses sophisticated methods to generate a flattened instance (often written in a leaner formalism such as Essence' (Rendl 2010) and FlatZinc (Nethercote et al. 2007)) that is no longer intuitive for humans but is efficient for the selected solver. This approach gives users expressive and intuitive languages to model their problems, and frees them from knowing how to best map models onto solving algorithms. Further, model-to-model transformation methods exist to improve a model for many/all of its instances, rather than just the one being flattened (e.g., (Hentenryck et al. 2005; Charnley, Colton, and Miguel 2006; Mears et al. 2015; Leo et al. 2013)).

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

A promising model-to-model transformation method is that of (Leo et al. 2020), which takes advantage of advances made by Lazy Clause Generation (Ohrimenko, Stuckey, and Codish 2007) and MaxSAT core-guided solvers (Andres et al. 2012; Morgado, Dodaro, and Marques-Silva 2014) to improve a common class of models: those with an additive (or separable) objective function, i.e., a sum of terms, each with a single variable. Constraint Programming (CP) and SAT solvers often underperform for models of this class because sums do not yield much propagation. MaxSAT solvers can avoid this by detecting and exploiting cores: subsets of terms that cannot collectively take their lower bounds.
(Leo et al. 2020)'s method uses information from these detected cores to identify terms that can yield better bounds when grouped. It then adds new variables that group those terms, and reformulates the objective to use those variables.

While the above method automatically identifies sets of objective terms that cannot collectively take their lower bounds, all remaining (and very challenging) steps were manual. These include identifying which instance constraints blocked the lower bounds from being assigned to those terms, and what properties of the instance data caused those constraints to be posted. Further, this knowledge needs to be lifted from the instance level to the model level. All these manual steps require a significant degree of knowledge of both the model and the underlying domain. This paper closes the gap by showing how to automate each of these challenging stages, building up to a fully automatic method that constructs reformulated models similar to those manually developed in (Leo et al. 2020).

2 Background

Constraints: A constraint optimisation model M[∆] is a tuple (X[∆], C[∆], D[∆], f[∆]) where, for every element δ of the model's parameter space ∆ mapping each parameter to its value, X[δ] is a set of variables, C[δ] a set of constraints over X[δ], D[δ] a domain mapping each variable x ∈ X[δ] to a set of values D[δ](x), and f[δ] an objective function over X[δ]. C[δ] is logically interpreted as the conjunction of its elements, and D[δ](x) as the conjunction of unary constraints on x ∈ X[δ]. Thus, M[δ] denotes instance (X[δ], C[δ], D[δ], f[δ]). A literal of M[δ] is a unary constraint whose variable is in X[δ]. To solve instance M[δ], CP solvers first apply constraint propagation to reduce domain D[δ] to D′[δ] by executing the propagators associated with the constraints in C[δ] until fixpoint. If D′[δ] is equivalent to false (D′[δ](x) is empty for some x ∈ X[δ]), we say M[δ] fails. If D′[δ] is not equivalent to false and fixes all variables, we have found a solution to M[δ]. Otherwise, M[δ] is split into n sub-instances M[δ]i ≡ (X[δ], C[δ] ∧ ci, D′[δ], f[δ]), 1 ≤ i ≤ n, where C[δ] ∧ D′[δ] ⇒ (c1 ∨ c2 ∨ ... ∨ cn) and the ci are literals (the decisions). These sub-instances are then iteratively searched using traditional branch-and-bound.

Lazy Clause Generation (LCG): LCG solvers extend CP solvers by modifying their propagators to explain domain changes via literals of the form x = d, x ≠ d, x ≥ d, and x ≤ d for d ∈ D[δ](x). An inferred literal ℓ is explained as S → ℓ, where S is a set of literals (interpreted as a conjunction). For example, literal y ≠ 5, inferred by the propagator of constraint x ≠ y given literal x = 5, is explained by {x = 5} → y ≠ 5. Each literal inferred when solving instance M[δ] is stored with its explanation, forming an implication graph. If failure is detected for sub-instance M[δ]i, LCG solvers use this graph to compute a clause L (or nogood): a disjunction of literals that holds under any solution of M[δ] but is inconsistent under M[δ]i. L is then added to C[δ] to avoid failing for the same reasons.
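A minimal sketch of this explanation mechanism for the x ≠ y example (toy Python of ours; real LCG solvers work on trailed domains and clause databases, not plain sets):

def explain_neq(dom_x, dom_y):
    # if x is fixed, prune its value from y's domain and record S -> literal
    if len(dom_x) == 1:
        (v,) = dom_x
        if v in dom_y:
            return dom_y - {v}, ({f"x={v}"}, f"y!={v}")
    return dom_y, None

new_dom_y, explanation = explain_neq({5}, {3, 4, 5})
print(new_dom_y, explanation)   # {3, 4} ({'x=5'}, 'y!=5')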
Core-Guided Optimisation: CP solvers underperform for additive objectives because the lower bound of any objective term, say oti of variable xi for minimising function f[δ] ≡ ot1 + ··· + otn, can often be achieved by increasing the value of others, and f[δ]'s lower bound is inferred from those of its terms. Core-guided solvers avoid this by first fixing all terms to their lower bounds and then searching for a solution. If this succeeds, an optimum has been found. Otherwise, they return a core: a (hopefully small) subset of terms that cannot collectively take their lower bounds. They then update f[δ]'s bound and adjust the lower bounds of the core terms. Finally, they re-solve, iterating until a solution is found. Core-guided solvers differ in their handling of cores, term bounds, and f[δ]. We assume they all return a set S that is empty if the current sub-instance M[δ]i is satisfiable, and otherwise contains literals of the form x ≥ k, where variable x appears in f[δ] and at least one literal holds. Extending LCG solvers to support this interface is straightforward. The LCG core-guided solver GEAS (Gange et al. 2020) is used herein. It is based on OLL (Andres et al. 2012), which progressively reformulates f[δ] to use the cores: upon finding core S, OLL adds a new variable p = Σ_{(x≥k)∈S} x to M[δ] (whose lower bound goes up by at least 1) and rewrites f[δ] in terms of p. GEAS improves the basic OLL with stratification (Marques-Silva et al. 2011; Ansótegui et al. 2012), extracting cores on high-coefficient terms first; weight-aware core extraction (Berg and Järvisalo 2017), delaying adding new variables until no cores are found; and hardening (Ansótegui et al. 2012), upper-bound propagation on new variables. Since the value of k in ∀(x ≥ k) ∈ S is irrelevant to our method, we will refer to S as the raw core and instead use the set {x | (x ≥ k) ∈ S} as our core.

Paths: Variable and constraint paths (Leo and Tack 2017) assign a unique identifier to each variable and constraint in a flattened instance that connects them to the model's source code. They describe the path the compiler took when flattening those variables and constraints.

Minimum Unsatisfiable Subset (MUS): Given instance M[δ] ≡ (X[δ], C[δ], D[δ], f[δ]) where C[δ] is unsatisfiable, the subset C′[δ] ⊆ C[δ] is a MUS of C[δ] iff C′[δ] is unsatisfiable and removing any constraint from C′[δ] makes it satisfiable. Our method uses FindMUS (Leo and Tack 2017), a MUS enumeration tool available for MiniZinc, to find the MUSes associated to a core (or, rather, to the nogood obtained by negating its raw core).

Running example: We use the running example of (Leo et al. 2020): the Resource-Constrained Project Scheduling Problem with Weighted Earliness and Tardiness cost, which schedules tasks of a given duration and desired start time, subject to precedence and cumulative resource constraints. Its aim is to find a schedule that minimises the weighted earliness and tardiness costs of tasks not completed by their desired times. The model M[∆] used (rcpsp-wet in the MiniZinc benchmarks) has the following objective f[∆]:

objective = sum (i in Tasks) (                |Original
    % earliness cost
    deadline[i,2]*max(0,deadline[i,1]-s[i]) +
    % tardiness cost
    deadline[i,3]*max(0,s[i]-deadline[i,1]));

i.e., the sum of the earliness and tardiness costs for every task i in input set Tasks, where parameter deadline[i,1] is the desired start time for i, parameter deadline[i,2] (deadline[i,3]) is the cost per time unit for i to start before (after) its desired time, and variable s[i] represents i's start time. For reasons of space, we will denote the terms deadline[i,2]*max(0,deadline[i,1]-s[i]) and deadline[i,3]*max(0,s[i]-deadline[i,1]) by e(i) and t(i), respectively. Note that |Original is used to mark the code as part of the original model.
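To fix intuitions, here is a small Python rendering of this objective on toy data (the parameter values below are ours and purely hypothetical; tasks are zero-based):

deadline = [(5, 2, 3), (9, 1, 4)]   # (desired start, earliness cost, tardiness cost)
s = [7, 8]                          # start times in some candidate schedule

def e(i): return deadline[i][1] * max(0, deadline[i][0] - s[i])
def t(i): return deadline[i][2] * max(0, s[i] - deadline[i][0])

print(sum(e(i) + t(i) for i in range(len(s))))   # 0 + 6 + 1 + 0 = 7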
3 Method Overview

Our automated method follows the three main steps of (Leo et al. 2020) shown in Figure 1. While Step 1 was mostly automated in (Leo et al. 2020), the other two were manual. This section illustrates the main issues found when automating these two steps and how we tackled them, using as example the rcpsp-wet model M[∆] instantiated with input data δ1 ∈ ∆ from file j30_1_3-wet.dzn and δ2 ∈ ∆ from j30_43_10-wet.dzn.

[Figure 1: Method overview showing changes w.r.t. (Leo et al. 2020). Blue marks new, automated sub-steps; hatched yellow, old ones that were already automated but executed separately by hand; yellow (red), old ones that are now (still not) automated. Step 1 (finding core candidates) comprises solver instrumentation; collecting, renaming and minimising the cores; scaling scores; model analysis & instrumentation; and collecting new variable candidates. Step 2 (selecting good candidates) comprises finding patterns among the cores, detecting simple reformulations, acquiring explanations and finding explanation patterns. Step 3 (reformulating the model) comprises reformulating the objective (simple reformulation, tuple enumeration & scoring, collecting features, training scoring functions) and adding bounds for new variables.]

Consider the four cores found by Step 1 shown in Figure 2: {e(16),t(25)}, {e(8),t(14)}, {e(17),t(27)}, and {e(21),t(8)}; the first three from instance M[δ1], the last from M[δ2]. Step 2 selects candidate cores by determining their cause. (Leo et al. 2020) do this by first grouping cores that follow identical-up-to-renaming patterns. It is easy to see (and automatically infer; see Step 2.1) that our four cores follow pattern {e(A),t(B)}, shown in the "Patterns" column in Figure 2. Automatically identifying the cause of the cores (and thus of the pattern) is more complex. We do this by using FindMUS to find the instance constraints that generate the nogood associated to each core (see Step 2.2). As shown by the "Explanations" column labeled (1) in Figure 2, this yields one instance constraint for each of the first two cores and two instance constraints for each of the last two cores. We then use constraint paths to automatically trace back all six instance constraints (arrow labeled (2)) to the following model constraint (shown at the top of Figure 2), which ensures each task i finishes before its successor task j starts:

forall (i in Tasks, j in suc[i])              |Original
    (s[i]+d[i]<=s[j]);

where variable s[i] is as before, and parameters d[i] and suc[i] give i's duration and successor set, respectively. With this information, modellers should know that the first two cores are caused by a task and its successor (16 and 25 in the first core; 8 and 14 in the second), while the last two are caused by a chain of three tasks that starts and ends with the tasks in the cores (17 and 27 in the third core; 21 and 8 in the fourth) and has a middle task (24 and 12, respectively). To automatically achieve this, our method first automatically annotates the model (see the extra sub-step "Model analysis and instrumentation" of Step 1) to connect the data in the instance constraints to the parameters in the model constraints. This is used later (Step 2.2, shown here as an arrow labelled (3)) to automatically generate explanations for each core (the column labelled "Generators"), and the associated explanation patterns (the two boxes outlined in orange coming from the arrows labeled (4)).

Model constraint (top of Figure 2): constraint forall (i in Tasks, j in suc[i]) (s[i] + d[i] <= s[j])

Nogoods        Patterns     Explanations         Generators                   Explanation Patterns
e(16), t(25)   e(A), t(B)   s[16] + 8 <= s[25]   16 in Tasks, 25 in suc[16]   A in Tasks, B in suc[A]
e(8),  t(14)   e(A), t(B)   s[8]  + 7 <= s[14]   8 in Tasks, 14 in suc[8]     A in Tasks, B in suc[A]
e(17), t(27)   e(A), t(B)   s[17] + 2 <= s[24]   17 in Tasks, 24 in suc[17]   A in Tasks, C in suc[A]
                            s[24] + 4 <= s[27]   24 in Tasks, 27 in suc[24]   C in Tasks, B in suc[C]
e(21), t(8)    e(A), t(B)   s[21] + 8 <= s[12]   21 in Tasks, 12 in suc[21]   A in Tasks, C in suc[A]
                            s[12] + 8 <= s[8]    12 in Tasks, 8 in suc[12]    C in Tasks, B in suc[C]

Figure 2: Step 2 with cores {e(16),t(25)}, {e(8),t(14)}, {e(17),t(27)}, and {e(21),t(8)}.
Importantly, if the cores of a core pattern are caused by different explanations (as is the case with {e(A),t(B)}), they should be treated differently. Once explanation patterns for all cores are found, Step 3 reformulates the model. Automating this is also complex; it requires finding the conditions that make those explanations constraining enough to generate cores for those tasks but not for others. Otherwise, we might group tasks in the objective that do not generate cores.

[Figure 3: Part of a Gantt chart for instance M[δ1].]

Consider the Gantt chart in Figure 3, which shows some of the tasks in M[δ1], where the y axis shows the task number i, the x axis represents time, the lengths of the rectangles show the task durations d[i], each task i appears at its desired start time deadline[i,1], and arrows represent suc[i] dependencies. The three grey tasks correspond to core {e(17),t(27)} while the two orange ones correspond to {e(16),t(25)}. However, many other tasks satisfy the same explanation patterns and generate no cores; mostly chains of two or three tasks. This is because the constraints posted by the instantiation of those explanations are not "tight" enough to fail and, thus, do not change the objective's bound (in contrast to those associated to cores). Modellers should be able to see that the explanation pattern of a chain of three tasks is constraining only if deadline[A,1] + d[A] + d[C] > deadline[B,1], i.e., if the period of time between the ideal start times of tasks A and B is shorter than the sum of the durations of tasks A and C, implying that A and/or B must be scheduled earlier or later than their respective target times.

To infer this automatically, we must identify the properties (e.g., duration and desired start time) that might affect the tightness of the posted constraints. We can then use standard machine learning techniques (such as regression) to train predictors of nogood importance. Section 6 explains how we tackled this and Section 7 shows the associated results.
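As a concrete check of the tightness condition above (toy Python with hypothetical parameter values of our own):

def chain_is_tight(A, C, B, desired, d):
    # the chain A -> C -> B is constraining iff desired[A] + d[A] + d[C] > desired[B]
    return desired[A] + d[A] + d[C] > desired[B]

desired = {'A': 4, 'C': 6, 'B': 7}   # hypothetical desired start times
d       = {'A': 2, 'C': 3, 'B': 2}   # hypothetical durations
print(chain_is_tight('A', 'C', 'B', desired, d))   # 4 + 2 + 3 > 7 -> True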
Core minimisation: The order in which core variables are introduced by solvers can yield cores with redundant terms, i.e., terms that, if removed, do not affect the core's reduction in objective bound. This sub-step aims to find a minimal subset of the core terms that yields the same bound, as this can uncover patterns between cores with redundant terms and those without them. We do this by first computing the bound achieved by minimising the sum of the core's terms, and then using an adapted MUS enumeration strategy to find a minimal cardinality subset that yields the same bound. As this is expensive, our method can be asked to only analyse cores with high scores and/or a small number of terms, and can be terminated with a time-limit, yielding the smallest core found so far. Core minimisation was not in (Leo et al. 2020) because that work focused on small, easy to understand cores that happened to be already minimal.

Scoring: Each core of M[δ] is assigned a score representing its effectiveness in solving M[δ]. It is computed as its objective bound improvement divided by the sum of the improvements for all cores of M[δ]. Importantly, the division allows us to compare the effectiveness of cores across instances, which is needed in Step 2.1 for scoring core patterns. Scoring was not part of (Leo et al. 2020) because that work assumed the most effective cores are those found early in the search, which is often the case for a single instance.

Model Analysis & Instrumentation: This sub-step modifies model M[∆] to generate information used by Step 2 to find each core's explanation, e.g., to find (17 in Tasks, 24 in suc[17]), (24 in Tasks, 27 in suc[24]) in the upper right box of Figure 2 for core {e(17),t(27)}. This requires linking the instance constraints that caused the core (i.e., s[17]+2<=s[24] and s[24]+4<=s[27]) with the generator conditions of the model constraint (i in Tasks, j in suc[i]) that, when instantiated with (i=17,j=24) and (i=24,j=27), produced the instance constraints. Information about generator conditions is lost during flattening due to, e.g., loop unrolling. To keep it, we automatically add to each constraint in M[∆] annotations that describe the data dependencies used (and lost) when flattening its expressions. This is achieved by a depth-first traversal of the MINIZINC Abstract Syntax Tree (AST) that traverses conjunctions (forall, /\) and if constraints to collect their generator variables (e.g., the i and j above), data dependencies (Tasks, suc[i]), and any conditions in if or where expressions of an AST node. Boolean expressions are annotated with the collected data dependencies and either recursed upon (if one of the above) or backtracked over. The generated code below shows the result of this process for the model constraint that yields core {e(17),t(27)}:

forall ( i in Tasks ) (                      |Generated
  forall ( j in suc[i] ) (
    (s[i]+d[i]<=s[j])
      ::data(3, 5, "in", "j", "suc[i]")
      ::data(3, 4, "assign", "j", show(j))
  ) ::data(1, 2, "in", "i", "Tasks")
    ::data(1, 1, "assign", "i", show(i)));

where the forall expression has been split into two nested foralls (for i and for j) to illustrate the depth-first traversal. The first argument of each data annotation is the AST depth, used to alias generator variables correctly. The second is a counter used for ordering the annotations to simplify pattern matching. The third is the annotation type, which currently only includes: in, the index set of a generator variable (e.g., i in Tasks); assign, the value assigned to a generator variable (i=1); and if, the condition on which this constraint depends (none in this case). Note that |Generated is used to mark the code as generated MINIZINC code.
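To illustrate how such instantiated annotations later turn into explanations, the following is a minimal Python sketch of ours; the tuple layout (depth, order, kind, variable, value) and the function name are hypothetical, and the naive string substitution stands in for the real matching over constraint paths.

def explain(annotations):
    # Pair each generator variable's "in" annotation with its "assign"
    # value, yielding explanation lines such as "24 in suc[17]".
    anns = sorted(annotations, key=lambda a: a[1])   # order counter
    index_sets, values = {}, {}
    for depth, _, kind, var, value in anns:
        if kind == "in":
            index_sets[(depth, var)] = value         # e.g. i in Tasks
        elif kind == "assign":
            values[(depth, var)] = value             # e.g. i = 17
    lines = []
    for key, idx_set in index_sets.items():
        # Substitute known generator values inside the index set, so
        # "suc[i]" with i = 17 becomes "suc[17]" (naive substitution).
        for (_, v), x in values.items():
            idx_set = idx_set.replace(v, str(x))
        lines.append(f"{values.get(key, key[1])} in {idx_set}")
    return lines

# Annotations of the instance constraint s[17]+2<=s[24]:
anns = [(1, 1, "assign", "i", 17), (1, 2, "in", "i", "Tasks"),
        (3, 4, "assign", "j", 24), (3, 5, "in", "j", "suc[i]")]
print(explain(anns))    # ['17 in Tasks', '24 in suc[17]']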
During flattening, annotations are propagated down, yielding FLATZINC constraints with instantiated data annotations that act as path conditions: the conjunction of constraints that allowed us to reach this program point. For example, annotation data(1, 1, "assign", "i", show(i)) indicates that generator variable i took the value given by function show(i) when the instance constraint was flattened.

5 Step 2: Selecting Good Candidates
The first new sub-step of Step 2 (see Figure 1) bypasses the expensive parts of Steps 2 and 3 if it can detect cores that lead to simple, pre-determined reformulations. This sub-step is discussed in Appendix A; the remaining sub-steps are discussed here.

Step 2.1: Find Patterns Among the Cores
We automatically generate core patterns using the approach of (Zeighami et al. 2018) to compute a most specific generalisation (MSG) per subset of cores with similar terms. In doing this, we keep track of the mapping between each pattern and its cores, and of each pattern's score, computed as the sum of those of its cores. For example, the MSG of cores {e(21),t(8)} and {e(17),t(27)} yields core pattern {e(A),t(B)} and mapping {{A/21,B/8},{A/17,B/27}}, stating they both contain terms of the form e(A) and t(B), where pattern variables A and B take respectively values 21 and 8 in the first core, and 17 and 27 in the second. If each of these cores improves the objective by 3%, their pattern will have a score of 0.06. Currently, the MSG is approximated by first sorting the terms in every core and then assuming an ordered match between terms. Even then, this step can be expensive if there are many cores. Importantly, when computing an MSG our system always checks whether simple pre-defined relationships occur between the instantiated pattern variables. In particular, it looks for =, ≠, <, >, and = ±1 relationships either between the variables themselves or when used as index into parameter arrays, such as A=suc[B]+1. We refer to these relationships as facts if they occur in all cores of a pattern, and use them in Step 3.

Step 2.2: Interpret the Patterns
As illustrated in Section 3, each identified core pattern needs to be interpreted by first acquiring explanations for its cores and then finding patterns among these explanations.

Acquire explanations: To acquire a core explanation, our method must find the instance constraints that caused the core, match them to the associated model constraints, and use the instrumented version of these constraints to generate the explanation. Let c be the (raw) core we want to explain from flattened instance M[δ]. Our method starts by negating c to obtain nogood N = ¬c and constructs the unsatisfiable instance φ = M[δ] ∧ N. Then, it uses FINDMUS to identify a MUS in φ, i.e., a minimal subset of instance constraints that are incompatible with N and, thus, caused c. Currently, our method stops after finding the first MUS, as finding and later using more can be very time consuming (see next subsection).
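In practice, FINDMUS performs the actual extraction; purely as an illustration of the underlying idea (the same shrinking pattern also underlies the core-minimisation sub-step of Step 1), the following Python sketch shows a deletion-based shrinking loop, assuming a boolean oracle satisfiable(constraints) is available.

def shrink_to_mus(constraints, nogood, satisfiable):
    # Deletion-based shrinking: drop one constraint at a time and keep it
    # out whenever the remainder together with the nogood stays
    # unsatisfiable; the survivors form a subset-minimal MUS.
    mus = list(constraints)
    for c in list(mus):
        trial = [x for x in mus if x is not c]
        if not satisfiable(trial + [nogood]):
            mus = trial        # c was not needed for the conflict
    return mus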
Consider, for example, the following output produced by GEAS for the flattened instance of model M[∆] = rcpsp-wet with data δ ∈ ∆ from j30_1_3-wet.dzn:

NEW BOUND: 63                                |Output
CORE: max(0, deadline[17,1]-s[17]) >= 3,
      max(0, s[27]-deadline[27,1]) >= 1
VAR: x242 = max(0, deadline[17,1]-s[17])
          + max(0, s[27]-deadline[27,1])
NEW BOUND: 83
CORE: x242 >= 3

where the last line shows raw core c ≡ x242>=3 whose associated nogood is N ≡ x242<3. Note that |Output is used to mark plain output. Applying the definition of variable x242 yields raw core e(17)+t(27)>=3 and associated nogood e(17)+t(27)<3. Applying FINDMUS to the result of appending N and its variables' definitions to the end of M[δ] yields a MUS combining N with 2 additional constraints: s[17]+2<=s[24] and s[24]+4<=s[27]. These are the instance constraints in Figure 2 (arrows labeled ①) that caused this core.

Once a MUS is found, our method uses constraint and variable paths to identify the model constraint from which each instance constraint in this MUS comes, and match its model variables with the instance ones. Then, thanks to the analysis and instrumentation of Step 1, we can substitute any occurrences of constants that match the pattern variables in the constraint and data annotations to generate the core explanation. For example, consider again the MUS with constraints s[17]+2<=s[24] and s[24]+4<=s[27], in addition to nogood N. These two instance constraints are mapped to the model constraint shown in Section 3, and the instance variables (e.g., s[27]) to the model ones (s[i]). With the annotated constraint shown in Section 4, they are used to generate FLATZINC code similar to the following:

(s[24] - s[27] <= -4)                        |FlatZinc
  :: data("in", "j", "suc[i]")
  :: data("assign", "j", 27)
  :: data("in", "i", "Tasks")
  :: data("assign", "i", 24);
(s[17] - s[24] <= -2)
  :: data("in", "j", "suc[i]")
  :: data("assign", "j", 24)
  :: data("in", "i", "Tasks")
  :: data("assign", "i", 17);

which tells us 24 is the successor of 17, and 27 of 24. From this our method generates the explanations shown in the upper right blue box of Figure 2 for core {e(17),t(27)}.

Find Explanation Patterns: As shown in Figure 2, cores with the same pattern can have different explanations, which may require different model reformulations. Thus, we should only group cores with the same "kind" of explanation, i.e., the same explanation pattern. To do this, for each core pattern our method collects the explanations of all its cores. It then substitutes the core pattern variable mappings into the explanations. Finally, it computes the MSG of the resulting explanations. For example, consider again the cores {e(17),t(27)} and {e(21),t(8)}; pattern {e(A),t(B)} with mapping {{A/17,B/27},{A/21,B/8}}; and their explanations shown in the upper and lower right blue boxes in Figure 2. Applying the mapping for core {e(17),t(27)} to its explanation generates:

(A in Tasks, 24 in suc[A])
(24 in Tasks, B in suc[24])

The same is generated for core {e(21),t(8)} with 12 rather than 24. Thus, their MSG yields explanation pattern:

(A in Tasks, C in suc[A])
(C in Tasks, B in suc[C])

and mapping {{A/17,B/27,C/24},{A/21,B/8,C/12}}. This automatically infers that these cores share an explanation pattern different (as shown in Figure 2) from that of cores {e(8),t(14)} and {e(16),t(25)}. Thus, the two kinds are later used separately. Note that with only one MUS per core, our method may miss some explanations. Exploring the accuracy/speed trade-off is interesting future work.
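The MSG computation over explanations can be pictured with the following Python sketch of ours; the triple representation (value, set_name, set_arg) for a generator such as "24 in suc[17]", and the assumption that the two explanations have equal length with matching set names, are simplifications.

def msg(expl_a, expl_b):
    # Each explanation is a list of (value, set_name, set_arg) triples;
    # ("24", "suc", "17") stands for "24 in suc[17]". Tokens that agree
    # are kept; disagreeing constants become reusable pattern variables.
    fresh = iter("ABCDEFGH")
    var_of = {}
    def gen(x, y):
        return x if x == y else var_of.setdefault((x, y), next(fresh))
    pattern = []
    for (va, sa, ia), (vb, _, ib) in zip(expl_a, expl_b):
        v = gen(va, vb)
        i = gen(ia, ib) if ia is not None else None
        pattern.append(f"{v} in {sa}[{i}]" if i is not None
                       else f"{v} in {sa}")
    maps = ({v: x for (x, _), v in var_of.items()},
            {v: y for (_, y), v in var_of.items()})
    return pattern, maps

p, m = msg([("17", "Tasks", None), ("24", "suc", "17")],
           [("21", "Tasks", None), ("12", "suc", "21")])
# p == ['A in Tasks', 'B in suc[A]']
# m == ({'A': '17', 'B': '24'}, {'A': '21', 'B': '12'})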
6 Step 3: Reformulating the Model
Step 3.1: Reformulate the Objective
As explained in Section 3, the method must now determine the conditions that make some instantiations of an explanation pattern yield constraints tight enough to fail, creating a core that changes the objective's bound, while other instantiations do not. This section discusses how we achieve this.

Tuple Enumeration & Scoring: For each explanation pattern EP we perform three actions. First, we generate a new MINIZINC model MEP[∆] that enumerates all possible instantiations of EP for the parameter values of any δ ∈ ∆. For the EP in the previous section, this means adding:

forall(pA in Tasks, pC in suc[pA],           |Generated
       pB in suc[pC])
  (trace("Tuple: \(pA) \(pB) \(pC)\n"));

to an MEP[∆] containing the parameter definitions from M[∆] (here, the suc array of successors). The trace function, which outputs information during compilation, will output the values for pA, pB, and pC, i.e., the possible assignments to the pattern variables of EP. Thus, compiling each MEP[δ] enumerates all possible tuples of A, B and C, and thus all instantiations of EP, in M[δ].

Second, we identify the types of terms in M[∆]'s objective f[∆], and connect them to the cores in which they appear. Identifying term types involves traversing the AST to collect any top level definitions, including those under top level conjunctions. We start from the objective variable definition and expand any sum expressions, collecting generators (i in Tasks), coefficients (deadline[i,3]), and the term variables (max(0,s[i]-deadline[i,1])) at the leaves of the tree, with their location. We identify as leaves any non-linear expression or call to a function outside M[∆]. We also collect conditions from generators, and enter branches of if statements. For rcpsp-wet, the two term types (i.e., earliness and tardiness) in the objective yield:

TERM TYPE 0:                                 |Output
  Loc : "rcpsp-wet_orig.mzn|122|26|122|54|"
  Gens: ["i in Tasks"]
  Coef: ["deadline[i,3]"]
  Var : "max(0,s[i]-deadline[i,1])"
TERM TYPE 1:
  Loc : "rcpsp-wet_orig.mzn|120|26|120|54|"
  Gens: ["i in Tasks"]
  Coef: ["deadline[i,2]"]
  Var : "max(0,deadline[i,1]-s[i])"

where each term type shows its path: a string locating the term in M[∆] as model-name|start-line|start-column|end-line|end-column|; the generator for the term variable (if any); the coefficient for the term (if any); and the actual variable. Connecting the term types to the core variables is done by matching the paths of the FLATZINC variables in the core with those of the term type to be used.

And third, we score each tuple generated by MEP[δ] by solving M[δ], minimizing the penalty the terms of the core pattern can yield when instantiated to the tuple values. For example, for the above EP and tuple (A=17, B=27, C=24) the objective is to minimize:

objective =                                  |Generated
  deadline[17,2]*max(0,deadline[17,1]-s[17])
  + deadline[27,3]*max(0,s[27]-deadline[27,1])

This gives us an idea of how useful this grouping will be. Flattening for each possible group can be slow, so we modified GEAS to optimize arrays of objectives (thus flattening only once), and create arrays of at most 1,000 objectives.
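The enumeration performed by the generated forall/trace above corresponds, in plain Python, to the following sketch (the dictionary representation of suc is our assumption):

def enumerate_tuples(tasks, suc):
    # All instantiations of the three-task explanation pattern
    # (A in Tasks, C in suc[A]; C in Tasks, B in suc[C]).
    for a in tasks:
        for c in suc.get(a, ()):
            for b in suc.get(c, ()):
                yield (a, b, c)   # (A, B, C), matching the trace order

suc = {17: {24}, 24: {27}, 21: {12}, 12: {8}}
print(list(enumerate_tuples(sorted(suc), suc)))
# [(17, 27, 24), (21, 8, 12)]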
Collect Features: The training data must include model parameters likely to be useful features. Our method adds any parameters in the objective terms and in the constraints used to build the explanations. For our example EP, it adds deadline[A,1] and deadline[B,1] from the objective, and d[A] from the constraints used to build its explanations. It then generates for each EP an expression that outputs the values of these parameters when instantiated using the tuples found for that EP. For our example EP, it yields:

forall (                                     |Generated
  pA in Tasks, pB in suc[pA] where pA < pB)
  (trace("\(pA) \(pB) \(deadline[pA,1]) \(deadline[pB,1]) \(d[pA])\n"))

where pA < pB, which was discovered when computing the MSG for EP, filters out spurious candidates.

Train scoring functions: We can now construct a training dataset for each EP. Consider, for brevity, the EP for a chain of two tasks (A in Tasks, B in suc[A]). The method generates features by instantiating the expression:

A  B  deadline[A,1]  deadline[B,1]  d[A]  isCore

with the values of A and B from each tuple, and isCore, a 0/1 value indicating if this tuple incurred a penalty. The aim is to build a scoring function that can predict how useful grouping the terms of the cores explained by EP may be. Our implementation uses the LinearSVC classifier from Scikit-learn (Pedregosa et al. 2011) to learn the coefficients of a function that predicts isCore. A trained scoring function approximating the overlap between the target start times of tasks A and B above is:

function float: score_0(int: A,              |Generated
                        int: B) =
  max([deadline[A,2], deadline[B,3]]) * (
      0.4644 * deadline[A,1]
    + 0.4657 * d[A]
    - 0.4621 * deadline[B,1]
    - 0.2326);

where the numeric constants are the learned coefficients. The resulting sum is multiplied by the largest of the coefficients of the terms involved (deadline[A,2] and deadline[B,3]) to infer a score that approximates the penalty of moving these tasks. Code is then added to the model that uses these scoring functions to compute scores for all possible groupings. These scores are then used to decide which groups should be introduced in the final objective.

Grouping terms automatically: The last step adds the following code to the original MINIZINC model:

objective =                                  |Generated
  decompose_bottomup(get_order_array(...));

where function decompose_bottomup takes an array of elements [x1, ..., xn], creates a new variable z_i = x_{2i-1} + x_{2i} for each pair of adjacent elements, and recursively calls itself with the new array (see the sketch at the end of this section); and function get_order_array adds new variables by ranking candidate groupings and grouping those with the highest score. It outputs an array containing these new variables and any remaining un-grouped terms.

Step 3.2: Add Bounds for New Variables
While the reformulations from (Leo et al. 2020) were useful for LCG solvers, traditional CP solvers required the introduction of stronger upper bounds on new variables to improve performance. Our automated approach produces semantically equivalent objective functions, as the terms are simply reordered. Adding bounds to the introduced variables requires stronger reasoning since, if performed incorrectly, it could yield incorrect solutions. This is not tackled yet.
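As a minimal Python sketch of the pairing recursion behind decompose_bottomup (using plain integers in place of the new MiniZinc variables, and assuming an un-paired last element is simply carried over):

def decompose_bottomup(xs):
    # Pair adjacent elements, z_i = x_{2i-1} + x_{2i}, and recurse on
    # the shorter array until a single element remains.
    if len(xs) <= 1:
        return xs[0]
    paired = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
    if len(xs) % 2:
        paired.append(xs[-1])   # odd length: carry the last element over
    return decompose_bottomup(paired)

print(decompose_bottomup([1, 2, 3, 4, 5]))   # 15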
7 Experimental Evaluation
This section evaluates the results of applying our automated method to the rcpsp-wet model. In doing this, it uses a data set of 12 instances, split into 6 easy (30 task), 3 medium (60 task), and 3 hard (90 task). All experiments were performed on an Intel Xeon 8260 CPU with 24 cores and 268.55GB of RAM, using the GEAS solver in free-search mode with core-guided features disabled for solving. The results of applying our method to the other benchmarks in (Leo et al. 2020) are discussed in Appendix B.

EMH:N   T(s)    Cs   Es   EPs   PVs   S     F
300:1   39.9    40   20   2     3     0.8   1.3
210:1   60.2    54   26   3     4     1.0   1.6
120:1   88.0    61   35   4     5     1.0   2.5
111:1   100.7   84   38   3     4     0.9   1.7
300:A   56.4    40   26   2     3     0.8   1.3
210:A   258.4   54   40   4     5     0.9   11.6
120:A   352.3   61   51   5     5     1.0   12.2
111:A   473.9   84   60   4     5     0.9   12.5

Table 1: Using different training sets for rcpsp-wet.

Table 1 shows a summary of the impact of several training sets on the training process and on the resulting models. The EMH:N column shows the number of easy, medium, and hard instances used for training, followed by whether the resulting reformulation is based on all (:A) patterns or the highest scoring one (:1). Column T shows the training time; Cs the number of cores found in the selected instances; Es the number of explanations acquired; EPs the number of resulting EPs; and PVs the largest number of pattern variables in a pattern. Each instance was solved twice per reformulation, and the ratio of the average solving and flattening times to those of the hand-reformulated model was computed. The S and F columns show the geometric means of these ratios.

The choice of training set can significantly impact the resulting reformulation. Using the highest scoring pattern resulted in faster training and models that flatten faster than using all patterns. Interestingly, the solve times were not very different from those of the hand-written model.

Table 2 compares the performance of the original rcpsp-wet model (column O) with that of five reformulations for several instances. The first three come from (Leo et al. 2020): its hand-written direct model, which groups the earliness/tardiness terms of direct successor tasks that overlap based on the cost of enforcing their precedence (H); a naïve reformulation that groups terms in order of occurrence (N); and a "weighted" naïve reformulation that groups terms sorted by their coefficients (NW). The last two (300:1 and 111:A) are the best and worst performing reformulations from Table 1. A timeout of 600 seconds was used and is represented by TO, underscored if the best objective value was found (but not proven optimal). Results for other reformulations and instances are presented in Appendix B.

Instance    O        H      N        NW      300:1   111:A
30_27_5     0.52     0.18   2.11     1.60    0.15    0.71
30_43_10    6.18     0.67   10.99    5.07    0.70    1.17
30_44_8     1.49     0.24   1.66     1.24    0.30    1.07
60_19_6     158.72   4.26   180.32   84.36   0.79    3.63
60_28_3     TO       2.43   TO       TO      3.86    5.72
60_36_8     TO       4.64   TO       TO      4.60    10.36
90_10_10    TO       TO     TO       TO      TO      TO
90_19_7     TO       TO     TO       TO      TO      TO
90_48_4     TO       TO     TO       TO      TO      TO

Table 2: Flatten and solve times for models of rcpsp-wet.

The results show that while the naïve reformulations are not faster than the original model, the performance of the reformulations produced by our method (300:1 and 111:A) is comparable to that of the hand-written model.

8 Interesting Applications of the Method
At its heart, our method first groups cores by their common explanation patterns (Step 2), and finds the conditions that make those explanations generate cores rather than feasible constraints (Step 3.1). While this is done for cores, the same can be done for any set of infeasible constraints.

A particularly interesting application is explainability. Consider answering the query "why didn't you perform task X after task Y?" for an instance of rcpsp-wet by modifying it to force X to be executed before Y.
If it is infeasible, a useful system will try to find MUSes to help users understand why. This would be easier if the explanation for each MUS (first part of Step 2.2) is also displayed. Further, the number of MUSes found is often large enough to overwhelm users. This could be avoided if they can be grouped according to their explanation patterns (Step 2), reducing their number and explaining the failure. Consider, for example, an instance of rcpsp-wet where the user asked to perform a task before five of its predecessors. It would be clearer for the user to get one MUS pattern and its explanation indicating that the task needs to be after all its predecessors, than five MUSes, each for one predecessor. Further, MUS enumeration could be sped up if patterns among MUSes could be detected and the search modified to forbid more MUSes that match the pattern. We will explore such applications in future work.

9 Conclusions and Future Work
This paper shows how to automate the process defined in (Leo et al. 2020) to use the cores inferred by a core-guided solver to reformulate a model in such a way that it is likely to speed up solving for many model instances. To achieve this, the method instruments the model to find the constraints and associated explanations that caused each core, groups cores that share the same explanation pattern, and trains a scoring function to predict how useful it is to group the objective terms associated to the cores in that explanation pattern. The results of applying the method to the rcpsp-wet model show that the performance of the reformulated model (1) depends on the instances used to detect cores, find the associated explanation patterns and train the scoring function; and (2), in the case of rcpsp-wet, is comparable to that of a hand-crafted reformulation.

Future work includes exploring several accuracy/speed trade-offs such as the use of more than one MUS per core (particularly disjoint ones), more complex MSGs, and different instance selection strategies. We also plan to explore other uses of the method, with special focus on applications related to constraint explainability. Finally, we would like to explore the relationship between the new variables introduced by our system and the theory of backdoors (Williams, Gomes, and Selman 2003).

Acknowledgements
This work was partly funded by Australian Research Council grant DP180100151. This material is based on research partially sponsored by the DARPA Assured Neuro Symbolic Learning and Reasoning (ANSR) program under award number FA8750-23-2-1016.

References
Andres, B.; Kaufmann, B.; Matheis, O.; and Schaub, T. 2012. Unsatisfiability-based optimization in clasp. In Proc. ICLP Technical Communications, volume 17 of LIPIcs, 211–221. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik.
Ansótegui, C.; Bonet, M. L.; Gabàs, J.; and Levy, J. 2012. Improving SAT-Based Weighted MaxSAT Solvers. In Proc. CP, volume 7514 of Lecture Notes in Computer Science, 86–101. Springer.
Berg, J.; and Järvisalo, M. 2017. Weight-Aware Core Extraction in SAT-Based MaxSAT Solving. In Proc. CP, volume 10416 of Lecture Notes in Computer Science, 652–670. Springer.
Charnley, J.; Colton, S.; and Miguel, I. 2006. Automatic Generation of Implied Constraints. In Proceedings of ECAI 2006: 17th European Conference on Artificial Intelligence, August 29 – September 1, 2006, Riva del Garda, Italy, 73–77. Amsterdam, The Netherlands: IOS Press. ISBN 1-58603-642-4.
Fourer, R.; Gay, D. M.; and Kernighan, B. W. 1987. AMPL: A mathematical programming language. AT&T Bell Laboratories, Murray Hill, NJ 07974.
Frisch, A. M.; Grum, M.; Jefferson, C.; Martínez, B.; and Miguel, H. I. 2007. The design of ESSENCE: a constraint language for specifying combinatorial problems. In IJCAI-07, 80–87.
Gange, G.; Berg, J.; Demirović, E.; and Stuckey, P. J. 2020. Core-guided and Core-boosted Search for CP. In Hebrard, E.; and Musliu, N., eds., Proceedings of the Seventeenth International Conference on Integration of Artificial Intelligence and Operations Research Techniques in Constraint Programming (CPAIOR 2020), 205–221. Springer.
Hentenryck, P.; Flener, P.; Pearson, J.; and Ågren, M. 2005. Compositional Derivation of Symmetries for Constraint Satisfaction. In Zucker, J.-D.; and Saitta, L., eds., Abstraction, Reformulation and Approximation, volume 3607 of LNCS, 234–247. Springer Berlin Heidelberg. ISBN 978-3-540-27872-6.
Leo, K.; Gange, G.; de la Banda, M. G.; and Wallace, M. 2020. Core-Guided Model Reformulation. In Simonis, H., ed., Principles and Practice of Constraint Programming, 445–461. Cham: Springer International Publishing.
Leo, K.; Mears, C.; Tack, G.; and de la Banda, M. G. 2013. Globalizing Constraint Models. In Schulte, C., ed., CP, volume 8124 of LNCS, 432–447. Springer. ISBN 978-3-642-40626-3.
Leo, K.; and Tack, G. 2017. Debugging Unsatisfiable Constraint Models. In Salvagnin, D.; and Lombardi, M., eds., CPAIOR 2017, volume 10335 of Lecture Notes in Computer Science. Springer. ISBN 978-3-319-59775-1.
Marques-Silva, J.; Argelich, J.; Graça, A.; and Lynce, I. 2011. Boolean lexicographic optimization: algorithms & applications. Annals of Mathematics and Artificial Intelligence, 62(3-4): 317–343.
Mears, C.; Garcia De La Banda, M.; Wallace, M.; and Demoen, B. 2015. A method for detecting symmetries in constraint models and its generalisation. Constraints, 20(2): 235–273.
Morgado, A.; Dodaro, C.; and Marques-Silva, J. 2014. Core-Guided MaxSAT with Soft Cardinality Constraints. In O'Sullivan, B., ed., Principles and Practice of Constraint Programming, 564–573. Cham: Springer International Publishing. ISBN 978-3-319-10428-7.
Nethercote, N.; Stuckey, P. J.; Becket, R.; Brand, S.; Duck, G. J.; and Tack, G. 2007. MiniZinc: Towards a Standard CP Modelling Language. In Bessiere, C., ed., CP, volume 4741 of LNCS, 529–543. Springer. ISBN 978-3-540-74969-1.
Ohrimenko, O.; Stuckey, P. J.; and Codish, M. 2007. Propagation = Lazy Clause Generation. In Bessiere, C., ed., Proceedings of the 13th International Conference on Principles and Practice of Constraint Programming, volume 4741 of LNCS, 544–558. Springer. ISBN 978-3-540-74969-1.
Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12: 2825–2830.
Rendl, A. 2010. Effective Compilation of Constraint Models. Ph.D. thesis, Univ. of St Andrews.
Van Hentenryck, P. 1999. The OPL Optimization Programming Language. Cambridge, MA, USA: MIT Press. ISBN 0-262-72030-2.
Williams, R.; Gomes, C. P.; and Selman, B. 2003. Backdoors to Typical Case Complexity. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI'03, 1173–1178. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Zeighami, K.; Leo, K.; Tack, G.; and de la Banda, M. G. 2018. Towards Semi-Automatic Learning-Based Model Transformation. In Hooker, J. N., ed., Principles and Practice of Constraint Programming - 24th International Conference, CP 2018, Lille, France, August 27-31, 2018, Proceedings, volume 11008 of Lecture Notes in Computer Science, 403–419. Springer.
Learning to Pivot as a Smart Expert
Tianhao Liu1, Shanwen Pu1, Dongdong Ge1, Yinyu Ye2
1Research Institute for Interdisciplinary Sciences, Shanghai University of Finance and Economics
2Stanford University
[email protected], [email protected], [email protected], [email protected]

Abstract
Linear programming has been practically solved mainly by simplex and interior point methods. Compared with the weakly polynomial complexity obtained by the interior point methods, the existence of strongly polynomial bounds for the length of the pivot path generated by the simplex methods remains a mystery. In this paper, we propose two novel pivot experts that leverage both global and local information of the linear programming instances for the primal simplex method and show their excellent performance numerically. The experts can be regarded as a benchmark to evaluate the performance of classical pivot rules, although they are hard to directly implement. To tackle this challenge, we employ a graph convolutional neural network model, trained via imitation learning, to mimic the behavior of the pivot expert. Our pivot rule, learned empirically, displays a significant advantage over conventional methods in various linear programming problems, as demonstrated through a series of rigorous experiments.

1 Introduction
Linear programming (LP) is among the most fundamental problems and has been well-studied in the field of optimization. LP is not only directly used across various industries but has also become an important cornerstone of mixed integer programming (MIP) and sequential linear programming (SLP) methods for solving nonlinear programming (NLP). Nowadays, most commercial (Ge et al. 2023; Gurobi Optimization, LLC 2023; Nickel et al. 2022; Xpress 2014) and open-source (Huangfu and Hall 2018; Achterberg 2009) solvers have implemented fast and stable LP solvers (software for solving LP) and constantly achieve new advancements for large-scale LP problems.

A general LP formulation solves a problem with only a linear objective function and linear constraints. It is well-known that these linear constraints geometrically form a polyhedron and, if the optimal solution exists, it exists in one of the vertices, which correspond to basic solutions from an algebraic point of view. The state-of-the-art methods for LP include the simplex methods, the interior-point methods (IPMs), and some recently developed first-order methods (FOMs) (Applegate et al. 2021; Deng et al. 2022). For high accuracy and reliability, the simplex methods and IPMs are preferred and have become the two main classes of algorithms implemented in commercial solvers.

The simplex methods start from a basic solution and improve objective or feasibility by reaching a certain adjacent basic solution, which is called the pivot. The criterion for switching from the current basic solution to its neighbor is called the pivot rule. Different pivot rules greatly affect the performance of the simplex methods, so designing a smart pivot rule is one of the most significant tasks for the simplex method. From the theoretical aspect, whether there exists a strongly polynomial bound for the length of the pivot path, which represents the iteration number of the simplex methods, attracts much research interest but is still an open problem. Instead of moving between vertices, IPMs keep an interior point and walk along a central path approaching the optimal solution (Karmarkar 1984).
In practice, IPMs usually yield a dense primal-dual approximate solution, from which modern commercial solvers tend to conduct a crossover and run simplex methods for a sparse exact solution.

In this paper, we focus on designing smart pivot rules for the primal simplex method. It is believed that our study can be easily migrated to other types of simplex methods. The smart pivot rules are expected to generate short pivot paths for different kinds of LP instances at different scales and should not run intolerably slowly.

Our contribution We propose a class of novel pivot experts that can outperform several classical and popular pivot rules. To modify the experts for practical use, we also apply machine learning methods to imitate the pivot expert. Our contribution can be summarized as follows.
• First, we design novel pivot experts. Compared with classical pivot rules that only utilize local information, we consider that a smart pivot rule should be able to combine global and local information together. Based on this idea, two pivot experts are proposed that can generate significantly shorter pivot paths than classical pivot candidates in a series of experiments. The pivot paths generated by the experts are also analyzed on Klee-Minty variants.
• Second, to the best of our knowledge, this paper is the first to combine imitation learning with dynamic pivoting for general LP of different scales. Incorporating a graph convolutional neural network (GCNN) model, our learned rule pivots by predicting the experts' pivot behavior, which removes the requirement for global information but maintains expert performance.

Organization of the paper The paper is organized as follows. Section 2 reviews related studies in simplex methods and machine learning methods to help optimization, especially for LP. Section 3 describes the novel class of pivot experts and discusses their merits and demerits. Section 4 provides an imitation learning method to help our idea of experts become practical. Section 5 presents twofold experiments to verify the superiority of our pivot experts and the learned pivot rule. Section 6 concludes the paper and discusses some related topics.

Notations Several commonly used notations are listed below. We use bold letters for vectors and matrices. Let R^n denote the n-dimensional Euclidean space. We use R̄ and R̲ to denote R ∪ {+∞} and R ∪ {−∞}. Let x_j be the jth element of vector x. We use x ≥ y to express the element-wise inequality x_i ≥ y_i. Let 0, 1, and ∞ be vectors of zeros, ones, and infinities. Let I be the identity matrix and e_j be the jth column of I. The dimension of a vector or a matrix will be left unspecified whenever it is clear from the context. ∥·∥_ℓ is the ℓ-norm (the 2-norm if ℓ is omitted), while |·| is the absolute value. Let A_{i,j} be the entry in the ith row and jth column of matrix A. Let A_j be the jth column of matrix A, and A_I the matrix formed by columns A_j for j ∈ I. Let A_{i,:} be the ith row of matrix A.

2 Related Work
2.1 Simplex Methods
In this subsection, we will describe a series of pivot rules for LP in standard form

min  c⊤x
s.t. Ax = b                                  (1)
     x ≥ 0,

where x ∈ R^n and A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n. Let B and N be the indices of basic and non-basic variables. This formulation is used for academic research, but is not preferred in modern LP solvers. Our implementation of pivot experts considers a more practical formulation, which will be described in Section 3 later.
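To make the basis notation concrete, the following is a minimal NumPy sketch of ours (not part of any solver) of a basic solution of (1) and its reduced costs, x_B = A_B^{-1} b and c̄ = c − A⊤(A_B^{-⊤} c_B); real simplex codes maintain a factorization of A_B instead of solving from scratch.

import numpy as np

def basic_solution(A, b, c, B):
    # Basic solution and reduced costs for min c'x s.t. Ax = b, x >= 0,
    # given basic index set B.
    AB = A[:, B]
    xB = np.linalg.solve(AB, b)            # values of the basic variables
    y = np.linalg.solve(AB.T, c[B])        # simplex multipliers
    reduced = c - A.T @ y                  # zero on the basic columns
    x = np.zeros(A.shape[1])
    x[B] = xB
    return x, reduced

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x, r = basic_solution(A, b, c, [2, 3])     # slack basis
print(x, r)                                # x = [0 0 4 6], r = c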
Pivot rules in primal simplex method The simplex methods switch between adjacent basic solutions, which is called the pivot, at each iteration. Algebraically, a pivot selects a non-basic variable to enter the basis, conducts a ratio test evaluating the distance to go, and lets one basic variable leave the basis. How to choose among adjacent basic solutions is key to a successful simplex method. When Dantzig proposed the primal simplex method, he also provided a pivot rule choosing the candidate with the most negative reduced cost c̄_j = c_j − c_B⊤ A_B^{-1} A_j, which is called Dantzig's rule (Dantzig 1963). To avoid cycling in a pivot path, Bland's rule (Bland 1977) of choosing the candidate with minimum index, and other lexicographic pivot rules, were proposed. These anti-cycling rules make simplex terminate in finitely many steps but perform poorly in real applications. Since other practical anti-cycling methods like perturbation work well, more practical interest is attracted by generating shorter pivot paths rather than theoretically finite termination.

The most widely used pivot rule in modern simplex solvers is the steepest-edge rule (Goldfarb and Reid 1977; Forrest and Goldfarb 1992). It has been observed to generate relatively short pivot paths. The steepest-edge rule chooses the candidate with the most negative score c̄_j / √(∥A_B^{-1} A_j∥² + 1), which can be explained as moving in the descending direction most parallel to c. Another pivot rule, called the greatest improvement rule (Jeroslow 1973), can also generate short pivot paths. Like strong branching (Achterberg, Koch, and Martin 2005) in MIP, this pivot rule prefers a candidate that brings the greatest improvement in objective value to enter the basis. However, sometimes too much greed is not the best option (as will be seen in Section 5). Besides, to calculate the improvement, a ratio test for each candidate variable is needed, which is often too expensive for the simplex method. As the increasing scale of LP has hit the limits of computer power, rules allowing more efficient computation have been proposed, including the Devex rule (Harris 1973) and the largest distance rule (Roos 1986; Pan 2008). The Devex rule inexactly approximates the score in steepest-edge and thus can update faster. The largest distance rule selects the candidate for which the corresponding dual hyperplane is the farthest from the present vertex, i.e., with the most negative score c̄_j / ∥A_j∥. Notice that the denominators stay the same across iterations and only require to be calculated once.

Worst cases of pivot rules With so many rules accumulated and a wide variety of behaviors observed in practice, researchers are puzzled by the complexity of the simplex methods. In other words, what is the worst possible path for the simplex methods? Or, more generally, can LP be solved with strongly polynomial algorithms? These questions are surprisingly difficult to answer in general, even though we already know that LP has weakly polynomial bounds guaranteed by IPMs. For some special LP classes, certain pivot rules are proved to be strongly polynomial (Ye 2011; Kitahara and Mizuno 2013). Unfortunately, worst cases with exponential pivot numbers have been discovered for most deterministic pivot rules (Klee and Minty 1972; Avis and Chvátal 1978; Goldfarb and Sit 1979; Roos 1990). After introducing randomization and parameterized LP, several sub-exponential bounds (Matoušek, Sharir, and Welzl 1992; Kalai 1992) or weakly polynomial bounds (Kelner and Spielman 2006) have been derived for general LP. In short, analyzing the complexity of simplex is still a long way to go.
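As a minimal sketch of ours (using NumPy) of how three of the scores above compare when choosing the entering variable, given the reduced costs c̄ and the ability to form A_B^{-1} A_j:

import numpy as np

def entering_variable(A, c_bar, AB_inv, rule="steepest"):
    # Return the entering index under Dantzig's, the steepest-edge, or
    # the largest distance rule; None if no candidate improves.
    scores = {}
    for j in np.flatnonzero(c_bar < 0):    # improving candidates
        if rule == "dantzig":
            scores[j] = c_bar[j]
        elif rule == "steepest":
            w = AB_inv @ A[:, j]
            scores[j] = c_bar[j] / np.sqrt(w @ w + 1.0)
        elif rule == "largest_distance":
            scores[j] = c_bar[j] / np.linalg.norm(A[:, j])
    return min(scores, key=scores.get) if scores else None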
2.2 Machine Learning for Mathematical Optimization
Machine learning methods can help accelerate or improve optimization methods. This field of research is called machine learning for mathematical optimization (ML4MO). Many studies in ML4MO help solve MIP since it is a harder and more common problem. Although there have not been many fusions of machine learning and LP, some preliminary attempts have been made and deserve mention.

GCNN model-based ML4MO Early studies extract feature vectors from problems and use classical models for prediction (Di Liberto et al. 2016; Alvarez, Louveaux, and Wehenkel 2017; He, Daume III, and Eisner 2014; Khalil et al. 2016). Afterwards, a new way to encode problems, proposed by Gasse et al. (2019), is to imitate strong branching. They creatively encode MIP as a bipartite graph. The bipartite graph contains almost all information of the original problem and thus avoids the loss of information that the classical methods often suffer from. With a graph as input, GCNN models are naturally adopted to perform more comprehensive feature extraction and are ready for subsequent models to make final decisions. Gasse's work inspires a new stream of ML4MO (Gupta et al. 2020; Ding et al. 2020; Nair et al. 2020; Sonnerat et al. 2021; Paulus and Krause 2023), and the GCNN approach, together with the bipartite graph encoding, has become one of the preferred methods in practice.

Machine learning for LP and simplex Several attempts have been made to accelerate LP utilizing machine learning methods. Most of them study pivoting in the primal simplex method. Adham et al. (2021) use boosted trees and neural networks to predict the best pivot rule for each LP instance, but the approach is a one-shot decision and lacks flexibility. Suriyanarayana et al. (2022) use reinforcement learning to dynamically switch between Dantzig's rule and the steepest-edge rule for solving LP relaxations of non-Euclidean TSPs with five cities. However, it is only a proof of concept that is not suitable for larger problems or problems with different scales. Li et al. (2022) use Monte Carlo tree search (MCTS) to directly decide which candidate will enter the basis. For each new LP instance, MCTS explores slowly at every single pivot. The state in both Suriyanarayana's and Li's reinforcement learning approaches is based on the simplex tableau directly, which is not scalable for large-scale LP. The bipartite graph and GCNN can be a more reasonable tool to encode LP for its permutation invariance and scalability. In theory, Chen et al. (2022) reveal the potential power of GCNN in distinguishing LP with different characteristics. In practice, Fan et al. (2023) use GCNN to predict a better initial basis, which is preparatory work for the primal simplex method.

3 Smart Pivot Experts
3.1 Primal Simplex Method
We consider a general LP formulation

min  c⊤x
s.t. Ax = b                                  (2)
     l ≤ x ≤ u,

where x ∈ R^n and A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, l ∈ R̲^n, u ∈ R̄^n. (2) is more user-friendly than (1), so we will derive our pivot experts and conduct experiments based on this formulation. We implement the following two-phase revised simplex in our own primal simplex solver prototype.

Phase I: find a basic feasible solution Phase I aims to find a basic feasible solution for Phase II of the primal simplex method. In fact, multiple approaches, including the big-M method and heuristics, are designed to achieve this purpose.
However, those methods form an independent topic that is beyond our discussion.

Phase II: solve the LP Phase II starts to solve (2) after a basic feasible solution is obtained from Phase I. At each pivot, we check the reduced cost c̄ of the non-basic variables and pick as candidates those that have negative (positive) reduced cost and sit at their lower (upper) bound. If a tie occurs, we choose the variable with minimum (maximum) index to enter (leave) the basis.

3.2 Designing Smart Pivot Experts
The existing pivot rules all consider only local information. Here the term "local" refers to the information that describes the landscape around the current basic feasible solution. If a pivot rule makes a myopic decision, the basic feasible solution may be led into a rugged area and get stuck there in the future. The main idea for designing a smart pivot expert is to provide global information for it. Some trials proposed by others include tree search for future information (Li et al. 2022), using interior point information (Todd 1990; Roos 1986; Tamura et al. 1988), or choosing more than one variable at a time to enter the basis (Yang 2020). However, there is a piece of information that is the most global of all yet easily overlooked: the optimal basis. With the optimal basis at each iteration, the smart pivot rule can be guided by the following two goals:
1. When selecting a candidate to enter the basis, a smart pivot rule should let variables that are basic in the optimal basis enter first.
2. When selecting a variable to leave the basis after a tie occurs in the ratio test, a smart pivot rule should let variables that are non-basic in the optimal basis leave first.
The smart pivot rule will greedily bring the current basis as close to the optimal basis (in terms of the difference in the basic indices) as possible from a global perspective. Theorem 1 guarantees that such a variable can always be found as long as the objective value is not optimal.

Theorem 1. Given the optimal basis, if the current objective value is not optimal, there must exist a variable mismatching the optimal basis that can enter the current basis immediately.

Yang (2020) provides a similar observation and proof based on the formulation (1), but does not continue to make full use of it. We modify his remark for the more practical LP formulation (2) and design smart pivot experts based on it.

Designing smart pivot experts We design two pivot experts based on the two goals and Theorem 1. Before presenting details, we have to point out that local information still matters. During the development of simplex, much valuable local information has been proposed, such as the reduced cost and the steepest-edge score. Given the optimal basis, our pivot experts combine global information and local information together.

The first pivot expert (Expert I) satisfies the first goal and then tries to satisfy the second. More precisely, it chooses the candidate among the optimal basis with the best steepest-edge score. After the ratio test, it removes a non-basic variable of the optimal basis whenever possible. The second pivot expert (Expert II) considers the two goals at the same time. It conducts a ratio test for each candidate in the optimal basis and gives preference to those that can remove non-basic variables of the optimal basis, if there are any. After candidates are filtered by the two goals, it chooses the one with the best steepest-edge score. The two experts run at different speeds: Expert I can be calculated efficiently, while Expert II runs more slowly due to multiple ratio tests.
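A minimal sketch of Expert I's entering step, assuming the improving candidates, the optimal basis, and a steepest-edge score function (as sketched earlier) are given:

def expert_one_entering(candidates, optimal_basis, se_score):
    # Goal 1: only consider candidates that are basic in the optimal
    # basis; among them, take the best (most negative) steepest-edge
    # score. Theorem 1 guarantees `guided` is non-empty before
    # optimality is reached.
    guided = [j for j in candidates if j in optimal_basis]
    return min(guided, key=se_score)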
Besides paths with monotone objective value, the two pivot experts share Property 1 of generating paths with monotone # DiffOpt, defined in Definition 1. Experiments in Section 5 will show the superiority of our pivot experts in generating overall shorter paths compared with classical pivot rules.

Definition 1 (# DiffOpt). Let the status vector sta of a vertex x be given by

sta_i = 0, if x_i is non-basic with x_i = l_i;
sta_i = 1, if x_i is basic;
sta_i = 2, if x_i is non-basic with x_i = u_i.

Given the optimal basis, # DiffOpt is defined as the 1-norm of the difference between the status vectors of x and of the optimal basis.

Property 1. Given the optimal basis, before the current objective is optimal, # DiffOpt is monotonically decreasing.

At the end of this subsection, we emphasize that the term "expert" refers to overall better performance rather than total transcendence. Recalling the existence of worst cases, it is almost impossible for a pivot rule to completely beat another rule on every single LP instance. Maros (2012) suggests combining different pivot rules if there are signs of benefit, which is a parallel technical route to ours.

3.3 Pivot Experts on Klee-Minty Cube Variants
To further illustrate the value of global information, a linear upper bound is provided for the length of our pivot experts' pivot path on Klee-Minty (KM) cube variants, which are usually the worst cases for classical pivot rules.

KM cube variants KM cube variants are a well-known class of squashed cubes that usually lead to poor performance of some pivot rules, encompassing KM variants (Kitahara and Mizuno 2011; Vanderbei 2020) for Dantzig's rule and the Avis-Chvátal polytope (Avis and Chvátal 1978) for Bland's rule. These KM cubes share some similar properties. First, the feasibility set is combinatorially equivalent to the standard n-dimensional cube C_n = {(x, y) ∈ R^n × R^n : x + y = 1, x, y ≥ 0}, which means there exists a one-to-one correspondence between their faces. Second, each vertex is non-degenerate. The standard cube C_n here is obtained by adding slacks y to the cube [0, 1]^n. C_n has 2^n vertices whose first n elements are x ∈ {0, 1}^n and y = 1 − x.

The experts' linear upper bound on KM cubes The main idea for deriving a linear upper bound for the pivot experts on KM cubes is to analyze the length of paths with monotone # DiffOpt. The proof is divided into three steps. In essence, we start by bounding path lengths on the basic cube C_n in Theorem 2, extend that to polytopes combinatorially equivalent to C_n in Theorem 3, and then apply it to the pivot experts in the KM setting under certain mild assumptions in Theorem 4. This allows us to derive an overall linear upper bound on KM cubes.

Theorem 2. For C_n with initial point (x0, y0) and any optimal basis B∗, the length of the path with monotone # DiffOpt is (# DiffOpt of (x0, y0)) / 2, which is bounded by n from above.

Theorem 3. For any polytope combinatorially equivalent to C_n with non-degenerate vertices, initial point (x0, y0), and any optimal basis B∗, the length of the path with monotone # DiffOpt is (# DiffOpt of (x0, y0)) / 2, which is bounded by n from above.

Theorem 4. For any polytope combinatorially equivalent to C_n with non-degenerate vertices, initial point (x0, y0), and the single optimal basis B∗, the length of a path generated by our pivot experts is upper bounded by (# DiffOpt of (x0, y0)) / 2, which is bounded by n from above.

Theorem 4 illustrates the value of global information. With the guidance of the given optimal basis, our experts will avoid being led to the worst case by misleading local information on various KM variants. Notice that the upper bound holds with arbitrary or even no local information. In this aspect, the monotone # DiffOpt is more a combinatorial than an algebraic property. The strong performance of our experts on KM variants will not be negatively impacted by scaling, which differs from most classical pivot rules.
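Definition 1 translates directly into the following Python sketch (the set-based encoding of the basis and of the upper-bounded non-basics is ours):

def status_vector(n, basis, at_upper):
    # sta_i = 1 for basic, 2 for non-basic at its upper bound, and
    # 0 for non-basic at its lower bound.
    return [1 if i in basis else (2 if i in at_upper else 0)
            for i in range(n)]

def diff_opt(sta, sta_opt):
    # DiffOpt: the 1-norm of the difference of the status vectors.
    return sum(abs(a - b) for a, b in zip(sta, sta_opt))

sta = status_vector(4, basis={0, 1}, at_upper=set())
sta_opt = status_vector(4, basis={1, 2}, at_upper={3})
print(diff_opt(sta, sta_opt))   # |1-0| + |1-1| + |0-1| + |0-2| = 4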
4 Learning as a Pivot Expert
While our pivot experts offer an advantage, their dependency on the optimal basis may seem prohibitive for direct applications. Thus, we employ machine learning to bypass this requirement. Specifically, we cast the LP as a bipartite graph, using a GCNN model to emulate the choices of the pivot experts, similar to Gasse's GCNN (Gasse et al. 2019).

State encoding and input features The primal simplex method can be viewed as a Markov decision process, as shown in Figure 1. At the kth iteration, the state s_k contains the LP instance and the current basic feasible solution x_k. The action space A(s_k) encapsulates all possible edges that can improve the objective value. An action a_k is selected according to a certain pivot rule, and x_k will move along a_k until reaching the next vertex x_{k+1}.

[Figure 1: The primal simplex method can be viewed as a Markov decision process. The LP is depicted as a polyhedron with directions that can improve the objective value marked on edges. x∗ denotes the optimal solution.]

We encode the state of the LP and the current solution into a bipartite graph, see Figure 2. Variables and constraints form two classes of nodes, which are linked by an edge if the corresponding coefficient A_{i,j} is non-zero. Each node and edge carry some features that are picked for the pivot decision and thus slightly differ from those defined by Gasse.

[Figure 2: Bipartite graph encoding for LP.]

Policy for imitating experts We feed the bipartite graph into our GCNN model and then use a filter and a softmax function. The filter identifies suitable candidates, while the softmax estimates their chance of entering the basis. Our GCNN model shares a similar structure with Gasse's but has slightly wider and deeper networks. It is essential to distinguish between learning to pivot in LP and learning to branch in MIP. While both involve variable selection, branching explores multiple paths on a tree, requiring crucial early smart choices, whereas pivoting follows a single path, less dependent on initial decisions but needing consistent smart moves.

Summary The pivot expert designing and learning framework can be summarized in Figure 3. For a class of LP instances, which are expected to share common features in their polyhedra, we have designed two pivot experts that can generate shorter paths. To replicate the expertise using imitation learning, we gather paired data of encoded LP instances, along with labels that detail bipartite graphs and the experts' pivot decisions. Using this data, we train the policy network to accurately mirror the experts' actions.
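A minimal sketch of the policy head described above, with the GCNN itself omitted: one logit per variable, a mask for the suitable candidates (the filter), and a softmax restricted to them. The numeric values are hypothetical.

import numpy as np

def pivot_policy(logits, candidate_mask):
    # Masked softmax: probability of entering the basis, with all mass
    # placed on the improving candidates.
    masked = np.where(candidate_mask, logits, -np.inf)
    masked = masked - masked.max()          # numerical stability
    probs = np.exp(masked)
    return probs / probs.sum()

logits = np.array([0.3, -1.2, 2.0, 0.1])    # hypothetical GCNN outputs
mask = np.array([True, False, True, False]) # improving candidates only
print(pivot_policy(logits, mask))           # mass only on indices 0 and 2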
5 Experiments
Our experiments are twofold. In Section 5.1, we contrast our pivot experts against others, demonstrating their ability to produce shorter pivot paths. Section 5.2 showcases the commendable performance of the learned rule, affirming the feasibility of training our expert rules via imitation learning without sacrificing the overall enhancement. All tests are conducted on an AMD Ryzen 7 5700X CPU and an NVIDIA RTX 3060 12GB GPU.

Pivot rules to be compared Table 1 enumerates the pivot rules under comparison. We adopt five classical pivot rules (Bland's, Dantzig's, SE, GI, and LD) as our benchmarks. Our primary focus is on the two expert rules (EXP and EXP-II), alongside our learned rule (EXP-LEARN). In all experiments, every pivot rule receives a consistent initial basis from Phase I, resolved by SE. Notably, EXP and EXP-II require an additional optimal basis, which is provided for Phase II by SE. EXP-LEARN operates without the extra information that the expert rules need. NO-LOCAL, a derivative of the EXP rule, omits local information and serves for ablation analysis.

Type              Notation    Pivot rule
Classical rules   Bland       Bland's rule
                  Dantzig     Dantzig's rule
                  SE          Steepest-edge rule
                  GI          Greatest improvement rule
                  LD          Largest distance rule
Our experts       EXP         Expert I
                  EXP-II      Expert II
                  EXP-LEARN   Rule imitating Expert I
Ablation study    NO-LOCAL    EXP w/o local information

Table 1: Pivot rules to be compared.

Benchmarks We have chosen a diverse set of LP problem tests, including a NETLIB subset (Gay 1985) and LP relaxations from four combinatorial optimization (CO) classes. NETLIB, a standard LP benchmark, offers varied LP instances in both scale and structure. Our CO classes cover set covering (SC), combinatorial auction (CA), capacitated facility location (FL), and maximum independent set (IS). These CO problems, inspired by Gasse et al. (2019), may differ in scale. We presolve NETLIB instances with Gurobi 10.0.2 and CO instances with SCIP 8.0.3.

Evaluation We evaluate pivot rule performance primarily with the geometric mean of pivot numbers serving as our benchmark metric. Two reasons drive this choice. First, while our Python-implemented pivot rules might not mirror modern solvers' speed, they are generally comparable in pivot numbers with Gurobi's primal simplex for many LP instances. Second, pivot path length better captures the simplex method's complexity. Additionally, for thoroughness, we will also report each rule's execution time.

[Figure 3: Pivot expert designing and learning framework in the order of (a) → (b) → (c) → (d): (a) a class of LP instances; (b) given the optimal basis, pivot experts can generate shorter paths; (c) collect expert choices for imitation; (d) the learned pivot rule gains improvement on new LP instances. Bold green lines are expert choices while bold red (dash-dot) lines are bad choices.]

5.1 Testing Pivot Experts
We evaluate our experts against classical methods on the NETLIB subset. Section 5.2 details the outcomes on the CO benchmarks, along with our learning analysis.

Setup We utilize 77 selected NETLIB instances, optimized for time and to bypass numerical issues. Each instance adheres to a 300-second time limit. The number of constraints m and variables n for these instances is provided in Table 2.
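The evaluation protocol above relies on the geometric mean of pivot numbers; as a small sketch of ours:

import numpy as np

def geometric_mean(pivot_counts):
    # Geometric mean of pivot numbers over a set of instances.
    counts = np.asarray(pivot_counts, dtype=float)
    return float(np.exp(np.mean(np.log(counts))))

def wins(pivots_a, pivots_b):
    # Number of instances on which rule A needs fewer pivots than rule B.
    return sum(a < b for a, b in zip(pivots_a, pivots_b))

print(geometric_mean([100, 400]))   # 200.0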
# instance   m                n
77           356.6 (±364.4)   1439.6 (±2975.5)

Table 2: Average scales of presolved NETLIB instances.¹

¹ Standard deviations are provided in parentheses. This convention is maintained for subsequent tables.

Numerical results The results in Tables 3 and 4 emphasize the efficacy of EXP and EXP-II over classical pivot rules. EXP and EXP-II evidently obtain fewer pivot numbers and more wins, while Bland's rule has the worst performance and is thus excluded from future comparisons.

Bland   Dantzig   SE   GI   LD   EXP
2       7         16   15   6    52

Bland   Dantzig   SE   GI   LD   EXP-II
2       6         17   12   6    54

Table 3: Number of wins on the NETLIB subset with a total of 77 instances.²

Bland   Dantzig   SE    GI    LD    EXP   EXP-II   NO-LOCAL
851     198       121   131   177   112   118      139

Table 4: Geometric mean of pivot numbers on the NETLIB subset.

² Bolded results indicate the best among the evaluated pivot rules. This convention is maintained for subsequent tables.

Ablation study: the role of local information in expert rules Classical pivot rules, like the SE, GI, Bland's, and Dantzig's rules, largely utilize local information, such as reduced costs. In contrast, our expert rules merge both global (the optimal basis) and local information (the steepest-edge score) to set the pivot direction. Here, we emphasize the pivotal role local information plays in optimizing expert rules. We introduce a variant of the EXP rule, called NO-LOCAL, that omits local information. In this rule, the entering variable is randomly chosen from the candidates in the optimal basis. Its efficacy is tested on the NETLIB subset. Table 4 reveals that NO-LOCAL underperforms the EXP and EXP-II rules, and even lags behind the classical SE rule. The results underscore the diminished efficacy of the EXP rule when local insights are absent, leading to increased pivot numbers and fewer wins. Local information is evidently instrumental in optimizing pivot decisions.

5.2 Testing Learned Pivot Rule
We conduct experiments with EXP-LEARN on the CO benchmarks, employing the imitation learning approach to emulate the EXP rule.

Setup Based on the guidelines from Gasse et al. (2019), we generate random instances for each CO benchmark. For the set covering problems, instances have 400 columns and 200 rows. Combinatorial auction problems have 100 items and 500 bids. For capacitated facility location problems, instances contain 20 facilities and 15 customers. Finally, maximum independent set problems have 150 nodes with an affinity value set to 2. Table 5 details the scales of these presolved CO instances.

Training procedure In the training process, we utilize 5 unique seeds for both data generation and model training. We apply identical hyperparameters for each CO benchmark, drawing upon 50,000 pivot samples from 1,000 instances for training, and 10,000 samples from 200 instances for validation. To assess the model's applicability in real-world scenarios, we test it on 200 new instances, underlining its ability to generalize on each benchmark.

      # train   m              n               # valid   m              n               # test   m              n
SC    1000      200.0 (±0.0)   400.0 (±0.0)    200       200.0 (±0.0)   400.0 (±0.0)    200      200.0 (±0.0)   400.0 (±0.0)
CA    1000      181.9 (±5.0)   427.4 (±17.0)   200       182.0 (±5.5)   428.2 (±18.7)   200      182.5 (±5.5)   427.8 (±18.6)
FL    1000      336.0 (±0.0)   315.0 (±0.0)    200       336.0 (±0.0)   315.0 (±0.0)    200      336.0 (±0.0)   315.0 (±0.0)
IS    1000      290.2 (±2.4)   150.0 (±0.0)    200       290.2 (±2.3)   150.0 (±0.0)    200      290.2 (±2.1)   150.0 (±0.0)

Table 5: Average scales of presolved CO instances for EXP-LEARN to imitate EXP.

Performance metrics Performance is assessed using the GCNN model's validation accuracy, detailed in Table 6. We rely on Top 1, Top 3, and Top 5 accuracies.
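The Top-k accuracies of Table 6 can be computed as in the following sketch of ours: the expert's chosen variable must appear among the k highest-scoring candidates of the learned policy.

import numpy as np

def top_k_accuracy(scores, expert_choice, k):
    # scores: (num_samples, num_variables); expert_choice: (num_samples,)
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = (topk == expert_choice[:, None]).any(axis=1)
    return float(hits.mean())

scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2]])
print(top_k_accuracy(scores, np.array([1, 2]), k=1))   # 0.5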
5.2 Testing Learned Pivot Rule
We conduct experiments with EXP-LEARN on the CO benchmarks, employing the imitation learning approach to emulate the EXP rule.

Setup. Based on the guidelines from Gasse et al. (2019), we generate random instances for each CO benchmark. For the set covering problems, instances have 400 columns and 200 rows. Combinatorial auction problems have 100 items and 500 bids. For capacitated facility location problems, instances contain 20 facilities and 15 customers. Finally, maximum independent set problems have 150 nodes with an affinity value set to 2. Table 5 details the scales of these presolved CO instances.

Training procedure. In the training process, we utilize 5 unique seeds for both data generation and model training. We apply identical hyperparameters for each CO benchmark, drawing upon 50,000 pivot samples from 1,000 instances for training, and 10,000 samples from 200 instances for validation. To assess the model's applicability in real-world scenarios, we test it on 200 new instances, underlining its ability to generalize on each benchmark.

      # train   m              n               # valid   m              n               # test   m              n
SC    1000      200.0 (±0.0)   400.0 (±0.0)    200       200.0 (±0.0)   400.0 (±0.0)    200      200.0 (±0.0)   400.0 (±0.0)
CA    1000      181.9 (±5.0)   427.4 (±17.0)   200       182.0 (±5.5)   428.2 (±18.7)   200      182.5 (±5.5)   427.8 (±18.6)
FL    1000      336.0 (±0.0)   315.0 (±0.0)    200       336.0 (±0.0)   315.0 (±0.0)    200      336.0 (±0.0)   315.0 (±0.0)
IS    1000      290.2 (±2.4)   150.0 (±0.0)    200       290.2 (±2.3)   150.0 (±0.0)    200      290.2 (±2.1)   150.0 (±0.0)

Table 5: Average scales of presolved CO instances for EXP-LEARN to imitate EXP.

Performance metrics. Performance is assessed using the GCNN model's validation accuracy, detailed in Table 6. We rely on Top 1, Top 3, and Top 5 accuracies.

      Top 1 Acc        Top 3 Acc        Top 5 Acc
SC    0.533 (±0.004)   0.847 (±0.005)   0.927 (±0.003)
CA    0.362 (±0.004)   0.656 (±0.002)   0.784 (±0.002)
FL    0.499 (±0.007)   0.776 (±0.007)   0.870 (±0.005)
IS    0.257 (±0.002)   0.420 (±0.002)   0.511 (±0.001)

Table 6: Accuracy on the CO validation sets.

Numerical results. As displayed in Table 6, our model achieves a Top 1 accuracy above 25% for all problems, i.e., a greater than 25% chance of copying the expert action. Notably, while the Top 3 and Top 5 accuracies are substantial, they cannot be directly applied in the simplex method, since a pivot rule must commit to a single entering variable; this marks a limitation. It also needs to be emphasized that comparing validation accuracy across different benchmarks is not meaningful. Moreover, we choose not to report test accuracy, as our primary focus is not on direct accuracy comparison but on pivot numbers.

      SE    GI     LD    EXP-LEARN   EXP   EXP-II
SC    419   990    468   336 (±6)    268   280
CA    266   1340   276   223 (±3)    115   112
FL    242   304    377   239 (±2)    224   227
IS    114   302    114   113 (±0)    113   113

Table 7: Geometric mean of pivot numbers on the CO test sets. (Footnote 3: The results for EXP and EXP-II are for reference only, as they cannot be directly used in practical applications.)

Table 7 shows that EXP-LEARN consistently outperforms its classical competitors, highlighting the quality of its choices. While EXP-LEARN performs strongly on most benchmarks, on certain benchmarks it exhibits only a slight lead over the SE rule, which is partly a consequence of the underlying EXP rule it is built upon. This demonstrates that while imitation learning brings benefits, it does not fully bridge the performance gap between SE and EXP.

Table 8 underscores the consistency of the EXP-LEARN rule. Its dominant performance is not due to a few outliers but is maintained across numerous instances in each benchmark. This indicates a robust and generalizable model, proving EXP-LEARN's reliability in varied scenarios and highlighting its potential for broader applications in LP tasks.

      SE         GI        LD         EXP-LEARN
SC    8 (±3)     0 (±0)    0 (±0)     192 (±3)
CA    7 (±2)     0 (±0)    2 (±2)     192 (±3)
FL    81 (±2)    17 (±2)   17 (±2)    103 (±3)
IS    171 (±1)   0 (±0)    171 (±1)   186 (±2)

Table 8: Number of wins on the CO test sets with a total of 200 instances.

      SE     GI     LD     EXP-LEARN     EXP    EXP-II
SC    0.54   6.85   0.32   1.36 (±0.02)  0.32   0.83
CA    0.21   8.44   0.20   0.81 (±0.01)  0.08   0.18
FL    0.43   1.28   0.60   0.82 (±0.03)  0.39   0.72
IS    0.15   0.98   0.14   0.33 (±0.00)  0.16   0.37

Table 9: Geometric mean of solving time (in seconds) on the CO test sets.

Table 9 quantifies the solving time across the different benchmarks. EXP-LEARN tends to require longer solving times, a byproduct of its GCNN forward pass; with a more careful design of the graph architecture, this overhead could be reduced further.

6 Conclusion
The simplex methods are time-honored, with rich practical designs and a complexity that remains mysterious. Generating a short pivot path is the key task for pivot rules. In this paper, we design two innovative pivot experts for the primal simplex method that leverage both global and local information, i.e., the optimal basis and the steepest-edge score, respectively. Experiments illustrate that these two experts significantly outperform classical pivot rules overall. To bridge theory to practical application, we integrate a GCNN model to mimic these experts. This imitation learning circumvents the dependency on global information while preserving the performance in path generation. Empirical evidence confirms the learnability of our experts.
The learned rule commendably surpasses classical pivot rules in generating shorter pivot paths, although it does not quite catch up with the experts. The value of our pivot experts extends beyond their standalone significance: they serve both as benchmarks and as generators of expert pivot labels. The pivot experts outpace predecessors like MCTS in swiftly constructing superior paths, especially Expert I. We anticipate that adapting our method to the dual or primal-dual simplex methods will be seamless and require only minimal adjustments.

Acknowledgments
We thank Qi Huangfu for the fruitful discussions. This research is partially supported by the National Natural Science Foundation of China (NSFC) [Grants 72150001, 72225009, 72394360, 72394365].

References
Achterberg, T. 2009. SCIP: solving constraint integer programs. Mathematical Programming Computation, 1: 1–41.
Achterberg, T.; Koch, T.; and Martin, A. 2005. Branching rules revisited. Operations Research Letters, 33(1): 42–54.
Adham, I.; De Loera, J.; and Zhang, Z. 2021. (Machine) learning to improve the empirical performance of discrete algorithms. arXiv preprint arXiv:2109.14271.
Alvarez, A. M.; Louveaux, Q.; and Wehenkel, L. 2017. A machine learning-based approximation of strong branching. INFORMS Journal on Computing, 29(1): 185–195.
Applegate, D.; Díaz, M.; Hinder, O.; Lu, H.; Lubin, M.; O'Donoghue, B.; and Schudy, W. 2021. Practical large-scale linear programming using primal-dual hybrid gradient. Advances in Neural Information Processing Systems, 34: 20243–20257.
Avis, D.; and Chvátal, V. 1978. Notes on Bland's pivoting rule. Polyhedral Combinatorics: Dedicated to the Memory of D. R. Fulkerson, 24–34.
Bland, R. G. 1977. New finite pivoting rules for the simplex method. Mathematics of Operations Research, 2(2): 103–107.
Chen, Z.; Liu, J.; Wang, X.; Lu, J.; and Yin, W. 2022. On representing linear programs by graph neural networks. arXiv preprint arXiv:2209.12288.
Dantzig, G. 1963. Linear Programming and Extensions. Princeton University Press.
Deng, Q.; Feng, Q.; Gao, W.; Ge, D.; Jiang, B.; Jiang, Y.; Liu, J.; Liu, T.; Xue, C.; Ye, Y.; et al. 2022. New developments of ADMM-based interior point methods for linear programming and conic programming. arXiv preprint arXiv:2209.01793.
Di Liberto, G.; Kadioglu, S.; Leo, K.; and Malitsky, Y. 2016. DASH: dynamic approach for switching heuristics. European Journal of Operational Research, 248(3): 943–953.
Ding, J.-Y.; Zhang, C.; Shen, L.; Li, S.; Wang, B.; Xu, Y.; and Song, L. 2020. Accelerating primal solution findings for mixed integer programs based on solution prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 1452–1459.
Fan, Z.; Wang, X.; Yakovenko, O.; Sivas, A. A.; Ren, O.; Zhang, Y.; and Zhou, Z. 2023. Smart initial basis selection for linear programs. In International Conference on Machine Learning, 9650–9664. PMLR.
Forrest, J. J.; and Goldfarb, D. 1992. Steepest-edge simplex algorithms for linear programming. Mathematical Programming, 57(1-3): 341–374.
Gasse, M.; Chételat, D.; Ferroni, N.; Charlin, L.; and Lodi, A. 2019. Exact combinatorial optimization with graph convolutional neural networks. Advances in Neural Information Processing Systems, 32.
Gay, D. M. 1985. Electronic mail distribution of linear programming test problems. Mathematical Programming Society COAL Newsletter, 13: 10–12.
Ge, D.; Huangfu, Q.; Wang, Z.; Wu, J.; and Ye, Y. 2023. Cardinal Optimizer (COPT) user guide. https://guide.coap.online/copt/en-doc.
Goldfarb, D.; and Reid, J. K. 1977. A practicable steepest-edge simplex algorithm. Mathematical Programming, 12: 361–371.
Goldfarb, D.; and Sit, W. Y. 1979. Worst case behavior of the steepest edge simplex method. Discrete Applied Mathematics, 1(4): 277–285.
Gupta, P.; Gasse, M.; Khalil, E.; Mudigonda, P.; Lodi, A.; and Bengio, Y. 2020. Hybrid models for learning to branch. Advances in Neural Information Processing Systems, 33: 18087–18097.
Gurobi Optimization, LLC. 2023. Gurobi Optimizer reference manual.
Harris, P. M. 1973. Pivot selection methods of the Devex LP code. Mathematical Programming, 5: 1–28.
He, H.; Daumé III, H.; and Eisner, J. M. 2014. Learning to search in branch and bound algorithms. Advances in Neural Information Processing Systems, 27.
Huangfu, Q.; and Hall, J. J. 2018. Parallelizing the dual revised simplex method. Mathematical Programming Computation, 10(1): 119–142.
Jeroslow, R. G. 1973. The simplex algorithm with the pivot rule of maximizing criterion improvement. Discrete Mathematics, 4(4): 367–377.
Kalai, G. 1992. A subexponential randomized simplex algorithm. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing, 475–482.
Karmarkar, N. 1984. A new polynomial-time algorithm for linear programming. In Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, 302–311.
Kelner, J. A.; and Spielman, D. A. 2006. A randomized polynomial-time simplex algorithm for linear programming. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, 51–60.
Khalil, E.; Le Bodic, P.; Song, L.; Nemhauser, G.; and Dilkina, B. 2016. Learning to branch in mixed integer programming. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.
Kitahara, T.; and Mizuno, S. 2011. Klee–Minty's LP and upper bounds for Dantzig's simplex method. Operations Research Letters, 39(2): 88–91.
Kitahara, T.; and Mizuno, S. 2013. A bound for the number of different basic solutions generated by the simplex method. Mathematical Programming, 137: 579–586.
Klee, V.; and Minty, G. J. 1972. How good is the simplex algorithm. Inequalities, 3(3): 159–175.
Li, A.; Li, B.; Han, C.; and Guo, T. 2022. Rethinking optimal pivoting paths of simplex method. arXiv preprint arXiv:2210.02945.
Maros, I. 2012. Computational Techniques of the Simplex Method, volume 61. Springer Science & Business Media.
Matoušek, J.; Sharir, M.; and Welzl, E. 1992. A subexponential bound for linear programming. In Proceedings of the Eighth Annual Symposium on Computational Geometry, 1–8.
Nair, V.; Bartunov, S.; Gimeno, F.; Von Glehn, I.; Lichocki, P.; Lobov, I.; O'Donoghue, B.; Sonnerat, N.; Tjandraatmadja, C.; Wang, P.; et al. 2020. Solving mixed integer programs using neural networks. arXiv preprint arXiv:2012.13349.
Nickel, S.; Steinhardt, C.; Schlenker, H.; and Burkart, W. 2022. IBM ILOG CPLEX Optimization Studio—a primer. In Decision Optimization with IBM ILOG CPLEX Optimization Studio: A Hands-On Introduction to Modeling with the Optimization Programming Language (OPL), 9–21. Springer.
Pan, P.-Q. 2008. A largest-distance pivot rule for the simplex algorithm. European Journal of Operational Research, 187(2): 393–402.
Paulus, M. B.; and Krause, A. 2023. Learning to dive in branch and bound. arXiv preprint arXiv:2301.09943.
Roos, C. 1986. A pivoting rule for the simplex method which is related to Karmarkar's potential function. Manuscript, Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands, 78–94.
Roos, C. 1990. An exponential example for Terlaky's pivoting rule for the criss-cross simplex method. Mathematical Programming, 46: 79–84.
Sonnerat, N.; Wang, P.; Ktena, I.; Bartunov, S.; and Nair, V. 2021. Learning a large neighborhood search algorithm for mixed integer programs. arXiv preprint arXiv:2107.10201.
Suriyanarayana, V.; Tavaslioğlu, O.; Patel, A. B.; and Schaefer, A. J. 2022. Reinforcement learning of simplex pivot rules: a proof of concept. Optimization Letters, 16(8): 2513–2525.
Tamura, A.; Takehara, H.; Fukuda, K.; Fujishige, S.; and Kojima, M. 1988. A dual interior primal simplex method for linear programming. Journal of the Operations Research Society of Japan, 31(3): 413–430.
Todd, M. J. 1990. A Dantzig-Wolfe-like variant of Karmarkar's interior-point linear programming algorithm. Operations Research, 38(6): 1006–1018.
Vanderbei, R. J. 2020. Linear Programming. Springer.
Xpress, F. 2014. FICO Xpress Optimization Suite.
Yang, Y. 2020. A double-pivot simplex algorithm and its upper bounds of the iteration numbers. Research in the Mathematical Sciences, 7(4): 34.
Ye, Y. 2011. The simplex and policy-iteration methods are strongly polynomial for the Markov decision problem with a fixed discount rate. Mathematics of Operations Research, 36(4): 593–603.
Using Clustering to Strengthen Decision Diagram Bounds for Discrete Optimization
Mohsen Nafar, Michael Römer
Management Science and Business Analytics Department, Bielefeld University
[email protected], [email protected]

Abstract
Offering a generic approach to obtaining both upper and lower bounds, decision diagrams (DDs) are becoming an increasingly important tool for solving discrete optimization problems. In particular, they provide a powerful and often complementary alternative to other well-known generic bounding mechanisms such as the LP relaxation. A standard approach to employing DDs for discrete optimization is to formulate the problem as a Dynamic Program and use that formulation to compile a DD top-down in a layer-by-layer fashion. To limit the size of the resulting DD and to obtain bounds, one typically imposes a maximum width for each layer, which is then enforced by either merging nodes (resulting in a so-called relaxed DD that provides a dual bound) or by dropping nodes (resulting in a so-called restricted DD that provides a primal bound). The quality of the DD bounds obtained from this top-down compilation process heavily depends on the heuristics used for the selection of the nodes to merge or drop. While it is sometimes possible to engineer problem-specific heuristics for this selection problem, the most generic approach relies on sorting the layer's nodes based on objective function information. In this paper, we propose a generic and problem-agnostic approach that relies on clustering nodes based on the state information associated with each node. In a set of computational experiments with different knapsack and scheduling problems, we show that our approach generally outperforms the classical generic approach, and often achieves drastically better bounds both with respect to the size of the DD and the time used for compiling the DD.

Introduction
Solving discrete optimization problems is a challenging task which has kept generations of researchers from fields such as Mathematics, Computer Science, and Operations Research busy. Given the impressive progress in the fields of Artificial Intelligence (AI) and Machine Learning (ML) in recent years, it seems natural that there is an ever-increasing amount of research that aims at leveraging the power of ML for solving optimization problems, see e.g. the surveys (Bengio, Lodi, and Prouvost 2021; Kotary et al. 2021; Cappart et al. 2023).

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

The vast majority of the research dedicated to employing ML approaches for discrete optimization either deals with speeding up exact optimization approaches, e.g. by learning to take better branching decisions in branch-and-bound solvers (Khalil et al. 2016), or with using ML to replace human-designed algorithmic decisions within heuristic solution approaches. As an example, the recent years have seen tremendous improvements in so-called Deep Reinforcement Learning (DRL) approaches for routing problems, which nowadays come very close to the best handcrafted heuristics that build upon decades of research (Hottung, Kwon, and Tierney 2021). While ML-based heuristics provide high-quality primal bounds for discrete optimization problems, there are relatively few works focusing on exploiting ML techniques for obtaining strong dual bounds, despite the fact that dual bounding mechanisms constitute a critical ingredient in exact discrete optimization solvers.

Perhaps one of the first works aiming at strengthening dual bounds with the help of ML was that of Cappart et al. (2019), who proposed to employ DRL to improve bounds obtained with so-called approximate Decision Diagrams (DDs). DDs, initially introduced for representing Boolean circuits, are layered graphical data structures that can be used for compactly representing the solution space of a discrete optimization problem. In particular, one can construct two types of limited-size approximate DDs that provide optimization bounds: restricted DDs, in which certain feasible nodes are discarded in order to obtain an under-approximation of the solution space, and relaxed DDs, in which certain nodes are merged in order to obtain an over-approximation that provides a dual bound. As demonstrated by Bergman et al. (2016), these bounds can be used in a purely DD-based branch-and-bound algorithm that is able to achieve state-of-the-art performance for certain discrete optimization problems. In addition to their computational efficiency for certain problem types, one of the key benefits of DD-based approaches is that they are highly generic in the sense that, in order to apply them, one basically only needs a Dynamic Programming (DP) formulation of the problem under consideration along with the specification of a so-called merge operator. For an excellent survey of the recent advancements in DD-based approaches for solving discrete optimization problems, we refer to (Castro, Cire, and Beck 2022).

The quality of the bounds obtained with approximate DDs depends on certain heuristic decisions to be taken during the DD compilation process. It thus seems natural to harness the power of modern ML approaches for guiding those decisions. Indeed, in the above-mentioned paper, Cappart et al. (2019) use DRL to determine the so-called variable ordering, that is, the order in which decision variables are considered when compiling a DD layer by layer in a top-down fashion. They show that for a given maximum width of each layer, the ML-supported approach can substantially improve the bounds compared to the variable ordering heuristics considered in the literature. In two follow-up works (Parjadis et al. 2021; Cappart et al. 2022), the authors show that despite the fact that the ML-based compilation of approximate DDs is slower than the standard approaches, this bound improvement leads to a significant overall speed-up of an exact DD-based branch-and-bound solver.

In this paper, we follow this line of research by using ML to guide another critical decision taken when compiling approximate DDs: the decision of which nodes to merge (in case of a relaxed DD) or to discard (in case of a restricted DD) when the number of nodes in a DD layer exceeds the maximum permitted width. For the rest of this paper, we refer to this decision as node selection. Given the importance of this decision for the quality of the bounds, there has been considerable research on devising good node selection heuristics. These heuristics, which will be reviewed in more detail in the next section, rely on the information associated with each node and typically either sort the nodes according to some criterion and select the "worst" ones for discarding / merging, or group nodes based on their similarity.
While some existing node selection heuristics are highly generic (e.g. because they only rely on objective function information), most of the heuristics considered in the literature are tailored to the problem under consideration.

The main contribution of this paper is to propose a new ML-based node selection heuristic that relies on clustering the nodes according to the state information associated with each node during the compilation process. One of the advantages of this approach is that it is highly generic: it does not require specifying any problem-dependent heuristic or strategy, since it operates with feature information that can be inferred from the DP formulation of the problem under consideration. In a set of computational experiments with five different discrete optimization problems, our approach consistently outperforms the standard generic node selection approach from the DD literature, often achieving drastically better bounds both with respect to the size of the approximate DD and the time used for its compilation.

Also using clustering within DD-based combinatorial optimization, the work most closely related to ours is (Coppé, Gillard, and Schaus 2023). The authors use state clustering to obtain an aggregated DP problem that is solved exactly, and the results are used to guide their DD-based branch-and-bound algorithm in various ways. In particular, they use the aggregate solution to assign scores for sorting nodes in the node selection problem, which is very different from our approach that directly uses the node clusters for merging (relaxed DD) and for selecting one candidate per cluster to be kept (restricted DD).

Decision Diagrams for Optimization
A decision diagram D = (N, A) is a layered directed acyclic graph with node set N and arc set A. The paths in D represent solutions to a discrete optimization problem P with a maximization objective function f and an n-dimensional vector of decision variables x1, . . . , xn ∈ Z. The node set N is partitioned into n + 1 layers N1, . . . , Nn+1, where N1 = {r} and Nn+1 = {t} for a root node r and a terminal node t. Each arc a = (u, v) connects two consecutive layers and is associated with a decision d(a) representing the assignment xu = d(a). This means that a path p = (a1, . . . , an) starting from r and ending at t represents the solution x(p) = (d(a1), . . . , d(an)). We denote the set of all r-t paths with P, and we refer to the solutions to P represented by P with Sol(D). Moreover, each arc a has a length ℓ(a), and the length of a path p is ℓ(p) = ℓ(a1) + · · · + ℓ(an). We refer to D as exact if Sol(D) = Sol(P) and for each path p ∈ P we have ℓ(p) = f(x(p)). In that case, one can determine an optimal solution to P by determining a longest r-t path in D.

An aspect limiting the practical usefulness of exact DDs is their size, which in general is exponential in the number of variables n. To address this issue, one can resort to smaller approximate DDs that can be used to obtain upper or lower bounds for the solutions of P, and that can be used e.g. in a DD-based branch-and-bound procedure. There are two types of approximate DDs. In a restricted DD D, one aims at considering only promising nodes and arcs, meaning that Sol(D) ⊆ Sol(P), that is, not all feasible solutions to P are represented as paths in D; thus, the longest path in a restricted DD provides a lower bound for P. The second type of approximate DD, the relaxed DD, provides an upper bound: in a relaxed DD, we have Sol(D) ⊇ Sol(P), that is, the set of paths may contain paths associated with infeasible solutions to P. Regarding the objective function value, every path p in a relaxed DD needs to satisfy ℓ(p) ≥ f(x(p)), that is, the length of a path in a relaxed DD must not underestimate the true objective function value of its associated solution to P. In both restricted and relaxed DDs, a common approach to control the size of the DD is to impose a maximum width W for each layer.

As explored in detail in (Hooker 2013), DDs exhibit a close link to Dynamic Programming (DP): from a DP perspective, an exact DD for a discrete optimization problem P is very similar to the state-transition graph of a DP formulation of P in which every node u is associated with a state Su and every arc a is associated with a state transition induced by the decision d(a) associated with a. Su is an element of the state space S; the state Sr associated with the root node r is the so-called initial state. The state Sv of the target node v of the arc depends on the state Su of the arc's source node as well as on d, and it is computed by the state-transition function f(Su, d). The (possibly state-dependent) objective function contribution of a decision is computed by a reward function g(Su, d). Finally, the set of out-arcs of a node u is determined by the set of feasible decisions X(Su) given state Su.

Given a DP formulation DP comprising the definition of the state space S including the initial state Sr and the functions X, f and g, one can compile a decision diagram D using a top-down compilation algorithm akin to a forward dynamic programming algorithm, sometimes referred to as DP by reaching (Kellerer, Pferschy, and Pisinger 2004). A variant of such a top-down compilation procedure is displayed in Algorithm 1. The procedure takes a DP formulation DP, a DD D containing only the root node, and the maximum width W. Calling the algorithm with an unlimited width W will yield an exact DD; otherwise, depending on the operation performed in line 8, it will result in a restricted or relaxed DD.

Algorithm 1: Generic top-down compilation algorithm
1: procedure COMPILETOPDOWN(DP, D, W)
2:   for j = 1 to n do
3:     for all u ∈ Nj do
4:       for all d ∈ X(Su) do
5:         v = GETORADDNODE(Nj+1, f(Su, d))
6:         ADDARC(u, v, d)
7:     if |Nj+1| > W then
8:       RELAXLAYER(Nj+1) or RESTRICTLAYER(Nj+1)

The algorithm proceeds layer by layer, starting from the root and ending at layer n, which is the last layer before the terminal node t. For each layer j, it iterates through all nodes Nj. For each node u, it considers all feasible decisions, and for each feasible decision d, the function GETORADDNODE is used to determine the target node v in the next layer j + 1 by either retrieving the node associated with the resulting state f(Su, d) if it already exists, or by creating a new one. If j = n, GETORADDNODE always returns the terminal node t. The algorithm then adds an arc associated with decision d from u to the target node v. After the creation of all nodes in layer j + 1, it is checked whether the number of nodes exceeds W. If that is the case, the size of the layer is reduced by discarding (in a restricted DD) or merging (in a relaxed DD) nodes.
To address this issue, some authors aim at grouping nodes to merge according to some similarity measure. As an example, (Horn et al. 2021) propose to use so-called collector nodes that aim at merging states that have the same value with respect to a labeling function. A similar approach was recently used by (de Weerdt, Baart, and He 2021) who merge nodes based on partitioning the state space for a single machine scheduling problem with release times, deadlines, setup times and rejection. The key intuition behind these similarity-based approaches is that similar nodes should have similar set of feasible completions, and thus, the risk of negatively impacting the bounds by merging very heterogeneous nodes is smaller. Clustering-Based Approximate DD Layers The approach proposed in this paper can also be considered as a similarity-based approach to node selection. However, while the papers mentioned above rely on partitioning the state space, e.g. by devising a labeling function, we propose to obtain sets of similar nodes by applying standard clustering approaches to group the nodes in the layer. This approach has several advantages: First, the modeler does not need to specify any problem-specific labeling function or partitioning of the state space S since the clustering algorithms are problem-agnostic and use information that is readily available in any DP-based DD compilation method. Note, however, that it is nonetheless possible to adapt the clustering in different ways by specifying distance functions or selecting and tuning clustering algorithms. Second, the approaches mentioned above rely on an “a priori” partitioning of the full state space S, which may lead to situations where similar states end up in different partitions. Since our clustering approach directly operates with the node states in the layer Nj+1, we can assume that the grouping of the nodes is better adapted to concrete set of nodes under consideration. Third, and related to the second point, an a priori partitioning approach may lead to many “empty buckets”, and thus one cannot directly control the number of nodes resulting from the partitioning. In our approach, we can control the size of the layer by employing clustering approaches such as k-means clustering in which we can directly control the width of the layer to be constructed. Observe that our work is not the first to employ ML in the context of node selection decisions for compiling relaxed DDs: (Frohner and Raidl 2019) used a binary classification approach based on information from a limited lookahead for dynamically determining the merge heuristic to use in a given layer. While they show that this approach can achieve better bounds than using only a single merge heuristic, the reliance on the lookahead results in a large computational overhead compared to non-ML based approaches. In our approach, however, ML plays a much more direct role in supporting node selection since the result of the clustering algorithm can be directly mapped to the groups of nodes to be merged. Our clustering approach to node selection can in principle work with almost any clustering algorithm that can operate with the state information defined in the DP model for the discrete optimization model under consideration. In this paper, and in our computational results, we resorted to a standard implementation of a general k-means clustering algorithm, allowing us to explicitly specify the number of clusters to be constructed. 
In case of relaxed DDs, the clustering-based version of RELAXLAYER proceeds by applying the clustering algorithm to all the associated nodes in the layer. Observe that this clustering is performed online, that is, without any pre-trained clustering model. For each of the k clusters, we apply the merge operation to obtain the merged state. In the case of a restricted DD, the same clustering logic applies. However, instead of applying the merge operator, we select the node with the best objective function value and discard the other nodes from the cluster. Example (continued). We illustrate our clustering-based approach for compiling a relaxed DDs for the example introduced in the previous section. Now, instead of using the classical sorting-based approach, we cluster the nodes according to their states and merge all the nodes being present in the same cluster. Figure 2 displays the progression of the top-down compilation based on the clustering for the same instance that was used in Figure 1. Specifically, the clustering is obtained using the k-means clustering algorithm. Different clusters are highlighted with different colors. Figure 2: A relaxed DD compiled via clustering-based approach for P2||ΣwjCj with W = k = 3 gives a solution with 8% gap. It turns out that the clustering leads to a different configuration of the layers that need to be relaxed, and that the bound is much stronger than the bound obtained with the standard “sortObj” approach illustrated in Figure 1. The reason for this is that in the sorting-based approach, one tends to merge very different states (e.g. the states (6, 0) and (0, 6), resulting in the merged state (0, 0)) which has a highly detrimental effect on the overall bound, while such a situation is avoided in the clustering approach to node merging. Our clustering-based node selection approach can be embedded in the top-down compilation for DDs in different ways. In the most natural variant, we assume that the number k of clusters to be created is equal to the maximum width W of the approximate DD. An alternative approach is to choose The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8085 k < W: Intuitively, this way the algorithm is able to explore more diverse parts of the solution space. Moreover, choosing k to be much smaller than W will lower the total running time because the clustering will not be used in every layer and because the final DD will be smaller. Computational Results In this section we present the results from computational experiments with our approach (i.e. clustering-based node selection) and compare it to that of minLP/maxSP approach (i.e. sortObj) for both variants discussed above, that is, k = W and k < W, on two knapsack problems and three machine scheduling problems. We also experimented with alternative generic node-selection heuristics, e.g. random selection, maxState (sort nodes in KP based on state value) and minState (sort nodes in the scheduling problems based on minimum value among the time-related state representation coordinates). The obtained bounds, however, were worse than sortObj, which is itself dominated by our proposed approach, and thus are not reported below. We implemented the approach in the Julia programming language (code is available here https://github.com/mnafar/ aaai2024 clustering DD), and we ran all the experiments on a Windows machine with processor 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz, 2.30 GHz and 16GB RAM. 
Results for the First Variant: k = W
We start with the main variant, in which the number k of clusters equals the maximum width of the approximate DD, on five different discrete optimization problems. We proceed by briefly describing the problems, including a sketch of the state information used for compiling the DD (and for the clustering, if not mentioned otherwise), the merge operator needed for relaxation, and the instances that were used. For each problem, we compare the primal and dual bounds obtained with our clustering approach to those obtained with the standard sortObj approach.

0/1 Knapsack Problem (KP). Given n items, each having weight wi and profit pi, the goal is to select items that maximize the total profit such that the accumulated sum of the weights of the selected items does not exceed the knapsack's capacity C. The state for compiling the DD is a positive integer representing the accumulated weight in a partial solution. A valid merge operator consists in choosing the state with the minimum weight. The experiments are run on 100 KP instances with 200 items per instance, taken from (Pisinger 2005). Figure 3 shows the average bounds obtained by sortObj versus those obtained by clustering-based node selection using different maximum widths (panel (a)), and their running times in milliseconds (panel (b)). In this figure, the green line displays the optimum, the red curves show the lower and upper bounds using sortObj, and the black dashed curves represent the clustering-based node selection results. As becomes clear from the graphs, our approach outperforms sortObj in all aspects: it provides substantially better primal and dual bounds with a smaller W (implying a smaller DD) and in a smaller computation time. Moreover, experiments with instances involving 10000 items, even with a maximum width of 1000, did not reveal any scalability issues.

Figure 3: Clustering vs sortObj (KP); panels (a) bound vs W and (b) bound vs time.
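To connect the KP description above to the earlier compilation sketch, here is a minimal sketch of the KP state machinery; the names and the way the per-item data is bound in are ours:

```python
CAPACITY = 100  # illustrative capacity C

def kp_feasible(state, w_i):
    """Skip the item (0); take it (1) only if it still fits."""
    return (0, 1) if state + w_i <= CAPACITY else (0,)

def kp_transition(state, d, w_i):
    return state + d * w_i        # state = accumulated weight

def kp_reward(d, p_i):
    return d * p_i                # collected profit

def kp_merge(states):
    return min(states)            # relaxation: keep the loosest weight
```

At stage i, one would pass e.g. `lambda s: kp_feasible(s, w[i])` and the matching transition/reward closures to the compilation routine; `kp_merge` is the merge operator stated in the paragraph above.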
Multidimensional Knapsack Problem (MKP). The MKP is a generalization of the KP with multiple capacity constraints. An instance of MKP with n items and m dimensions has a capacity bound (C1, . . . , Cm), where pi and (wi1, . . . , wim) are the profit and weights of item i. Similar to the KP, the goal is to select a subset of items whose sum of profits is maximized such that all the capacity constraints hold simultaneously. In a DP model used for compiling a DD for the MKP, a state is an m-tuple in which every coordinate is the sum of the corresponding coordinates of the weights of the items that have been selected in the partial solution associated with the node under consideration. A valid merge operator for the MKP is to take the element-wise minimum of the coordinates of the states to be merged.

In the case of the MKP, we did not only use the state coordinates as features for the clustering, but also the objective function value associated with each node in the DD. All experiments are run on 10 MKP instances, each of which is a 5-dimensional MKP with 100 items (taken from (Chu and Beasley 1998) and publicly available in the OR-LIBRARY at http://people.brunel.ac.uk/~mastjjb/jeb/orlib/mknapinfo.html). Figure 4 shows the average bounds obtained by sortObj versus those obtained using clustering-based node selection for different maximum widths, and their running times in milliseconds. Once again, it turns out that the clustering-based approach yields much better bounds than the standard sortObj approach, both in relation to DD size and to the bound quality obtained in a certain amount of time.

Figure 4: Clustering vs sortObj (MKP); panels (a) upper bound vs W, (b) upper bound vs time, (c) lower bound vs W, and (d) lower bound vs time.

Sum of Cubed Job Completion Times on Two Identical Machines P2||ΣC3j. In an instance of P2||ΣC3j, we are given n jobs, a job i has processing time pi, and the goal is to schedule these jobs on two identical machines such that the sum of the cubes of the job completion times is minimized. Every state in a DD for P2||ΣC3j is a 2-tuple where each of its coordinates represents the partial completion time on the corresponding machine. The merge operator is similar to that of the MKP, where a merged state is comprised of the element-wise minimum of the coordinates over the states that are being merged. The instances we ran the experiments on are from the set wt100 that is publicly available in the OR-LIBRARY (http://people.brunel.ac.uk/~mastjjb/jeb/info.html). They were originally generated for the weighted tardiness problem on a single machine; we took the processing times of the jobs from these instances, and thus the experiments are run on 125 instances, each of which contains 100 jobs. The results of using sortObj and clustering-based node selection for building the corresponding DDs are shown in Figure 5. For this problem, again, the difference between the performance of the two approaches is significant, i.e., the approach that uses clustering-based node selection is by far superior.

Figure 5: Clustering vs sortObj (P2||ΣC3j); panels (a) upper bound vs W, (b) upper bound vs time, (c) lower bound vs W, and (d) lower bound vs time.

Total Weighted Job Completion Time on Two Identical Machines P2||ΣwjCj. A description of this problem is given in the example above. Once again, we performed experiments with the 125 instances from the OR-LIBRARY used for P2||ΣC3j. Figure 6 shows the performance of the two approaches for different maximum widths and their running times. We see that for this problem, too, our approach outperforms the baseline approach sortObj.

Figure 6: Clustering vs sortObj (P2||ΣwjCj); panels (a) upper bound vs W, (b) upper bound vs time, (c) lower bound vs W, and (d) lower bound vs time.

Weighted Number of Tardy Jobs on a Single Machine 1||ΣwjUj. Given n jobs, where pi, wi, and di are the processing time, weight, and due date of job i, the goal is to schedule the jobs on a single machine such that the total weight of the tardy jobs is minimized. The state representation consists of a positive integer which measures the total processing time of the scheduled early jobs. Moreover, a merge operator for 1||ΣwjUj consists in choosing the state with the minimum processing time. We experimented with 25 large instances with 500 jobs from (Tanaka, Fujikuma, and Araki 2009). The average bounds and running times of the different approaches on these instances are shown in Table 1, and they confirm the superiority of our approach, which yields much better bounds with much smaller DD widths than the sortObj approach. Moreover, we performed experiments to obtain alternative bounds: we formulated the problem as a MILP with positional variables (Keha, Khowala, and Fowler 2009). In our computational experiments, after an imposed time limit of 2 minutes, Gurobi found feasible solutions for all tested instances, but both the primal and the dual bounds were far inferior to those obtained with our approach in less than one and two seconds, respectively (see Table 1).

          clustering-based               sortObj                        IP (Gurobi)
          W=100         W=500            W=100         W=1000
          time  primal  time  primal     time  primal  time  primal    time  primal
          0.1   1.36    1.3   1.2        0.1   2.00    1.1   1.45      120   2.76
          time  dual    time  dual       time  dual    time  dual      time  dual
          0.1   0.14    1.6   0.30       2     0.05    2.7   0.23      120   0.06

Table 1: Clustering, sortObj, and IP for 1||ΣwjUj (times in seconds).

Results for the Second Variant: k < W
Next, we present the variant in which the number of clusters k is smaller than the maximum width W of the approximate DD to be compiled. We illustrate the results for the 0/1 Knapsack problem, noting that the results for the other problems follow similar patterns. In the following figures, the solid red curves represent DDs built using the sortObj node selection heuristic, and the dashed colored curves correspond to DDs compiled using clustering-based node selection; every dashed curve corresponds to a specific maximum width W and different numbers k of clusters (e.g. W = 200, k ∈ {10, 50, 100, 200} or W = 50, k ∈ {10, 20, 50}).

Figure 7 shows the bounds obtained using different W and k for the KP. In the sortObj case, the maximum width is set to W = 3000; for the clustering approach, different maximum widths (W ∈ {10, 50, 100, 200, 500}) and different numbers k of clusters are used.
Figure 7: Clustering (dashed curves for various k) vs sortObj (red curves for W = 3000) for the KP; panels (a) lower bound vs k and (b) upper bound vs k.

Figure 8: Comparison of the sizes of the DDs; panels (a) lower bounds vs size and (b) upper bounds vs size.

Figure 8 compares the sizes and the bounds of DDs compiled using clustering-based node selection and sortObj for the KP. The range of maximum widths considered in the experiments is [10, 500].

Figure 9: Comparison of the running times of the DDs; panels (a) lower bounds vs time and (b) upper bounds vs time.

Figure 9 shows how changing the parameters of sortObj and of clustering-based node selection affects their running times, and compares these running times and the bounds of the two approaches applied to the KP. The ranges of the maximum widths are [10, 1000] and [10, 500] for sortObj and clustering-based node selection, respectively. The reason for using a larger W for sortObj is that we wanted the maximum running times to be roughly equal, for the sake of the readability of the graphs.

As Figures 7, 8, and 9 make clear, choosing k < W does not noticeably decrease the quality of the bounds obtainable via DDs compiled using clustering-based node selection. However, it does decrease the size and running time required to provide bounds that are close to those achievable in the first variant. Therefore, this variant outperforms the sortObj baseline even more than the first variant, in which k = W.

Conclusion
In this paper, we propose a novel and generic ML-based approach for node selection in the top-down compilation of approximate DDs that relies on clustering nodes according to their state information. We evaluated two variants of this approach on five different problem types, showing that it is able to provide substantially stronger bounds, in relation to the size of the DD and the time needed to obtain the bounds, than a similarly generic sorting-based approach that is commonly used in the literature. It is important to note that all the results presented in this paper were obtained with a standard k-means clustering approach. In general, if one aims at improving the performance for certain problems, a natural approach would be to experiment with alternative clustering approaches and to tune the parameters of the selected clustering approach. Another natural extension of this research is to evaluate our approach within an exact DD-based solution approach, e.g. DD-based branch-and-bound. This would allow us to see whether the bound improvements achieved in this paper translate into a speed-up for exactly solving discrete optimization problems with DDs.
Acknowledgments
This research was funded by the Return Programme of the Federal State of North Rhine-Westphalia (NRW Rückkehrprogramm).

References
Bengio, Y.; Lodi, A.; and Prouvost, A. 2021. Machine learning for combinatorial optimization: a methodological tour d'horizon. European Journal of Operational Research, 290(2): 405–421.
Bergman, D.; Cire, A. A.; Van Hoeve, W.-J.; and Hooker, J. N. 2016. Discrete optimization with decision diagrams. INFORMS Journal on Computing, 28(1): 47–66.
Bruno, J.; Coffman Jr, E. G.; and Sethi, R. 1974. Scheduling independent tasks to reduce mean finishing time. Communications of the ACM, 17(7): 382–387.
Cappart, Q.; Bergman, D.; Rousseau, L.-M.; Prémont-Schwarz, I.; and Parjadis, A. 2022. Improving variable orderings of approximate decision diagrams using reinforcement learning. INFORMS Journal on Computing, 34(5): 2552–2570.
Cappart, Q.; Chételat, D.; Khalil, E. B.; Lodi, A.; Morris, C.; and Velickovic, P. 2023. Combinatorial optimization and reasoning with graph neural networks. Journal of Machine Learning Research, 24: 130–1.
Cappart, Q.; Goutierre, E.; Bergman, D.; and Rousseau, L.-M. 2019. Improving optimization bounds using machine learning: Decision diagrams meet deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 1443–1451.
Castro, M. P.; Cire, A. A.; and Beck, J. C. 2022. Decision diagrams for discrete optimization: A survey of recent advances. INFORMS Journal on Computing, 34(4): 2271–2295.
Chu, P. C.; and Beasley, J. E. 1998. A genetic algorithm for the multidimensional knapsack problem. Journal of Heuristics, 4: 63–86.
Coppé, V.; Gillard, X.; and Schaus, P. 2023. Boosting Decision Diagram-Based Branch-And-Bound by Pre-Solving with Aggregate Dynamic Programming. In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
de Weerdt, M.; Baart, R.; and He, L. 2021. Single-machine scheduling with release times, deadlines, setup times, and rejection. European Journal of Operational Research, 291(2): 629–639.
Frohner, N.; and Raidl, G. R. 2019. Merging quality estimation for binary decision diagrams with binary classifiers. In International Conference on Machine Learning, Optimization, and Data Science, 445–457. Springer.
Graham, R.; Lawler, E.; Lenstra, J.; and Kan, A. 1979. Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey. In Hammer, P.; Johnson, E.; and Korte, B., eds., Discrete Optimization II, volume 5 of Annals of Discrete Mathematics, 287–326. Elsevier.
Hooker, J. N. 2013. Decision diagrams and dynamic programming. In Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems: 10th International Conference, CPAIOR 2013, Yorktown Heights, NY, USA, May 18-22, 2013, Proceedings 10, 94–110. Springer.
Hooker, J. N. 2017. Job sequencing bounds from decision diagrams. In International Conference on Principles and Practice of Constraint Programming, 565–578. Springer.
Horn, M.; Maschler, J.; Raidl, G. R.; and Rönnberg, E. 2021. A*-based construction of decision diagrams for a prize-collecting scheduling problem. Computers & Operations Research, 126: 105125.
Hottung, A.; Kwon, Y.-D.; and Tierney, K. 2021. Efficient active search for combinatorial optimization problems. arXiv preprint arXiv:2106.05126.
Keha, A. B.; Khowala, K.; and Fowler, J. W. 2009. Mixed integer programming formulations for single machine scheduling problems. Computers & Industrial Engineering, 56(1): 357–367.
Kellerer, H.; Pferschy, U.; and Pisinger, D. 2004. Knapsack Problems. Springer, Berlin, Germany.
Khalil, E.; Le Bodic, P.; Song, L.; Nemhauser, G.; and Dilkina, B. 2016. Learning to branch in mixed integer programming. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.
Kotary, J.; Fioretto, F.; Van Hentenryck, P.; and Wilder, B. 2021. End-to-end constrained optimization learning: A survey. arXiv preprint arXiv:2103.16378.
Lenstra, J. K.; Kan, A. R.; and Brucker, P. 1977. Complexity of machine scheduling problems. In Annals of Discrete Mathematics, volume 1, 343–362. Elsevier.
Parjadis, A.; Cappart, Q.; Rousseau, L.-M.; and Bergman, D. 2021. Improving branch-and-bound using decision diagrams and reinforcement learning. In Integration of Constraint Programming, Artificial Intelligence, and Operations Research: 18th International Conference, CPAIOR 2021, Vienna, Austria, July 5–8, 2021, Proceedings 18, 446–455. Springer.
Pisinger, D. 2005. Where are the hard knapsack problems? Computers & Operations Research, 32(9): 2271–2284.
Tanaka, S.; Fujikuma, S.; and Araki, M. 2009. An exact algorithm for single-machine scheduling without machine idle time. Journal of Scheduling, 12: 575–593.
Propagation Tree Is Not Deep: Adaptive Graph Contrastive Learning Approach for Rumor Detection
Chaoqun Cui, Caiyan Jia*
School of Computer and Information Technology & Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing 100044, China
{21120341,cyjia}@bjtu.edu.cn

Abstract
Rumor detection on social media has become increasingly important. Most existing graph-based models presume rumor propagation trees (RPTs) have deep structures and learn sequential stance features along branches. However, through statistical analysis on real-world datasets, we find RPTs exhibit wide structures, with most nodes being shallow 1-level replies. To focus learning on intensive substructures, we propose the Rumor Adaptive Graph Contrastive Learning (RAGCL) method with adaptive view augmentation guided by node centralities. We summarize three principles for RPT augmentation: 1) exempt root nodes, 2) retain deep reply nodes, 3) preserve lower-level nodes in deep sections. We employ node dropping, attribute masking, and edge dropping with probabilities derived from centrality-based importance scores to generate views. A graph contrastive objective then learns robust rumor representations. Extensive experiments on four benchmark datasets demonstrate that RAGCL outperforms state-of-the-art methods. Our work reveals the wide-structure nature of RPTs and contributes an effective graph contrastive learning approach tailored for rumor detection through principled adaptive augmentation. The proposed principles and augmentation techniques can potentially benefit other applications involving tree-structured graphs.

Introduction
The unprecedented growth of the Internet in recent years has promoted the widespread application of social media. Digital platforms like Weibo and Twitter have evolved into critical conduits for users to garner information and interact with each other. These platforms, while facilitating information dissemination and diverse opinion expression on a multitude of trending issues, are also breeding grounds for various rumors. Given the massive user base and the ease of use, rumors are disseminated extensively and swiftly via social media, wreaking substantial societal havoc. Therefore, there is an urgent need to establish efficacious and efficient strategies for automated rumor verification on social media.

Currently, a plethora of studies concerning rumor detection exist. Certain studies (Bian et al. 2020; Wei et al. 2021) have demonstrated that the propagation structure of a claim, which fully encapsulates the interrelationship between posts and harnesses the collective intelligence of the crowd, is invaluable for debunking rumors. In general, rumor detection models built upon rumor propagation structures glean discriminative features of rumors from the interrelation of comments, apprehending specific patterns of reply stances as the basis for claim classification, given that clear disparities exist between the comment stances of rumor claims and those of non-rumor claims. These models are adept at discerning these differences, which constitutes one of the fundamental postulates of rumor detection methods based on rumor propagation trees (Ma, Gao, and Wong 2018). This supposition relies, to a certain degree, on the deep structures of rumor propagation trees (RPTs). But are RPTs really deep?

*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Bearing this question in mind, we undertook a statistical analysis to explore the structural characteristics of RPTs. The findings indicate that, in commonly employed rumor detection datasets and real-world social media platforms, the tree structures of claims are typically shallow. The vast majority of a claim's comments constitute 1-level replies, with the remainder primarily consisting of 2-level replies, and only a negligible number delve into deeper levels. This essentially implies that RPTs are not characterized by deep tree structures, but instead exhibit wide structures. As per our statistical analysis, the majority of nodes in RPTs are 1-level replies, all pointing directly to the root node (i.e., the source post), which highlights the significance of the root node. Further, it is plausible that the majority of the 1-level replies, due to their lack of deeper engagement, may contain less informational value in rumor identification compared to nodes with more extensive paths. Based on these findings, we propose the Rumor Adaptive Graph Contrastive Learning (RAGCL) method. RAGCL utilizes node centrality measures to generate augmented views of RPTs and leverages graph contrastive learning methods to facilitate graph neural networks (GNNs) in learning crucial rumor discriminative features from the deep sections of RPTs. Empirical studies demonstrate the effectiveness of RAGCL. In summary, the contributions of this study are as follows.
• Our statistical survey has unveiled that RPTs primarily exhibit a wide tree structure, breaking the stereotype of a deep tree structure in previous studies. This shifts the understanding of information propagation processes on social media platforms.
• Informed by the structural characteristics of RPTs and inspired by current research on graph self-supervised learning (Zhu et al. 2021), we propose the RAGCL method to learn discriminative features for rumor detection.
• In light of the unique tree structure of RPTs, we propose three guiding principles to be followed when designing adaptive data augmentation methods for RPTs.
• Our experimental results underscore the superior performance of RAGCL in comparison to the current state-of-the-art (SOTA) methods and substantiate the validity of our three principles.

Related Work
In this section, we will review the related works on rumor detection and graph contrastive learning.

Social Media Rumor Detection
To debunk rumors, various efforts have been made. Among the existing studies, early methods mainly take advantage of traditional classification methods by using hand-crafted features (Castillo, Mendoza, and Poblete 2011; Kwon et al. 2013). In recent years, with the advent of deep learning, more effective approaches have emerged, resulting in significant improvements in rumor detection performance. These approaches can be broadly categorized into four classes: time-series based methods (Ma et al. 2016; Yu et al. 2017; Liu and Wu 2018), which model text content or user profiles as time series; propagation structure learning methods (Ma, Gao, and Wong 2018; Bian et al. 2020; Wei et al. 2021; Sun et al. 2022b), which consider the propagation structures of source rumors and their replies; multi-source integration methods (Karimi et al. 2018; Birunda and Devi 2021), which combine multiple resources of rumors including post content, user profiles, and heterogeneous relations between posts and users; and multi-modal fusion methods (Wang et al. 2018; Jin et al. 2017), which incorporate both post content and related images to effectively debunk rumors.
The significance of propagation structure information has been increasingly recognized in the field of rumor detection research. Numerous SOTA models bank on learning the representations of RPTs utilizing GNNs. Ma, Gao, and Wong (2018) designed a bottom-up and top-down tree-structured recursive neural network to extract information from RPTs. In a similar vein, Bian et al. (2020) applied a bidirectional GCN alongside a root node feature enhancement technique to address rumor detection tasks. Furthermore, Sun et al. (2022b) incorporated contrastive loss with adversarial training to learn representations robust to rumor noise. These studies bear testament to the efficacy of propagation structure learning in accurately identifying rumors.

Graph Contrastive Learning
The advancement of deep learning has instigated progress across numerous studies predicated on neural message passing algorithms (Gilmer et al. 2017). These algorithms learn graph representations in a supervised manner and have attained SOTA results across a wide array of tasks (Kipf et al. 2018; Xie and Grossman 2018; Chen et al. 2019). In recent years, graph self-supervised learning methods have gradually emerged to leverage unlabeled data for addressing the problem of scarce labeled data, with most of them being graph contrastive learning methods. Contrastive learning methods have been widely applied in the domain of image representation learning (He et al. 2020; Chen et al. 2020), and subsequently extended to the realms of text (Giorgi et al. 2020; Shi et al. 2019; Fang et al. 2020) and graph data (Velickovic et al. 2019; Sun et al. 2019). Graph contrastive learning methodologies have evolved from initial methods premised on mutual information maximization (Velickovic et al. 2019; Sun et al. 2019) to contemporary methods based on graph augmentation (Hassani and Khasahmadi 2020; You et al. 2020, 2021). Graph contrastive learning methods based on graph augmentation first employ diverse graph augmentation strategies (such as node drop, edge perturbation, etc.) to acquire varying views of a given graph, thereafter constructing positive and negative samples in the contrastive loss. Ultimately, graph representations are learned by minimizing the contrastive loss. To accommodate different types of graph datasets, several research studies have focused on adaptive graph augmentation (Zhu et al. 2021; You et al. 2021; Yin et al. 2022). RAGCL represents a novel adaptive graph augmentation approach for rumor detection to learn robust and discriminative representations of RPTs.

Analysis on Propagation Tree
Figure 1: The stances in rumor propagation trees. (a) False-Rumor; (b) True-Rumor.
As shown in Figure 1, current propagation structure learning based rumor detection methods are devoted to collecting support (S), deny (D), question (Q), comment (C) and other stances between a reply and its source post, and between pairs of replies (Ma, Gao, and Wong 2018). For different classes of claims, there exist noticeable differences in their stance patterns, which can serve as discriminative features for rumor identification. For instance, the true stance of a D-D relation is S, whereas the true stance of a D-S relation is D. Current propagation structure learning methods exploit stance features between sequential nodes on the same branch of the tree structure for rumor detection (Bian et al. 2020; Sun et al. 2022b).
Nonetheless, these features are dependent on the depth structure of RPTs. But, are RPTs truly deep?
In the present study, we conduct a thorough exploration of the structural properties inherent to RPTs, deploying a statistical approach. The datasets under scrutiny consist of Weibo (Ma et al. 2016), DRWeibo [1], Twitter15, and Twitter16 (Ma, Gao, and Wong 2017). Further, we survey two large-scale unlabeled public datasets, namely UWeibo [2] and UTwitter [3]. Data from these datasets originate from popular posts on Weibo and Twitter platforms, mirroring the universal traits of claims within social media environments. The statistical results are shown in Table 1. The entries beneath the dotted line in Table 1 denote the mean count of replies, 1-level replies, 2-level replies, deeper (>2) replies, and 1-level replies with subsequent replies per claim in the dataset, respectively. The statistics lead us to the ensuing conclusions.
• RPTs resemble wide trees rather than deep ones. 1-level replies constitute the majority of all replies within RPTs, with proportions of 65.1%, 77.8%, 70.7%, and 64.2% for the four labeled datasets respectively.
• Only a minimal portion of 1-level replies within RPTs spawn subsequent replies. Amongst all 1-level replies in RPTs, only a fraction give rise to further replies, with percentages of 9.7%, 6.4%, 10.4%, and 10.8%.
• Deep replies within a RPT are seldom observed. Deep replies make up a tiny fraction of all replies in RPTs, with percentages of 13.8%, 4.4%, 17.3%, and 23.4%. This implies that the model is constrained to learning the aforesaid stance features from a limited set of replies.
Both UWeibo and UTwitter datasets also exhibit these three characteristics, signifying that these traits are pervasive attributes of claims on social media platforms.
[1] https://github.com/CcQunResearch/DRWeibo
[2] https://github.com/CcQunResearch/UWeibo
[3] https://github.com/CcQunResearch/UTwitter
Figure 2: A rumor propagation tree.
The above observations paint the generalized structure of a RPT, as illustrated in Figure 2. It is noticeable that only nodes enclosed within the box in Figure 2 carry the aforementioned stance features, while most of the nodes in the tree are 1-level nodes without further reply (no deep structure). Based on the above observations, we can conceptualize a RPT as a highly imbalanced graph, with imbalances reflected in the following two aspects:
• The root node of a RPT features highly dense connections, whereas connections at the remaining nodes are exceedingly sparse.
• The intensive discussions and informative portions of a RPT are predominantly found within a limited number of 1-level replies (the two green nodes in Figure 2). In contrast, the majority of the 1-level replies that lack deeper responses also lack discriminative features that can aid in rumor identification.
Such characteristics are determined by users' habits of using social media and the order in which platforms display comments. In general, users are inclined to reply directly to source posts rather than to other users' comments. Additionally, platforms such as Weibo and Twitter tend to sort replies based on popularity rather than the chronological order of posting. This contributes to the imbalance in the information distribution within a propagation tree.
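The reply-level statistics above are straightforward to reproduce from raw propagation data. The following is a minimal sketch, assuming each RPT is given as a parent-pointer dictionary; the function and variable names are hypothetical, not taken from the RAGCL codebase:

```python
from collections import Counter

def reply_level_stats(parents):
    """Count replies per depth in one rumor propagation tree (RPT).

    `parents` maps each post id to its parent id; the root (source post)
    maps to None. Depth 1 corresponds to direct (1-level) replies.
    """
    def depth(node):
        d = 0
        while parents[node] is not None:
            node = parents[node]
            d += 1
        return d

    replies = [n for n in parents if parents[n] is not None]
    levels = Counter(depth(n) for n in replies)
    # 1-level replies that received at least one further reply
    responded_l1 = {parents[n] for n in replies if depth(parents[n]) == 1}
    total = len(replies)
    return {
        "replies": total,
        "1-level": levels.get(1, 0),
        "2-level": levels.get(2, 0),
        "deeper": total - levels.get(1, 0) - levels.get(2, 0),
        "responded 1-level": len(responded_l1),
    }

# Toy RPT: root 0; posts 1-4 are 1-level replies; 5 replies to 1; 6 replies to 5.
tree = {0: None, 1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 5}
print(reply_level_stats(tree))
# -> {'replies': 6, '1-level': 4, '2-level': 1, 'deeper': 1, 'responded 1-level': 1}
```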
With the aim of enhancing our model's focus on the intense and informative discussions of RPTs and reducing the influence of a large number of unresponded 1-level replies, we put forward our RAGCL method. The objective of RAGCL is to stress the importance of comments within RPTs that have intensive replies, while also focusing on root nodes by directing the aggregation of information from other nodes towards these roots, considering the wide structures of RPTs.

Method
We will present the design of RAGCL in this section.

Notation
The rumor detection task can be defined as a graph-level classification task. Specifically, we denote a labeled claim dataset as $C = \{c_1, c_2, \cdots, c_m\}$, where $c_i$ represents the $i$-th claim and $m$ represents the number of labeled claims. Each labeled claim $c = (y, G)$ consists of its ground-truth label $y \in \{N, R\}$ (i.e., Non-rumor or Rumor) or fine-grained label $y \in \{N, F, T, U\}$ (i.e., Non-rumor, False Rumor, True Rumor, Unverified Rumor) and its propagation structure $G = (V, E)$, where $V$ and $E$ represent the set of nodes (a source post and comments of the claim) and edges (the relations between pairs of replies or between the source post and a reply), respectively. The set of propagation structure graphs corresponding to all claims is $\mathcal{G} = \{G_1, G_2, \cdots, G_m\}$. The goal of the rumor detection task is to learn a classifier $f: \mathcal{G} \rightarrow Y$ ($Y = \{y_1, y_2, \cdots, y_m\}$) from dataset $C$.

Framework
From the aforementioned analysis, it is important to learn discriminative features from nodes with deep structures (e.g., the nodes in the box in Figure 2). These nodes and their corresponding edges possess a conspicuously higher importance compared to the nodes located outside the box. Based on this idea, we introduce RAGCL, an adaptive graph contrastive learning framework purposefully engineered for rumor detection. RAGCL assigns varying levels of importance to nodes and edges within a RPT based on a selected node centrality measure. Subsequently, varying probabilities of drop or mask, informed by these scores, are employed to adaptively generate two graph augmented views of the RPT, utilizing node drop, attribute mask, or edge drop operators. The contrastive loss is subsequently minimized to learn the tree's representation. A comprehensive illustration of the RAGCL process is presented in Figure 3.

| Statistic | Weibo | DRWeibo | Twitter15 | Twitter16 | UWeibo | UTwitter |
|---|---|---|---|---|---|---|
| language | zh | zh | en | en | zh | en |
| # claims | 4664 | 6037 | 1490 | 818 | 209549 | 204922 |
| # non-rumors | 2351 | 3185 | 374 | 205 | – | – |
| # false rumors | 2313 | 2852 | 370 | 205 | – | – |
| # true rumors | – | – | 372 | 207 | – | – |
| # unverified rumors | – | – | 374 | 201 | – | – |
| # avg reply | 803.5 | 61.8 | 50.2 | 49.1 | 50.5 | 82.5 |
| # avg 1-level reply | 522.9 (65%) | 48.1 (78%) | 35.5 (71%) | 31.6 (64%) | 36.4 (72%) | 48.5 (59%) |
| # avg 2-level reply | 169.3 (21%) | 11.0 (17%) | 5.9 (12%) | 6.0 (12%) | 10.2 (20%) | 21.5 (26%) |
| # avg deeper reply | 111.2 (14%) | 2.7 (5%) | 8.7 (17%) | 11.5 (24%) | 4.0 (8%) | 12.5 (15%) |
| # avg responded 1-level reply | 50.7 | 3.1 | 3.7 | 3.4 | 4.1 | 8.0 |
Table 1: Statistics of the datasets.

Figure 3: The framework of RAGCL.

Augmentation Principle
Node centrality is an index to measure the importance of nodes in a graph. There are three recommended node centrality measures used by RAGCL, including degree centrality (Shaw 1954), betweenness centrality (Freeman 1977) and PageRank centrality (Brin and Page 1998).
• Degree centrality takes the degree of nodes as the measure of node centrality. The idea is that a post with multiple replies is important in a RPT. RAGCL uses the node out-degree of the top-down graph of a RPT as the measure of degree centrality.
• Betweenness centrality calculates all shortest paths between any two nodes in a graph. A node becomes prominent in terms of betweenness centrality if a multitude of these paths transit through it. RAGCL utilizes either top-down or bottom-up graphs to ascertain betweenness centrality.
• PageRank centrality is commonly used in web page ranking. Its basic idea is that the importance of a page on the Internet depends on the quantity and quality of inbound links. RAGCL leverages the bottom-up graph of a RPT to compute PageRank centrality.
Given our prior analysis on the structural characteristics of RPTs, we have summarized the following three principles for assigning importance scores to nodes and edges.
• Principle 1: Given the pivotal role of source posts (Bian et al. 2020; Sun et al. 2022b), the root nodes of RPTs are exempt from the data augmentation procedure.
• Principle 2: Nodes and edges with deep replies within RPTs (referenced within the boxed portion of Figure 2) should be preserved to the greatest extent feasible.
• Principle 3: In the deep parts of RPTs, low-level nodes should be retained in data augmentation more than their deeper successor nodes, because the successor nodes are basically discussed around their parent nodes, so they should hold relatively lower importance.
Other node centrality measures, such as eigenvector centrality (Bonacich 1972), Katz centrality (Katz 1953), and closeness centrality (Sabidussi 1966), are deemed unsuitable for RPTs due to inherent characteristics which preclude adherence to the aforementioned principles. The node colors in Figure 2 show the magnitude of node centrality that should be obtained according to the above principles. Furthermore, to ensure compliance with Principle 2, RAGCL assigns the root node of a RPT the minimum value from among all node centralities within the graph, given its dense characteristic. Throughout the data augmentation process, the importance of an edge in RAGCL is gauged by the centrality of the edge's two constituting nodes. An excessively high root node centrality could artificially inflate the importance of edges connecting the root node with unresponded 1-level replies, thereby contravening Principle 2.
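The three recommended measures, together with the root re-assignment just described, can be computed with standard tooling. Below is a minimal sketch assuming a networkx DiGraph with edges directed from parent to child (top-down); the helper name is hypothetical:

```python
import networkx as nx

def rpt_centrality(G_td, root, measure="pagerank"):
    """Node centrality phi_c for a top-down RPT (edges parent -> child).

    Degree centrality uses the out-degree of the top-down graph; PageRank
    is computed on the bottom-up (reversed) graph; the root is re-assigned
    the minimum centrality in the graph, as described in the text.
    """
    if measure == "degree":
        phi = dict(G_td.out_degree())
    elif measure == "betweenness":
        phi = nx.betweenness_centrality(G_td)
    elif measure == "pagerank":
        phi = nx.pagerank(G_td.reverse())   # bottom-up graph
    else:
        raise ValueError(f"unsupported measure: {measure}")
    phi[root] = min(phi.values())           # avoid inflating root-edge scores
    return phi
```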
Adaptive Graph Augmentation
RAGCL conducts adaptive data augmentation according to node centrality, yielding two augmented views of the RPT. It primarily utilizes three unique data augmentation operators: node dropping, attribute masking, and edge dropping. During the training phase, two out of these three operators are selected. RAGCL employs node centrality to assign importance scores to nodes and edges, after which it computes the probability of dropping or masking for data augmentation.

Node Dropping
Consider a propagation graph $G$ of any given claim within dataset $C$. Given a node centrality measure $\varphi_c(\cdot): V \rightarrow \mathbb{R}^+$, where $V$ is the space where node $v$ is located, the final node centrality value of node $v$ is represented by $\varphi_c(v)$. Node dropping in RAGCL involves assigning a drop probability $p^n_v$ to each node $v$ and removing a portion of nodes (along with the edges connected to these nodes) from the node set $V$ in accordance with this probability to yield an augmented view. It is noteworthy that the root node is never dropped in this operation. The node importance score $w^n_v$ is set as the node centrality value, that is, $w^n_v = \varphi_c(v)$. Given that the value of node centrality might vary across several orders of magnitude, $s^n_v = \log w^n_v$ is set to alleviate the influence of densely connected nodes. The node drop probability is derived following the subsequent normalization procedure:

$$p^n_v = \frac{s^n_{\max} - s^n_v}{s^n_{\max} - u^n_s} \cdot p_n, \quad (1)$$

where $p_n$ is a hyperparameter governing the overall probability of node dropping, and $s^n_{\max}$ and $u^n_s$ represent the maximum and mean values of $s^n_v$, respectively.

Attribute Masking
Attribute masking in RAGCL is defined as substituting the feature vectors of a fraction of the nodes in $V$ with a zero vector. The root node is exempt from this operation. Attribute masking does not entail node removal; hence, edges connected to masked nodes are retained. The mask probability for a node $v$ is also $p^n_v$.

Edge Dropping
We adopt the top-down graph of RPTs in RAGCL. Edge dropping involves setting a drop probability $p^e_{uv}$ for each edge $(u, v)$, subsequently utilizing this probability to remove certain edges from the edge set $E$ to produce an augmented view. $p^e_{uv}$ should reflect the edge's importance, implying that the $p^e_{uv}$ of an essential edge should be lower than that of a less critical edge. Note that the centrality of a root node is assigned the minimum centrality value among all nodes in a graph. The importance score $w^e_{uv}$ is defined as the mean of the centralities of its two connecting nodes:

$$w^e_{uv} = (\varphi_c(u) + \varphi_c(v)) / 2. \quad (2)$$

The drop probability is then derived based on the importance score of edge $(u, v)$. Analogously, we set $s^e_{uv} = \log w^e_{uv}$ to mitigate the impact of densely connected nodes. The probability is then ascertained similarly as follows:

$$p^e_{uv} = \frac{s^e_{\max} - s^e_{uv}}{s^e_{\max} - u^e_s} \cdot p_e, \quad (3)$$

where $p_e$ is a hyperparameter utilized to regulate the overall probability of edge dropping, and $s^e_{\max}$ and $u^e_s$ represent the maximum and mean values of $s^e_{uv}$, respectively.
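Eqs. 1-3 translate directly into code. The sketch below follows the formulas above; the epsilon inside the logarithm and the clipping of probabilities to [0, 1] are implementation details we assume, since the paper does not specify them:

```python
import math
import random

def drop_probs(phi, root, p_n):
    """Node drop/mask probabilities following Eq. 1.

    phi: node -> centrality phi_c(v); s_v = log(w_v) damps large centralities.
    The root is exempt from augmentation (Principle 1).
    """
    s = {v: math.log(w + 1e-8) for v, w in phi.items()}  # epsilon guards log(0)
    s_max = max(s.values())
    s_mean = sum(s.values()) / len(s)
    denom = (s_max - s_mean) or 1e-8
    return {v: 0.0 if v == root else min((s_max - s[v]) / denom * p_n, 1.0)
            for v in s}

def edge_drop_probs(edges, phi, p_e):
    """Edge drop probabilities following Eqs. 2-3 (mean endpoint centrality)."""
    w = {(u, v): (phi[u] + phi[v]) / 2 for u, v in edges}
    s = {e: math.log(x + 1e-8) for e, x in w.items()}
    s_max = max(s.values())
    s_mean = sum(s.values()) / len(s)
    denom = (s_max - s_mean) or 1e-8
    return {e: min((s_max - s[e]) / denom * p_e, 1.0) for e in s}

def node_drop_view(nodes, edges, probs):
    """Sample one augmented view: drop nodes (and their incident edges)."""
    kept = {v for v in nodes if random.random() >= probs[v]}
    return kept, [(u, v) for u, v in edges if u in kept and v in kept]
```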
Contrastive Loss Optimization
The data augmentation of a propagation graph $G$ yields two augmented views, namely $G_1$ and $G_2$. These views are processed through a GCN (Kipf and Welling 2016) encoder to obtain two representations: $h_{G_1}$ and $h_{G_2}$. Within RAGCL, the unsupervised contrastive loss on the graph set $\mathcal{G}$, corresponding to the dataset $C$, is formulated as follows:

$$\mathcal{L}_{unsup} = -\mathbb{E}_P[\mathrm{sim}(h_{G_1}, h_{G_2})] + \mathbb{E}_P[\log(\mathbb{E}_{\tilde{P}} \exp(\mathrm{sim}(h_{G_1}, h'_{G_2}))) + \log(\mathbb{E}_{\tilde{P}} \exp(\mathrm{sim}(h'_{G_1}, h_{G_2})))], \quad (4)$$

where $P$ denotes the distribution adhered to by $\mathcal{G}$; $G$ represents an input sample drawn from $P$; $G'$ is a negative sample drawn from $\tilde{P} = P$; and $\mathrm{sim}(x_1, x_2) = x_1^T x_2 / (\|x_1\| \|x_2\|)$ is the cosine similarity. RAGCL employs $\mathcal{L}_{unsup}$ as the regularization term of the supervised loss $\mathcal{L}_{sup}$ (calculated by $h_G$), and optimizes the following loss function during the training phase:

$$\mathcal{L} = \mathcal{L}_{sup} + \lambda \cdot \mathcal{L}_{unsup}, \quad (5)$$

where $\lambda$ is a tunable hyperparameter.
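For illustration, Eq. 4 can be estimated in-batch. The following PyTorch sketch assumes that the negatives $G'$ are the other graphs of the same mini-batch; this sampling choice is our assumption, as the paper only states $G' \sim \tilde{P} = P$:

```python
import math
import torch
import torch.nn.functional as F

def ragcl_unsup_loss(h1, h2):
    """In-batch estimate of the contrastive loss in Eq. 4.

    h1, h2: [B, d] GCN representations of the two augmented views of a
    batch of B >= 2 propagation graphs.
    """
    z1 = F.normalize(h1, dim=1)
    z2 = F.normalize(h2, dim=1)
    sim = z1 @ z2.t()                      # sim[i, j] = cosine sim(h1_i, h2_j)
    pos = sim.diag().mean()                # E_P[sim(h_G1, h_G2)]
    B = sim.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=sim.device)
    # log E_{G'} exp(sim(h_G1, h'_G2)) over the B-1 in-batch negatives,
    # plus the symmetric term with the roles of the two views swapped.
    neg12 = torch.logsumexp(sim.masked_fill(eye, float("-inf")), dim=1)
    neg21 = torch.logsumexp(sim.t().masked_fill(eye, float("-inf")), dim=1)
    offset = math.log(B - 1)               # turns the sum into a mean
    return -pos + (neg12 - offset).mean() + (neg21 - offset).mean()
```

During training this term is added to the supervised classification loss with weight $\lambda$, as in Eq. 5.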
Experiments
In this section, we present the main experimental results. Experiments on the effects of hyperparameters are in the supplementary material.

Experimental Configuration
We conducted experiments on four real-world benchmark datasets, Weibo, DRWeibo, Twitter15 and Twitter16, to evaluate RAGCL's performance. Weibo and DRWeibo are Chinese binary classification datasets, and Twitter15 and Twitter16 are English multi-class classification datasets. Table 1 shows the statistics of the datasets. We make comparisons with the following baselines. PLAN (Khoo et al. 2020) is based on the Transformer; its StA-PLAN version incorporates RPT structural information. BiGCN (Bian et al. 2020) leverages two GCN encoders, a top-down and a bottom-up one, and a root node feature enhancement strategy to classify rumors. UDGCN is a variant of BiGCN; it takes the undirected graph of a RPT as the model input, and only one GCN encoder that applies the root node feature enhancement strategy is used. GACL (Sun et al. 2022b) performs rumor classification based on contrastive learning and adversarial training. DDGCN (Sun et al. 2022a) can model multiple types of information in one unified framework. The experiment setting details will be explained in the supplementary material. The experimental results are the average results of 10 random splits of the datasets. We report the best performance of RAGCL that can be achieved with different node centrality and data augmentation combinations. The source code of RAGCL is available at https://github.com/CcQunResearch/RAGCL.

Results and Discussion
Results in Tables 2 and 3 show that RAGCL outperforms the baselines on all datasets. PLAN performs relatively poorly on all datasets and consumes more GPU resources due to its Transformer architecture, which points to the necessity of adopting a GNN architecture. BiGCN is a typical model built on the deep structure of RPTs, which presupposes that the information flow in RPTs presents as a top-down propagation and a bottom-up dispersion process. However, our research findings indicate that the RPT actually manifests as a wide structure. This suggests that, for tree structures like RPTs, in addition to the depth-directional information flow, the imbalanced distribution of information in the width direction is also an important characteristic, which is currently overlooked by existing techniques. Although GACL uses BERT (Devlin et al. 2018) to extract initial feature vectors, it does not improve significantly over other baselines. This may suggest that rumor detection models are insensitive to the way initial features are extracted, and what is more crucial is the high-level model's ability to learn the interactions between nodes. Additionally, GACL utilizes supervised contrastive learning to learn the claim representation, while RAGCL, which adopts an unsupervised contrastive loss, also achieves superior performance. The application of the unsupervised loss allows the model to learn good representations without relying on labels. This suggests that it is feasible to use RAGCL to further enhance the rumor detection capability of the model by pretraining on large-scale, unlabeled datasets from social media platforms (such as UWeibo and UTwitter). We leave this for future research.

| Method | Class | Weibo Acc. | Prec. | Rec. | F1 | DRWeibo Acc. | Prec. | Rec. | F1 |
|---|---|---|---|---|---|---|---|---|---|
| PLAN | R | 0.915±0.007 | 0.908 | 0.923 | 0.915 | 0.788±0.005 | 0.786 | 0.760 | 0.771 |
| | N | | 0.923 | 0.907 | 0.914 | | 0.793 | 0.813 | 0.802 |
| BiGCN | R | 0.942±0.008 | 0.919 | 0.968 | 0.942 | 0.866±0.010 | 0.869 | 0.849 | 0.858 |
| | N | | 0.967 | 0.918 | 0.942 | | 0.863 | 0.882 | 0.872 |
| UDGCN | R | 0.940±0.007 | 0.914 | 0.971 | 0.942 | 0.861±0.010 | 0.839 | 0.871 | 0.855 |
| | N | | 0.969 | 0.910 | 0.938 | | 0.882 | 0.852 | 0.867 |
| GACL | R | 0.938±0.006 | 0.936 | 0.940 | 0.938 | 0.870±0.009 | 0.865 | 0.856 | 0.860 |
| | N | | 0.940 | 0.936 | 0.938 | | 0.874 | 0.882 | 0.878 |
| DDGCN | R | 0.948±0.004 | 0.924 | 0.979 | 0.951 | 0.878±0.005 | 0.872 | 0.864 | 0.868 |
| | N | | 0.976 | 0.917 | 0.946 | | 0.883 | 0.891 | 0.887 |
| RAGCL | R | 0.962±0.005 | 0.956 | 0.968 | 0.962 | 0.894±0.004 | 0.893 | 0.877 | 0.885 |
| | N | | 0.969 | 0.957 | 0.963 | | 0.895 | 0.909 | 0.902 |
Table 2: Experimental results on the Weibo and DRWeibo datasets.

| Method | Twitter15 Acc. | N F1 | F F1 | T F1 | U F1 | Twitter16 Acc. | N F1 | F F1 | T F1 | U F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| PLAN | 0.819±0.004 | 0.839 | 0.854 | 0.817 | 0.759 | 0.843±0.005 | 0.855 | 0.851 | 0.858 | 0.805 |
| BiGCN | 0.844±0.005 | 0.856 | 0.844 | 0.863 | 0.809 | 0.880±0.009 | 0.793 | 0.912 | 0.947 | 0.849 |
| UDGCN | 0.840±0.005 | 0.848 | 0.847 | 0.864 | 0.799 | 0.875±0.009 | 0.783 | 0.902 | 0.954 | 0.839 |
| GACL | 0.846±0.007 | 0.859 | 0.845 | 0.866 | 0.812 | 0.891±0.004 | 0.802 | 0.929 | 0.945 | 0.872 |
| DDGCN | 0.835±0.006 | 0.840 | 0.850 | 0.856 | 0.791 | 0.893±0.004 | 0.807 | 0.931 | 0.946 | 0.871 |
| RAGCL | 0.867±0.005 | 0.891 | 0.867 | 0.869 | 0.835 | 0.905±0.003 | 0.836 | 0.923 | 0.963 | 0.882 |
Table 3: Experimental results on the Twitter15 and Twitter16 datasets.

Ablation Study
We conducted a series of ablation experiments to verify the influence of different factors on the model performance.

Unresponded 1-level Replies
In order to validate the impact of unresponded 1-level replies within RPTs, we conducted experiments on the four datasets depicted in Figure 4. We eliminated α% of unresponded 1-level replies in each RPT, subsequently utilizing BiGCN (Bian et al. 2020) for classification. With the increase of α, it is observed that the model performance remains steady, or even improves to some extent. This indicates that these unresponded 1-level replies, as we previously conjectured, have less significance or may even serve as noise within the rumor classification process; thus, RAGCL is justified in dropping them.
Figure 4: The influence of unresponded 1-level replies.

Data Augmentation Combinations
Table 4 presents the impact of different data augmentation combinations, where we report the accuracy for each dataset. The experimental results show that using attribute masking in Chinese datasets (Weibo and DRWeibo) will reduce the model performance. For English datasets, various data augmentation combinations have minimal effect on the results. Different data augmentation combinations all achieve significant performance gains over using only GCN for supervised classification without applying contrastive loss. Furthermore, the results also indicate that adaptive data augmentation outperforms random data augmentation, providing further validation of the reliability of our theory.

| Aug1 | Aug2 | Weibo | DRWeibo | Twitter15 | Twitter16 |
|---|---|---|---|---|---|
| – | – | 0.927 | 0.844 | 0.822 | 0.846 |
| Node Dropping (random) | Attr Masking (random) | 0.940 | 0.861 | 0.837 | 0.865 |
| Node Dropping | Attr Masking | 0.953 | 0.892 | 0.867 | 0.896 |
| Node Dropping | Edge Dropping | 0.962 | 0.894 | 0.864 | 0.902 |
| Attr Masking | Edge Dropping | 0.952 | 0.888 | 0.864 | 0.905 |
Table 4: The influence of combinations of data augmentation.

Node Centrality Measures
We conducted the experiments in Table 5 to explore the influence of different node centrality measures. We report the accuracy that RAGCL achieves with different node centrality measures and the average time cost (in seconds) to calculate each RPT centrality. Degree centrality can be calculated rapidly, thus yielding efficient determination of node centrality within sizable datasets. However, its exclusive focus on edge number fails to satisfy Principle 3, thereby highlighting a limitation of degree centrality. For instance, a parent node and one of its children possessing identical reply counts will be assigned the same centrality. In fact, degree centrality also achieves relatively poor performance. Betweenness centrality aligns well with the three principles. For RPTs, the betweenness centrality is a very intuitive index to measure the importance of nodes. A node with numerous successor nodes will have many shortest paths traversing through it, leading to a correspondingly elevated betweenness centrality. However, the computation of betweenness centrality is more complex and time-intensive than the other measures. PageRank centrality, on the other hand, not only aligns well with the basic principles but also benefits from a relatively swift calculation process, making it more conducive to RAGCL's training phase. We also examined the effect of eigenvector centrality, Katz centrality, and closeness centrality to verify the validity of our three guiding principles. Given their individual characteristics, these measures fail to meet Principles 2 and 3, resulting in subpar performance. Additionally, their computational complexity is relatively high. Therefore, we do not recommend using these centrality measures in RAGCL.

| Centrality | T(n) | Weibo Acc. | Time | Twitter15 Acc. | Time |
|---|---|---|---|---|---|
| Degree | O(1) | 0.953 | 0.82 | 0.860 | 0.13 |
| Betweenness | O(n³) | 0.958 | 8.12 | 0.867 | 1.34 |
| PageRank | O(n) | 0.962 | 1.37 | 0.865 | 0.22 |
| Eigenvector | O(n³) | 0.939 | 9.23 | 0.850 | 1.62 |
| Katz | O(n³) | 0.943 | 9.37 | 0.849 | 1.67 |
| Closeness | O(n³) | 0.935 | 7.72 | 0.841 | 1.44 |
Table 5: The influence of node centrality measures.

Graph Direction
RAGCL is compatible with top-down and bottom-up directed graphs as well as undirected graphs. We investigated the impact of different types of graphs in Figure 5. The results show that using undirected graphs leads to a performance decline. This could be due to the fact that during the forward propagation process of GNNs, densely connected nodes at the root node will see each other in their neighboring field of view. These nodes mutually aggregate each other's information, ultimately resulting in a loss of node feature uniqueness, causing an over-smoothing problem (Li, Han, and Wu 2018; Cai and Wang 2020; Oono and Suzuki 2019). On the other hand, top-down and bottom-up directed graphs are able to effectively block excessive information flow between nodes at the root node.
Figure 5: The impact of information flow direction. (a) Weibo; (b) Twitter15.
Conclusion
This study introduces RAGCL, an adaptive graph contrastive learning method specifically for rumor detection. By taking into consideration the structural characteristics of RPTs, we propose three adaptive data augmentation methods based on node centrality and provide guiding principles for designing these methods. Our experimental results demonstrate that RAGCL surpasses current SOTA methods on all datasets, showcasing its superior performance.

Acknowledgments
The authors would like to thank all the anonymous reviewers for their help and insightful comments. This work is supported in part by the National Key R&D Program of China (2018AAA0100302) and the National Natural Science Foundation of China (61876016).

References
Bian, T.; Xiao, X.; Xu, T.; Zhao, P.; Huang, W.; Rong, Y.; and Huang, J. 2020. Rumor detection on social media with bi-directional graph convolutional networks. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 549–556. Birunda, S. S.; and Devi, R. K. 2021. A Novel Score-Based Multi-Source Fake News Detection using Gradient Boosting Algorithm. In 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), 406–414. IEEE. Bonacich, P. 1972. Factoring and weighting approaches to status scores and clique identification.
Journal of mathematical sociology, 2(1): 113–120. Brin, S.; and Page, L. 1998. The anatomy of a large-scale hypertextual web search engine. Computer networks and ISDN systems, 30(1-7): 107–117. Cai, C.; and Wang, Y. 2020. A note on over-smoothing for graph neural networks. arXiv preprint arXiv:2006.13318. Castillo, C.; Mendoza, M.; and Poblete, B. 2011. Information credibility on twitter. In Proceedings of the 20th international conference on World wide web, 675–684. Chen, C.; Ye, W.; Zuo, Y.; Zheng, C.; and Ong, S. P. 2019. Graph networks as a universal machine learning framework for molecules and crystals. Chemistry of Materials, 31(9): 3564–3572. Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, 1597–1607. PMLR. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Fang, H.; Wang, S.; Zhou, M.; Ding, J.; and Xie, P. 2020. Cert: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766. Freeman, L. C. 1977. A set of measures of centrality based on betweenness. Sociometry, 35–41. Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural message passing for quantum chemistry. In International conference on machine learning, 1263–1272. PMLR. Giorgi, J.; Nitski, O.; Wang, B.; and Bader, G. 2020. Declutr: Deep contrastive learning for unsupervised textual representations. arXiv preprint arXiv:2006.03659. Hassani, K.; and Khasahmadi, A. H. 2020. Contrastive multi-view representation learning on graphs. In International Conference on Machine Learning, 4116–4126. PMLR. He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9729–9738. Jin, Z.; Cao, J.; Guo, H.; Zhang, Y.; and Luo, J. 2017. Multimodal fusion with recurrent neural networks for rumor detection on microblogs. In Proceedings of the 25th ACM international conference on Multimedia, 795–816. Karimi, H.; Roy, P.; Saba-Sadiya, S.; and Tang, J. 2018. Multi-source multi-class fake news detection. In Proceedings of the 27th international conference on computational linguistics, 1546–1557. Katz, L. 1953. A new status index derived from sociometric analysis. Psychometrika, 18(1): 39–43. Khoo, L. M. S.; Chieu, H. L.; Qian, Z.; and Jiang, J. 2020. Interpretable rumor detection in microblogs by attending to user interactions. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 8783–8790. Kipf, T.; Fetaya, E.; Wang, K.-C.; Welling, M.; and Zemel, R. 2018. Neural relational inference for interacting systems. In International Conference on Machine Learning, 2688– 2697. PMLR. Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Kwon, S.; Cha, M.; Jung, K.; Chen, W.; and Wang, Y. 2013. Prominent features of rumor propagation in online social media. In 2013 IEEE 13th international conference on data mining, 1103–1108. IEEE. Li, Q.; Han, Z.; and Wu, X.-M. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI conference on artificial intelligence, volume 32. Liu, Y.; and Wu, Y.-F. 2018. 
Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In Proceedings of the AAAI conference on artificial intelligence, volume 32. Ma, J.; Gao, W.; Mitra, P.; Kwon, S.; Jansen, B. J.; Wong, K.-F.; and Cha, M. 2016. Detecting rumors from microblogs with recurrent neural networks. Ma, J.; Gao, W.; and Wong, K.-F. 2017. Detect rumors in microblog posts using propagation structure via kernel learning. Association for Computational Linguistics. Ma, J.; Gao, W.; and Wong, K.-F. 2018. Rumor detection on twitter with tree-structured recursive neural networks. Association for Computational Linguistics. Oono, K.; and Suzuki, T. 2019. Graph neural networks exponentially lose expressive power for node classification. arXiv preprint arXiv:1905.10947. Sabidussi, G. 1966. The centrality index of a graph. Psychometrika, 31(4): 581–603. Shaw, M. E. 1954. Group structure and the behavior of individuals in small groups. The Journal of psychology, 38(1): 139–149. Shi, J.; Liang, C.; Hou, L.; Li, J.; Liu, Z.; and Zhang, H. 2019. Deepchannel: Salience estimation by contrastive learning for extractive document summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 6999–7006. Sun, F.-Y.; Hoffmann, J.; Verma, V.; and Tang, J. 2019. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv preprint arXiv:1908.01000. Sun, M.; Zhang, X.; Zheng, J.; and Ma, G. 2022a. DDGCN: Dual Dynamic Graph Convolutional Networks for Rumor Detection on Social Media. Sun, T.; Qian, Z.; Dong, S.; Li, P.; and Zhu, Q. 2022b. Rumor Detection on Social Media with Graph Adversarial Contrastive Learning. In Proceedings of the ACM Web Conference 2022, 2789–2797. Velickovic, P.; Fedus, W.; Hamilton, W. L.; Liò, P.; Bengio, Y.; and Hjelm, R. D. 2019. Deep Graph Infomax. ICLR (Poster), 2(3): 4. Wang, Y.; Ma, F.; Jin, Z.; Yuan, Y.; Xun, G.; Jha, K.; Su, L.; and Gao, J. 2018. Eann: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th acm sigkdd international conference on knowledge discovery & data mining, 849–857. Wei, L.; Hu, D.; Zhou, W.; Yue, Z.; and Hu, S. 2021. Towards propagation uncertainty: Edge-enhanced bayesian graph convolutional networks for rumor detection. arXiv preprint arXiv:2107.11934. Xie, T.; and Grossman, J. C. 2018. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Physical review letters, 120(14): 145301. Yin, Y.; Wang, Q.; Huang, S.; Xiong, H.; and Zhang, X. 2022. Autogcl: Automated graph contrastive learning via learnable view generators. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 8892–8900. You, Y.; Chen, T.; Shen, Y.; and Wang, Z. 2021. Graph contrastive learning automated. In International Conference on Machine Learning, 12121–12132. PMLR. You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; and Shen, Y. 2020. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems, 33: 5812–5823. Yu, F.; Liu, Q.; Wu, S.; Wang, L.; Tan, T.; et al. 2017. A Convolutional Approach for Misinformation Identification. In IJCAI, 3901–3907. Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2021. Graph contrastive learning with adaptive augmentation. In Proceedings of the Web Conference 2021, 2069–2080.
Learning Generalized Segmentation for Foggy-Scenes by Bi-directional Wavelet Guidance

Qi Bi, Shaodi You, Theo Gevers
Computer Vision Research Group, University of Amsterdam, Netherlands
{q.bi, s.you, th.gevers}@uva.nl
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Learning scene semantics that can be well generalized to foggy conditions is important for safety-crucial applications such as autonomous driving. Existing methods need both annotated clear images and foggy images to train a curriculum domain adaptation model. Unfortunately, these methods can only generalize to the target foggy domain that has been seen in the training stage, but foggy domains vary a lot in both urban-scene styles and fog styles. In this paper, we propose to learn scene segmentation well generalized to foggy-scenes under the domain generalization setting, which does not involve any foggy images in the training stage and can generalize to arbitrary unseen foggy scenes. We argue that an ideal segmentation model that can be well generalized to foggy-scenes needs to simultaneously enhance the content, de-correlate the urban-scene style and de-correlate the fog style. As the content (e.g., scene semantics) rests more in low-frequency features while the styles of urban-scene and fog rest more in high-frequency features, we propose a novel bi-directional wavelet guidance (BWG) mechanism to realize the above three objectives in a divide-and-conquer manner. With the aid of the Haar wavelet transformation, the low-frequency component is concentrated on the content enhancement self-attention, while the high-frequency component is shifted to the style and fog self-attention for de-correlation purposes. It is integrated into existing mask-level Transformer segmentation pipelines in a learnable fashion. Large-scale experiments are conducted on four foggy-scene segmentation datasets under a variety of interesting settings. The proposed method significantly outperforms existing directly-supervised, curriculum domain adaptation and domain generalization segmentation methods. Source code is available at https://github.com/BiQiWHU/BWG.

Introduction
Existing foggy-scene semantic segmentation methods usually follow the curriculum domain adaptation paradigm, where both well-annotated clear images and foggy images are involved in the training stage (Truong et al. 2021; Guo et al. 2021; Zhang et al. 2021), so that the scene representation can be progressively adapted to the target foggy domain that has been seen in the training stage (Tsai et al. 2018).
Figure 1: Foggy image (a) segmentation results by: (b) existing curriculum domain adaptation paradigm (e.g., CuDA-Net (Ma et al. 2022)); (c) generic domain generalization paradigm (e.g., SAW (Peng et al. 2022)); (d) our proposed generalization for foggy-scene method BWG.
Nonetheless, these techniques are solely tailored to adapt to foggy scenes within the target domain, which imposes a significant limitation on practical road applications. In real-world scenarios, the need for generalizing to a wide array of unforeseen foggy scenes is important. In this paper, we shift the focus to the domain generalization setting, which does not involve any foggy target domain in the training stage. Ideally, such a scene-segmentation model is able to generalize to any unseen foggy target domains.
Predicting reliable scene-segmentation under the domain generalization setting is plausible, given the effectiveness demonstrated by various domain-generalized segmentation methods in recent years. These methods usually assume that the content is stable while the urban style changes greatly (Choi et al. 2021; Peng et al. 2022; Bi, You, and Gevers 2023a). However, the challenge becomes more intricate when attempting to generalize to foggy scenes due to the complex nature of image conditions. This complexity can pose difficulties for existing domain-generalized scene segmentation methods (Fig. 1c). Specifically, not only the urban-scene styles but also the foggy styles vary greatly (Ma et al. 2022). Besides, the existence of fog occludes the scene objects and negatively impairs the content representation (Dai et al. 2020; Wang et al. 2023).
We focus on three key objectives to learn a scene segmentation that can be well generalized to foggy scenes (Ma et al. 2022): 1) decouple the urban-style variation, 2) decouple the foggy style variation, and 3) enhance the content representation impaired by fog occlusion. Addressing each of these three objectives through a divide-and-conquer approach is straightforward. The pivotal concern lies in devising a viable solution to distinguish urban scene styles and foggy styles from the core urban scene content.
Figure 2: (a) Foggy scenes from different domains (Foggy Driving, Foggy Zurich, ACDC-fog, Foggy CityScapes). (b) Visualization of low- and high-frequency space, which rests in more content and style information, respectively. (c) t-SNE visualization of low- and high-frequency feature space.
In this paper, we propose to separate these components in a foggy-scene from the frequency domain by the Haar wavelet transformation (Porwik and Lisowska 2004). Handling the style and content separately in the frequency domain has been recognized as effective (Yoo et al. 2019; Li et al. 2017). When considering an image representation, the content (such as scene semantics) tends to reside predominantly in the low-frequency components, whereas the style (such as urban landscape, lighting, and weather) is more prominent in the high-frequency components (Bi, You, and Gevers 2023b; Peng et al. 2022; Tjio et al. 2022).
Technically, we propose a bi-directional wavelet guidance (BWG) mechanism for this task (Fig. 1d). First, we represent the content, fog style and urban scene style by three independent self-attention modules. For each module, the Haar wavelet transformation allows us to decompose the high-frequency component from the low-frequency component. Then, we concentrate all the low-frequency components on the content enhancement module, and shift all the high-frequency components to the fog-style and urban-style modules. Afterwards, both high-frequency representations are implemented with instance normalization to decouple the impact of urban-style and fog-style variation.
Extensive experiments are conducted to generalize to foggy scenes. Using CityScapes (Cordts et al. 2016) as the source domain, the proposed method is compared with existing domain generalized and domain adaptation methods on four foggy benchmarks, namely, ACDC-fog (Sakaridis, Dai, and Van Gool 2021), Foggy-Zurich (Sakaridis et al. 2018), Foggy-Driving (Sakaridis, Dai, and Van Gool 2018) and Foggy-CityScapes (Sakaridis, Dai, and Van Gool 2018).
Besides, some state-of-the-art directly-supervised and foundation model based segmentation methods (Kirillov et al. 2023; Wang et al. 2023) are also compared for reference. Rigorous ablation studies and generalization to other adverse weather conditions are also validated.
Our contribution can be summarized as follows.
• We propose to learn segmentation generalizable to foggy scenes under the domain generalization setting. It is more practical, general and applicable to real-world scenarios than prior curriculum domain adaptation works.
• We propose a bi-directional wavelet guided self-attention (BWG) mechanism. It handles the content enhancement, urban-style de-correlation and fog de-correlation in a divide-and-conquer manner.
• The proposed BWG is integrated into the mask-level Transformer segmentation models in a learnable fashion.
• The proposed BWG outperforms existing state-of-the-art domain generalized segmentation methods by up to 11.8% mIoU on Foggy Zurich and curriculum domain adaptation methods by up to 16.7% mIoU on ACDC-fog.

Related Work
Foggy-scene Semantic Segmentation has been extensively studied. Existing works tackle this problem under the paradigm of curriculum domain adaptation, which uses the clear images and foggy images as source domain and target domain, respectively. Some typical works include AdSegNet (Tsai et al. 2018), ADVENT (Vu et al. 2019), DISE (Chang et al. 2019), CCM (Li et al. 2020), SAC (Araslanov and Roth 2021), ProDA (Zhang et al. 2021), DMLC (Guo et al. 2021), DACS (Truong et al. 2021), CMAda3+ (Dai et al. 2020) and CuDA-Net (Ma et al. 2022). On the other hand, although some recent unsupervised domain adaptation techniques (e.g. DAFormer (Hoyer, Dai, and Van Gool 2022), Refign-DAFormer (Brüggemann et al. 2023)) have been proposed, they are not specially designed for foggy scenes.
Fog Removal enhances visibility. Earlier de-fog works model the degraded image as a combination between the background image and a weather effect layer (Li, Cheong, and Tan 2019; Li, Tan, and Cheong 2020). More recent works follow the all-in-one paradigm (Valanarasu, Yasarla, and Patel 2022; Yang et al. 2023). However, semantic segmentation on de-fogged images still shows a significantly inferior performance compared with the curriculum domain adaptation methods (Dai et al. 2020; Ma et al. 2022).
Domain Generalized Semantic Segmentation (Pan et al. 2019; Choi et al. 2021; Peng et al. 2022; Huang et al. 2023; Tjio et al. 2022; Lee et al. 2022; Ding et al. 2023; Bi, You, and Gevers 2023b) is more challenging than conventional semantic segmentation (Pan et al. 2022; Ji et al. 2021; Li et al. 2021; Ji et al. 2022), as it focuses on the generalization ability of a segmentation model on unseen target domains. These methods usually assume the content is stable and the domain gap is caused by the style variation of the urban landscape. However, this assumption does not fully describe the complexity of foggy-scene formulation. In addition to the style variation caused by the urban landscape, the foggy style also varies a lot. More importantly, the fog poses severe occlusion, which harms the completeness of content information.

Preliminary
Generalized Segmentation for Foggy Scenes
Given clean scenes as source domain $\mathcal{S}$, and foggy scenes as an unseen target domain $\mathcal{T}$.
Given a semantic segmentation model parameterized by $\theta$ and the segmentation loss $\mathcal{L}_{seg}$, generalized segmentation for foggy scenes can be formulated as

$$\min_\theta \sup_{\mathcal{T}: D(\mathcal{S}, \mathcal{T}) \le \rho} \mathbb{E}_{\mathcal{T}}[\mathcal{L}_{seg}(\theta; \mathcal{T})], \quad (1)$$

where $D(\mathcal{S}, \mathcal{T})$ denotes the distance between the clean-scene source domain $\mathcal{S}$ and the foggy-scene target domain $\mathcal{T}$, and $\rho$ denotes the constraint threshold.

Theoretical Analysis from Frequency Domain
Handling the style and content separately in the frequency domain has been recognized as effective (Yoo et al. 2019; Li et al. 2017). We analyze the low-frequency and high-frequency components (Fig. 2b) from different foggy target domains (Fig. 2a). After transforming the low- and high-frequency components into the spatial space, the t-SNE visualization shows that the low-frequency features are more robust to handle style variations than high-frequency features. Cross-domain samples are more uniformly distributed when using low-frequency features (Fig. 2c).

Haar Wavelet Transformation
Haar wavelet pooling (Porwik and Lisowska 2004) enables the separation of the low-frequency component from the high-frequency components. It has four kernels, namely, $LL^T$, $LH^T$, $HL^T$ and $HH^T$, given by

$$L^T = \frac{1}{\sqrt{2}}[1 \;\; 1], \quad H^T = \frac{1}{\sqrt{2}}[-1 \;\; 1]. \quad (2)$$

The low-frequency component LL preserves more content information (e.g. scene semantics) for foggy scenes. Instead, the high-frequency components LH, HL and HH contain more style information (e.g. urban landscape, foggy density) for foggy scenes.

Difference from Existing Pipelines
Figure 3: Difference of foggy-scene segmentation pipelines between: (a) existing curriculum domain adaptation setting; (b) generic domain generalization setting; and (c) the proposed generalized segmentation to foggy-scene setting.
Fig. 3a summarizes the pipeline of existing foggy-scene segmentation methods under the domain adaptation setting. Both the clear source domain and the fog target domain are involved in training. Fig. 3b outlines the workflow of generic domain generalized segmentation. During training, only the clear source domain is utilized. While these methods demonstrate the ability to generalize to diverse, unseen target domains, their emphasis lies primarily in decoupling urban styles. They are not explicitly tailored to represent foggy scenes. Fig. 3c summarizes the proposed pipeline, which intends to learn generalized scene-segmentation for foggy-scenes. It only involves the clear source domain in the training stage. The proposed framework implements urban style decoupling, content enhancement and foggy style decoupling.

Methodology
Triplet Self-attention Representing
Three key objectives to learn segmentation that can be well generalized to foggy-scenes are content enhancement, fog-style decoupling and urban-style decoupling. It is intuitive to realize these objectives in a divide-and-conquer manner. So, we use three self-attention modules to represent the content, fog style and urban style, respectively. The self-attention mechanism is adopted in our framework not only because of its strong representation ability and its long-range dependency mining, but also because it can be seamlessly integrated into Transformer segmentation backbones.
Before our triplet self-attention representing, we use mask attention to encode the foggy-scene features from the backbone. Compared with conventional pixel-level segmentation methods, mask attention based segmentation (Cheng et al. 2022; Cheng, Schwing, and Kirillov 2021) has stronger scene representation. Given the image feature $F_l \in \mathbb{R}^{(W_l \cdot H_l) \times C_F}$ to input into the $l$-th layer of a Transformer decoder, its key, value, and query counterparts $K_l \in \mathbb{R}^{(W_l \cdot H_l) \times C}$, $V_l \in \mathbb{R}^{(W_l \cdot H_l) \times C}$ and $Q_l \in \mathbb{R}^{N \times C}$ can be computed by linear transformations $f_K$, $f_V$ and $f_Q$, respectively. Then, the mask attention computes the features $X_l \in \mathbb{R}^{N \times C}$, given by

$$X_l = \mathrm{softmax}(\mathcal{M}_{l-1} + Q_l K_l^T) V_l + X_{l-1}, \quad (3)$$

where $\mathcal{M}_{l-1} \in \{0, 1\}^{N \times H_l W_l}$ is a binary mask attention matrix from the $(l-1)$-th layer, with a threshold of 0.5. $\mathcal{M}_0$ is binarized and resized from $X_0$. The mask can highlight the foreground regions and suppress the background of an image, which has been reported to be effective to enhance the feature representation for scene segmentation (Cheng et al. 2022; Cheng, Schwing, and Kirillov 2021). Then, the learnt mask query $X_l$ is fed into three parallel self-attention components $\mathcal{S}$, $\mathcal{C}$ and $\mathcal{F}$ to decouple the urban-style, enhance the content and decouple the fog-style. The output is denoted as $X^s_l$, $X^c_l$ and $X^f_l$, respectively.
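Eq. 3 is a single matrix operation per decoder layer. The following is a minimal PyTorch sketch of it, assuming f_Q, f_K, f_V are torch.nn.Linear layers; note that the paper writes $\mathcal{M}_{l-1}$ as a binary matrix added before the softmax, whereas in practice (as in Mask2Former) masked-out positions are pushed to a large negative value, which is the convention adopted here:

```python
import torch

def mask_attention(X_prev, F_img, M_prev, f_Q, f_K, f_V):
    """Masked attention of Eq. 3 for one Transformer decoder layer.

    X_prev: [N, C] query features X_{l-1}; F_img: [HW, C_F] image
    features F_l; M_prev: [N, HW] binary attention mask M_{l-1} (1 = attend).
    """
    Q, K, V = f_Q(X_prev), f_K(F_img), f_V(F_img)   # [N,C], [HW,C], [HW,C]
    # Binary mask -> additive bias: 0 where attended, a large negative
    # value where masked out, so those positions vanish after the softmax.
    bias = (M_prev - 1.0) * 1e9                      # [N, HW]
    A = torch.softmax(bias + Q @ K.t(), dim=-1)      # attention weights
    return A @ V + X_prev                            # residual, as in Eq. 3
```

The resulting X_l is thresholded at 0.5 to form the mask for the next layer, and is fed to the three parallel self-attention branches described above.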
Before our triplet self-attention representing, we use the mask attention to encode the foggy-scene features from backbone. Compared with conventional pixel-level segmentation methods, mask attention based segmentation (Cheng et al. 2022; Cheng, Schwing, and Kirillov 2021) has stronger scene representation. Given the image feature Fl ∈R(Wl·Hl)×CF to input into the lth layer of a Transformer decoder, its key, value, and query counterpart Kl ∈R(Wl·Hl)×C, Vl ∈R(Wl·Hl)×C and Ql ∈RN×C can be computed by linear transformations fK, fV and fQ, respectively. Then, the mask attention computes the features Xl ∈RN×C, given by Xl = softmax(Ml−1 + QlKT l )Vl + Xl−1, (3) where Ml−1 ∈{0, 1}N×HlWl is a binary mask attention matrix from the (l −1)th layer, with a threshold of 0.5. M0 is binarized and resized from X0. The mask can highlight the foreground regions and suppress the background of an image, which has been reported effective to enhance the feature representation for scene segmentation (Cheng et al. 2022; Cheng, Schwing, and Kirillov 2021). Then, the learnt mask query Xl is fed into three parallel self-attention components S, C and F to decouple the urban-style, enhance the content and decouple the fog-style. The output is denoted as Xs l , Xc l and Xf l , respectively. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 803 queries LL image features … Urban-style Self-attention Content Self-attention Fog-style Self-attention mask attention Haar Haar Haar LL LL LH, HL, HH LH, HL, HH LH, HL, HH … output queries I N I N C C C Triplet Self-attention Representing High-Low Frequency Decomposition High-Low Frequency Interaction C concatenation & Haar unpooling adding fusion Haar Haar wavelet pooling IN instance norm Figure 4: Technique framework overview. The proposed bi-directional wavelet guided self-attention (BWG) implements content enhancement, style de-correlation and fog density de-correlation. Four low- and high- frequency components from the Haar wavelet transformation are denoted as LL, LH, HL and HH, respectively. High-Low Frequency Decomposition A nature divide-and-conquer way to learn segmentation that is well generalized to foggy scenes is to let the content enhancement representation Xc l focus the low-frequency component, and to let the foggy and urban-scene representation Xs l and Xf l focus on the high-frequency component. To realize this objective, it is necessary to at first separate the low frequency component from the high frequency component for Xc l , Xs l and Xf l , respectively. The Haar wavelet transformation allows us to decompose Xs l into one lowpass component LL and three high-pass component LH, HL, HH. Take Xs l as an example, the decomposition is: Xs,LL l = Xs l ⊗LLT, (4) Xs,LH l = Xs l ⊗LHT, (5) Xs,HL l = Xs l ⊗HLT, (6) Xs,HH l = Xs l ⊗HHT, (7) where ⊗denotes the filter operation. For Xc l and Xf l , similarly we can get Xc,LL l , Xc,LH l , Xc,HL l , Xc,HH l and Xf,LL l , Xf,LH l , Xf,HL l , Xf,HH l . For simplicity and clarity, in this paper, we directly use the notations of spatial domain to state the operations in frequency domain, which avoids to involve complicated notations and equations in spatial-frequency transformation. 
High-Low Frequency Interaction
After the decomposition of high- and low-frequency information, the remaining step is to: 1) allow the content enhancement branch to focus only on the low-frequency component, so that the scene semantics are better represented; 2) allow the fog branch and the urban scene style branch to focus on the high-frequency components, so that the foggy style information and urban-scene style information are better depicted.
For the content enhancement branch, all the low-frequency components from the other two branches, namely, $X^{s,LL}_l$ and $X^{f,LL}_l$, are merged together with its original low-frequency component, given by

$$X^{c'}_l = [X^{c,LL}_l, X^{c,LL}_l, X^{s,LL}_l, X^{f,LL}_l], \quad (8)$$

where $[\cdot, \cdot]$ denotes the concatenation operation followed by Haar wavelet unpooling.
For the style de-correlation branch, after shifting its low-frequency component $X^{s,LL}_l$ to the content branch, the high-frequency components from the content branch ($X^{c,LH}_l$, $X^{c,HL}_l$, $X^{c,HH}_l$) are fused into it, given by

$$X^{s'}_l = [X^{s,LH}_l, X^{s,HL}_l, X^{s,HH}_l, \mathbb{E}[X^{c,LH}_l, X^{c,HL}_l, X^{c,HH}_l]]. \quad (9)$$

The implementation on the foggy branch is similar. After shifting its low-frequency component $X^{f,LL}_l$ to the content branch, the high-frequency components from the content branch ($X^{c,LH}_l$, $X^{c,HL}_l$, $X^{c,HH}_l$) are fused into it, given by

$$X^{f'}_l = [X^{f,LH}_l, X^{f,HL}_l, X^{f,HH}_l, \mathbb{E}[X^{c,LH}_l, X^{c,HL}_l, X^{c,HH}_l]]. \quad (10)$$

Finally, the high-frequency representations $X^{s'}_l$ and $X^{f'}_l$ are implemented with instance normalization, which has been reported to be effective to decouple the impact of styles. In this way, the segmentation representation can be more robust to the variance of fog and urban-scene landscape. Taking $X^{s'}_l \in \mathbb{R}^{N \times C}$ as an example, the instance normalization is implemented channel-wise, given by

$$X^{s''}_{l,N,c} = \frac{X^{s'}_{l,N,c} - \mu}{\sigma + \epsilon} \cdot \gamma + \beta, \quad (11)$$

$$\mu = \frac{1}{C} \sum_{c=1}^{C} X^{s'}_{l,N,c}, \quad \sigma = \sqrt{\frac{1}{C} \sum_{c=1}^{C} (X^{s'}_{l,N,c} - \mu)^2}, \quad (12)$$

where $c = 1, 2, \cdots, C$. For the $l$-th transformer layer, the normalized $X^{s''}_l$, $X^{f''}_l$ and $X^{c'}_l$ are fused together by adding for the rest of the processing.
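A simplified sketch of Eqs. 8-12 follows. The shapes and the fusion layers are assumptions: each component is treated as an [N, C] tensor, the expectation term is read as the mean of the three high-frequency components, and learnable maps `fuse_*` stand in for the concatenation-plus-Haar-unpooling step, which the sketch does not reproduce exactly:

```python
import torch

def instance_norm(x, gamma, beta, eps=1e-5):
    """Channel-wise instance normalization of Eqs. 11-12 on [N, C] queries."""
    mu = x.mean(dim=-1, keepdim=True)
    sigma = x.std(dim=-1, unbiased=False, keepdim=True)
    return (x - mu) / (sigma + eps) * gamma + beta

def bwg_interaction(c, s, f, fuse_c, fuse_s, fuse_f, gamma, beta):
    """High-low frequency interaction (Eqs. 8-10), a simplified sketch.

    c, s, f: (LL, LH, HL, HH) tuples for the content, urban-style and
    fog-style branches; fuse_*: layers mapping each concatenation back to
    the branch width (standing in for Haar wavelet unpooling).
    """
    # Eq. 8: the content branch collects all three LL components.
    c_new = fuse_c(torch.cat([c[0], c[0], s[0], f[0]], dim=-1))
    # E[X^{c,LH}, X^{c,HL}, X^{c,HH}]: mean of the content high frequencies
    # (our reading of the expectation term in Eqs. 9-10).
    c_high = (c[1] + c[2] + c[3]) / 3
    # Eqs. 9-10: style and fog branches keep only high-frequency parts.
    s_new = fuse_s(torch.cat([s[1], s[2], s[3], c_high], dim=-1))
    f_new = fuse_f(torch.cat([f[1], f[2], f[3], c_high], dim=-1))
    # Eqs. 11-12: de-correlate style and fog via instance normalization.
    s_new = instance_norm(s_new, gamma, beta)
    f_new = instance_norm(f_new, gamma, beta)
    return c_new + s_new + f_new   # fused by adding, as in the paper
```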
Framework Overview & Implementation Details
Fig. 4 gives an overview of the proposed framework. The overall framework follows the mask-level segmentation Transformer paradigm (Cheng et al. 2022; Cheng, Schwing, and Kirillov 2021). The image encoder uses a Swin-base Transformer backbone (Liu et al. 2021) pre-trained on ImageNet. The image decoder is directly inherited from Mask2Former (Cheng et al. 2022), where the image features are up-sampled from ×32 resolution to ×16, ×8 and ×4 resolution, respectively. The Transformer decoder takes the ×32, ×16 and ×8 resolution image features from the decoder as input in a progressive way, in the same manner as the original Mask2Former (Cheng et al. 2022). The Transformer decoder has nine of the proposed bi-directional wavelet guided self-attention (BWG) components. Finally, the learnt Transformer queries from the Transformer decoder are fused with the ×4 resolution image features for predictions. All the loss and hyper-parameter settings are kept the same as the original Mask2Former (Cheng et al. 2022) without any additional fine-tuning. By default, the Adam optimizer is used with an initial learning rate of 1 × 10⁻⁴. The weight decay is set to 0.05. The training terminates after 50 epochs.

Experiments and Analysis
Datasets
CityScapes (Cordts et al. 2016) is a commonly-used semantic segmentation dataset for driving scenes. It has 2965 training samples and 500 validation samples, with 19 common scene categories in driving scenes.
Clear-CityScapes (Sakaridis, Dai, and Van Gool 2018) is a subset of CityScapes. It consists of 498 training samples from the clear condition.
Foggy-CityScapes (Sakaridis, Dai, and Van Gool 2018) contains 550 synthetic foggy images in total, including 498 training images and 52 testing images. Each of the 498 training samples has three different types of synthetic fog layers, with light, medium and dense density.
Foggy Zurich (Sakaridis et al. 2018) contains 3,808 real-world foggy road scenes from the city of Zurich. For light and medium foggy conditions, it has 1,552 images and 1,498 images, respectively. In addition, it has 40 images with labels that are compatible with CityScapes.
Foggy Driving (Sakaridis, Dai, and Van Gool 2018) has 101 real-world foggy road-scene images. Among them, 33 images are finely annotated and the remaining 68 images are coarsely annotated. Following (Sakaridis, Dai, and Van Gool 2018), they are only used for testing.
Adverse Conditions Dataset with Correspondences (ACDC) (Sakaridis, Dai, and Van Gool 2021) has 4006 driving-scene segmentation samples under adverse conditions, 1000 of which are foggy images. The data split for training, validation and testing is 4:1:5.

Comparison with Domain Generalization Methods
The proposed method is compared with state-of-the-art domain generalized segmentation methods, including IBNet (Pan et al. 2018), Iternorm (Huang et al. 2019), SW (Pan et al. 2019), ISW (Choi et al. 2021), SHADE (Zhao et al. 2022), SAW (Peng et al. 2022), WildNet (Lee et al. 2022), SPC (Huang et al. 2023) and HGFormer (Ding et al. 2023). DIRL (Xu et al. 2022) and AdvStyle (Zhong et al. 2022) are not involved in the comparison because neither source code nor an official performance report is available. In addition, three directly-supervised segmentation methods (RefineNet (Lin et al. 2017), SegFormer (Xie et al. 2021), Mask2Former (Cheng et al. 2022)) and two recent foundation-model-based segmentation methods (SAM-fine-tune (Kirillov et al. 2023), SAM-SSA-fine-tune (Wang et al. 2023)) are reported. Following the evaluation protocols of the above domain generalized segmentation methods, CityScapes, under the clear imaging condition, is used as the source domain. Four foggy datasets, namely Foggy-CityScapes, Foggy Zurich, Foggy Driving and ACDC-fog, are used as unseen target domains at the inference stage only. Table 1 reports the performance. The proposed method outperforms the second-best by 6.8%, 11.8%, 7.2% and 9.6% on ACDC-fog, Foggy Zurich, Foggy Driving and Foggy-CityScapes, respectively. Besides, compared with the original Mask2Former, the proposed method leads to a performance gain of 3.4%, 1.9%, 3.1% and 3.6% on ACDC-fog, Foggy Zurich, Foggy Driving and Foggy-CityScapes.

Comparison with Domain Adaptation Methods
The proposed method is also compared with existing foggy-scene segmentation methods under the curriculum domain adaptation paradigm, namely AdSegNet (Tsai et al. 2018), ADVENT (Vu et al. 2019), DISE (Chang et al. 2019), CCM (Li et al. 2020), SAC (Araslanov and Roth 2021), ProDA (Zhang et al. 2021), DMLC (Guo et al. 2021), DACS (Truong et al. 2021), CMAda3+ (Dai et al. 2020), CuDA-Net (Ma et al. 2022), FIFO (Lee, Son, and Kwak 2022) and DAFormer (Hoyer, Dai, and Van Gool 2022).
Following the evaluation protocols of the above curriculum domain adaptation methods, Clear-CityScapes, which has 498 samples under the clear condition, is used as the source domain. For our BWG, ACDC-fog, Foggy-Zurich and Foggy-driving are used as unseen target domains at the inference stage only. These domain adaptation methods have an inherent advantage, as they use Clear Zurich (CZ) and Foggy Zurich (FZ) as additional training data. Table 2 reports the performance. On ACDC-fog, our method significantly outperforms existing cumulative domain adaptation methods by at least 12.2% mIoU. It also outperforms all these methods on Foggy-Zurich, e.g., a 1.0% mIoU gain against CuDA-Net and a 4.8% mIoU gain against DAFormer. On Foggy-driving, it outperforms all the methods except CMAda3+ (Dai et al. 2020), CuDA-Net (Ma et al. 2022) and CumFormer (Wang et al. 2023).

| Method | Backbone | →ACDC-Fog | →Foggy Zurich | →Foggy-driving | →Foggy-CityScapes |
|---|---|---|---|---|---|
| RefineNet (Lin et al. 2017) | Res-101 | 46.4 | 34.6 | 35.8 | - |
| SAM-fine-tune (Kirillov et al. 2023) | ViT-L | 41.7 | 37.2 | 38.5 | - |
| SAM-SSA (Wang et al. 2023) | ViT-L | 46.5 | 35.8 | 50.9 | - |
| SegFormer (Xie et al. 2021) | MiT-B2 | 59.2 | 43.9 | 46.6 | 75.5 |
| Mask2Former (Cheng et al. 2022) | Swin-B | 73.3 | 49.4 | 51.1 | 73.8 |
| IBNet (Pan et al. 2018) | Res-50 | 63.8 | 33.4 | 45.5 | 66.5 |
| Iternorm (Huang et al. 2019) | Res-50 | 63.3 | 35.2 | 44.6 | 66.9 |
| SW (Pan et al. 2019) | Res-50 | 62.4 | 34.1 | 45.8 | 66.4 |
| ISW (Choi et al. 2021) | Res-50 | 64.3 | 36.1 | 46.2 | 66.6 |
| SHADE (Zhao et al. 2022) | Res-50 | 61.4 | 39.5 | 42.0 | 65.8 |
| SAW (Peng et al. 2022) | Res-50 | 64.0 | 37.3 | 47.0 | 67.8 |
| WildNet (Lee et al. 2022) | Res-50 | 64.7 | 39.2 | 42.6 | 64.4 |
| SPC (Huang et al. 2023) | Res-50 | 68.0 | 39.3 | 43.5 | 64.7 |
| HGFormer (Ding et al. 2023) | Swin-L | 69.9 | - | - | - |
| ISSA (Li et al. 2023) | MiT-B2 | 67.5 | - | - | - |
| Ours | Swin-B | 76.7 (+6.8) | 51.3 (+11.8) | 54.2 (+7.2) | 77.4 (+9.6) |

Table 1: Comparison with existing domain generalized segmentation methods and directly-supervised methods, with CityScapes (2965 images) as the source domain. Evaluation metric mIoU is in %. '-': either no official code or no performance report.

| Method | Backbone | CZ | FZ | Oth. | →ACDC-Fog | →Foggy Zurich | →Foggy-driving |
|---|---|---|---|---|---|---|---|
| AdSegNet (Tsai et al. 2018) | Res-101 | ✓ | ✓ | | 31.8 | 26.1 | 37.6 |
| ADVENT (Vu et al. 2019) | Res-101 | ✓ | ✓ | | 32.9 | 24.5 | 36.1 |
| DISE (Chang et al. 2019) | Res-101 | ✓ | ✓ | | 42.4 | 40.7 | 45.2 |
| CCM (Li et al. 2020) | Res-101 | ✓ | ✓ | | - | 35.8 | 42.6 |
| SAC (Araslanov and Roth 2021) | Res-101 | ✓ | ✓ | | - | 37.0 | 43.4 |
| ProDA (Zhang et al. 2021) | Res-101 | ✓ | ✓ | | 38.4 | 37.8 | 41.2 |
| DMLC (Guo et al. 2021) | Res-101 | ✓ | ✓ | | - | 33.5 | 32.6 |
| DACS (Truong et al. 2021) | Res-101 | ✓ | ✓ | | - | 28.7 | 35.0 |
| CMAda3+ (Dai et al. 2020) | RefineNet | ✓ | ✓ | ✓ | - | 46.8 | 49.8 |
| FIFO (Lee, Son, and Kwak 2022) | RefineNet | ✓ | ✓ | ✓ | 54.1 | 48.4 | 50.7 |
| CuDA-Net (Ma et al. 2022) | Res-101 | ✓ | ✓ | | 55.6 | 48.2 | 52.7 |
| DAFormer (Hoyer, Dai, and Van Gool 2022) | MiT-B5 | ✓ | ✓ | | 48.9 | 44.4 | - |
| CumFormer (Wang et al. 2023) | MiT-B5 | ✓ | ✓ | | 60.7 | - | 56.2 |
| Ours | Swin-B | | | | 72.9 | 49.2 | 46.9 |
| Ours | Swin-B | ✓ | | | 73.7 | 50.7 | 49.3 |
| Ours | Swin-B | ✓ | ✓ | | 74.3 | N/A | 52.3 |
| Ours | Swin-B | ✓ | ✓ | ✓ | 77.4 (+16.7) | N/A | 57.6 (+1.4) |

Table 2: Comparison with foggy-scene cumulative domain adaptation methods, with Clear-CityScapes (498 images) as the source domain. Evaluation metric mIoU is in %. '-': either no official code or no performance report. N/A: the result is not meaningful under our domain generalization setting.

| C | S | F | →ACDC-Fog | →Foggy-driving |
|---|---|---|---|---|
| ✓ | | | 73.4 | 51.1 |
| ✓ | ✓ | | 75.2 | 52.9 |
| ✓ | ✓ | ✓ | 76.7 | 54.2 |

Table 3: Ablation studies on the content, style and foggy encoders C, S, F in BWG, trained on CityScapes. Evaluation metric mIoU is in %.
Ablation Studies
On Each Branch. The proposed BWG has three encoders to handle content enhancement, style de-correlation and fog de-correlation, which we denote as C, S and F, respectively. Table 3 reports the impact of each encoder on the generalization performance. When both the C and S encoders are present, the wavelet transformations are kept in this experiment setting so that only a single variable changes. The style de-correlation encoder and the fog de-correlation encoder contribute 1.8% and 1.5% mIoU gain on ACDC-fog, and 1.8% and 1.3% mIoU gain on Foggy-driving, respectively.

On Low-High Frequency Interaction. The proposed BWG has four operations to interact the low- and high-frequency information between the content, style and fog encoders, which we denote as S2C, F2C, C2S and C2F. Table 4 reports the impact of each operation. All these operations positively contribute to a generalized representation for foggy scenes, with 1.0%, 0.9%, 0.8% and 0.6% mIoU gain on ACDC-fog, and 1.4%, 1.1%, 0.3% and 0.3% mIoU gain on Foggy-driving. Generally, concentrating the low-frequency information in the content branch (S2C, F2C) has a more significant impact than the operations on high-frequency information (C2S and C2F).

| S2C | F2C | C2S | C2F | ACDC-Fog | Foggy-driving |
|---|---|---|---|---|---|
| | | | | 73.4 | 51.1 |
| ✓ | | | | 74.4 | 52.5 |
| ✓ | ✓ | | | 75.3 | 53.6 |
| ✓ | ✓ | ✓ | | 76.1 | 53.9 |
| ✓ | ✓ | ✓ | ✓ | 76.7 | 54.2 |

Table 4: Ablation studies on each frequency interaction. S2C and F2C: shift LL from S and F to C. C2S and C2F: shift HL, LH and HH from C to S and F. Metric mIoU.

On Foggy Density. We further test the sensitivity of the proposed method to the fog density. The Foggy-CityScapes dataset, as the unseen target domain, provides foggy maps under light, medium and dense densities, which we denote as FC-light, FC-medium and FC-dense. CityScapes is used as the source domain. Table 5 reports the results. At all fog densities, the proposed method outperforms the existing methods significantly. It also outperforms the original Mask2Former by 3.4%, 3.6% and 3.8% mIoU at the light, medium and dense fog densities, respectively.

| Method | Category | Backbone | →FC-light | →FC-medium | →FC-dense | mean |
|---|---|---|---|---|---|---|
| DeepLabv3+ (Chen et al. 2018) | DS | ResNet-101 | 67.1 | 65.2 | 61.6 | 63.4 |
| SegFormer (Xie et al. 2021) | DS | MiT-B2 | 70.5 | 66.1 | 62.3 | 65.3 |
| Mask2Former (Cheng et al. 2022) | DS | Swin-B | 76.2 | 74.5 | 70.8 | 73.7 |
| IBNet (Pan et al. 2018) | DG | ResNet-50 | 72.4 | 67.9 | 59.5 | 66.6 |
| Iternorm (Huang et al. 2019) | DG | ResNet-50 | 72.0 | 68.3 | 60.7 | 66.9 |
| SW (Pan et al. 2019) | DG | ResNet-50 | 73.3 | 69.4 | 61.7 | 66.5 |
| ISW (Choi et al. 2021) | DG | ResNet-50 | 72.1 | 67.9 | 60.1 | 66.7 |
| Ours | | Swin-B | 79.6 (+6.3) | 78.1 (+8.7) | 74.6 (+12.9) | 77.4 (+10.5) |

Table 5: Sensitivity analysis of the proposed method and the backbone to fog density, trained on CityScapes (C) and inferred on Foggy-CityScapes (FC). Foggy-CityScapes-light, -medium and -dense are used as unseen target domains, respectively. Evaluation metric mIoU is in %. The mean mIoU (denoted as mean) on Foggy-CityScapes is not a simple average of the three densities.

Figure 5: Visualized segmentation predictions from the proposed method (denoted as Ours) and the state-of-the-art methods IBNet (Pan et al. 2018), Internorm (Huang et al. 2019), ISW (Choi et al. 2021) and SAW (Peng et al. 2022).

Visualization
Fig. 5 provides some visualized segmentation predictions on the ACDC-fog, Foggy-Zurich, Foggy-Driving and Foggy-CityScapes datasets, when using CityScapes as the source domain.
The proposed method shows more reasonable and more reliable inference compared with existing methods.

Conclusion
Robust foggy-scene segmentation is crucial for autonomous driving, but existing curriculum domain adaptation methods can only adapt to the foggy domain seen in the training stage. In this paper, we tackle this challenge under the domain generalization setting. We aim to learn a segmentation Transformer that can be well generalized to arbitrary unseen foggy scenes. Technically, we propose a bi-directional wavelet guidance (BWG) mechanism, which simultaneously handles the content enhancement, style de-correlation and fog de-correlation for foggy scenes in a divide-and-conquer manner. Extensive experiments show that the proposed method significantly outperforms existing directly-supervised, domain adaptation and domain generalization segmentation methods under a variety of settings.

Limitation Discussion. The fog de-correlation encoder is data-driven rather than physics-driven. However, its effectiveness has been demonstrated by its superior performance over the scenario of using only the other two encoders for content enhancement and style de-correlation.

References
Araslanov, N.; and Roth, S. 2021. Self-supervised augmentation consistency for adapting semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15384–15394.
Bi, Q.; You, S.; and Gevers, T. 2023a. Interactive Learning of Intrinsic and Extrinsic Properties for All-Day Semantic Segmentation. IEEE Transactions on Image Processing, 32: 3821–3835.
Bi, Q.; You, S.; and Gevers, T. 2023b. Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation. arXiv preprint arXiv:2307.00371.
Brüggemann, D.; Sakaridis, C.; Truong, P.; and Van Gool, L. 2023. Refign: Align and refine for adaptation of semantic segmentation to adverse conditions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3174–3184.
Chang, W.-L.; Wang, H.-P.; Peng, W.-H.; and Chiu, W.-C. 2019. All about structure: Adapting structural information across domains for boosting semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1900–1909.
Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In European Conference on Computer Vision, 801–818.
Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1290–1299.
Cheng, B.; Schwing, A.; and Kirillov, A. 2021. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34: 17864–17875.
Choi, S.; Jung, S.; Yun, H.; Kim, J.; Kim, S.; and Choo, J. 2021. RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11580–11590.
Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3213–3223.
Dai, D.; Sakaridis, C.; Hecker, S.; and Van Gool, L. 2020. Curriculum model adaptation with synthetic and real data for semantic foggy scene understanding. International Journal of Computer Vision, 128: 1182–1204.
Ding, J.; Xue, N.; Xia, G.-S.; Schiele, B.; and Dai, D. 2023. HGFormer: Hierarchical Grouping Transformer for Domain Generalized Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15413–15423.
Guo, X.; Yang, C.; Li, B.; and Yuan, Y. 2021. MetaCorrection: Domain-aware meta loss correction for unsupervised domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3927–3936.
Hoyer, L.; Dai, D.; and Van Gool, L. 2022. DAFormer: Improving network architectures and training strategies for domain-adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9924–9935.
Huang, L.; Zhou, Y.; Zhu, F.; Liu, L.; and Shao, L. 2019. Iterative Normalization: Beyond Standardization towards Efficient Whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4874–4883.
Huang, W.; Chen, C.; Li, Y.; Li, J.; Li, C.; Song, F.; Yan, Y.; and Xiong, Z. 2023. Style Projected Clustering for Domain Generalized Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3061–3071.
Ji, W.; Li, J.; Bi, Q.; Liu, J.; Cheng, L.; et al. 2022. Promoting Saliency From Depth: Deep Unsupervised RGB-D Saliency Detection. In International Conference on Learning Representations.
Ji, W.; Yu, S.; Wu, J.; Ma, K.; Bian, C.; Bi, Q.; Li, J.; Liu, H.; Cheng, L.; and Zheng, Y. 2021. Learning calibrated medical image segmentation via multi-rater agreement modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12341–12351.
Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643.
Lee, S.; Seong, H.; Lee, S.; and Kim, E. 2022. WildNet: Learning Domain Generalized Semantic Segmentation from the Wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9936–9946.
Lee, S.; Son, T.; and Kwak, S. 2022. FIFO: Learning fog-invariant features for foggy scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18911–18921.
Li, G.; Kang, G.; Liu, W.; Wei, Y.; and Yang, Y. 2020. Content-consistent matching for domain adaptive semantic segmentation. In European Conference on Computer Vision, 440–456.
Li, J.; Ji, W.; Bi, Q.; Yan, C.; Zhang, M.; Piao, Y.; Lu, H.; et al. 2021. Joint semantic mining for weakly supervised RGB-D salient object detection. Advances in Neural Information Processing Systems, 34: 11945–11959.
Li, R.; Cheong, L.-F.; and Tan, R. T. 2019. Heavy rain image restoration: Integrating physics model and conditional adversarial learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1633–1642.
Li, R.; Tan, R. T.; and Cheong, L.-F. 2020. All in one bad weather removal using architectural search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3175–3185.
Li, Y.; Fang, C.; Yang, J.; Wang, Z.; Lu, X.; and Yang, M.-H. 2017.
Universal style transfer via feature transforms. Advances in Neural Information Processing Systems, 30.
Li, Y.; Zhang, D.; Keuper, M.; and Khoreva, A. 2023. Intra-Source Style Augmentation for Improved Domain Generalization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 509–519.
Lin, G.; Milan, A.; Shen, C.; and Reid, I. 2017. RefineNet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1925–1934.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022.
Ma, X.; Wang, Z.; Zhan, Y.; Zheng, Y.; Wang, Z.; Dai, D.; and Lin, C.-W. 2022. Both style and fog matter: Cumulative domain adaptation for semantic foggy scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18922–18931.
Pan, J.; Bi, Q.; Yang, Y.; Zhu, P.; and Bian, C. 2022. Label-efficient hybrid-supervised learning for medical image segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2026–2034.
Pan, X.; Luo, P.; Shi, J.; and Tang, X. 2018. Two at Once: Enhancing Learning and Generalization Capacities via IBN-Net. In European Conference on Computer Vision, 464–479.
Pan, X.; Zhan, X.; Shi, J.; Tang, X.; and Luo, P. 2019. Switchable Whitening for Deep Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1863–1871.
Peng, D.; Lei, Y.; Hayat, M.; Guo, Y.; and Li, W. 2022. Semantic-aware domain generalized segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2594–2605.
Porwik, P.; and Lisowska, A. 2004. The Haar-wavelet transform in digital image processing: its status and achievements. Machine Graphics and Vision, 13(1/2): 79–98.
Sakaridis, C.; Dai, D.; Hecker, S.; and Van Gool, L. 2018. Model adaptation with synthetic and real data for semantic dense foggy scene understanding. In European Conference on Computer Vision, 687–704.
Sakaridis, C.; Dai, D.; and Van Gool, L. 2018. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126: 973–992.
Sakaridis, C.; Dai, D.; and Van Gool, L. 2021. ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10765–10775.
Tjio, G.; Liu, P.; Zhou, J. T.; and Goh, R. S. M. 2022. Adversarial semantic hallucination for domain generalized semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 318–327.
Truong, T.-D.; Duong, C. N.; Le, N.; Phung, S. L.; Rainwater, C.; and Luu, K. 2021. BiMaL: Bijective maximum likelihood approach to domain adaptation in semantic scene segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8548–8557.
Tsai, Y.-H.; Hung, W.-C.; Schulter, S.; Sohn, K.; Yang, M.-H.; and Chandraker, M. 2018. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7472–7481.
Valanarasu, J. M. J.; Yasarla, R.; and Patel, V. M. 2022. TransWeather: Transformer-based restoration of images degraded by adverse weather conditions.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2353–2363.
Vu, T.-H.; Jain, H.; Bucher, M.; Cord, M.; and Pérez, P. 2019. ADVENT: Adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2517–2526.
Wang, Z.; Zhang, Y.; Ma, X.; Yu, Y.; Zhang, Z.; Jiang, Z.; and Cheng, B. 2023. Semantic Segmentation of Foggy Scenes Based on Progressive Domain Gap Decoupling. TechRxiv.
Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34: 12077–12090.
Xu, Q.; Yao, L.; Jiang, Z.; Jiang, G.; Chu, W.; Han, W.; Zhang, W.; Wang, C.; and Tai, Y. 2022. DIRL: Domain-invariant representation learning for generalizable semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2884–2892.
Yang, Z.; Huang, J.; Chang, J.; Zhou, M.; Yu, H.; Zhang, J.; and Zhao, F. 2023. Visual Recognition-Driven Image Restoration for Multiple Degradation with Intrinsic Semantics Recovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14059–14070.
Yoo, J.; Uh, Y.; Chun, S.; Kang, B.; and Ha, J.-W. 2019. Photorealistic style transfer via wavelet transforms. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9036–9045.
Zhang, P.; Zhang, B.; Zhang, T.; Chen, D.; Wang, Y.; and Wen, F. 2021. Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12414–12424.
Zhao, Y.; Zhong, Z.; Zhao, N.; Sebe, N.; and Lee, G. H. 2022. Style-hallucinated dual consistency learning for domain generalized semantic segmentation. In European Conference on Computer Vision, 535–552.
Zhong, Z.; Zhao, Y.; Lee, G. H.; and Sebe, N. 2022. Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation. In Advances in Neural Information Processing Systems.
On Partial Optimal Transport: Revising the Infeasibility of Sinkhorn and Efficient Gradient Methods

Anh Duc Nguyen1,2,3, Tuan Dung Nguyen2, Quang Minh Nguyen3, Hoang H. Nguyen4, Lam M. Nguyen5, Kim-Chuan Toh1,6
1Department of Mathematics, National University of Singapore, Singapore
2Department of Computer and Information Science, University of Pennsylvania, USA
3Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA
4School of Industrial and Systems Engineering, Georgia Institute of Technology, USA
5IBM Research, Thomas J. Watson Research Center, USA
6Institute of Operations Research and Analytics, National University of Singapore, Singapore
anh [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
This paper studies the Partial Optimal Transport (POT) problem between two unbalanced measures with at most n supports and its applications in various AI tasks such as color transfer or domain adaptation. There is hence a need for fast approximations of POT with increasingly large problem sizes in arising applications. We first theoretically and experimentally investigate the infeasibility of the state-of-the-art Sinkhorn algorithm for POT, which consequently degrades its qualitative performance in real-world applications like point-cloud registration. To this end, we propose a novel rounding algorithm for POT, and then provide a feasible Sinkhorn procedure with a revised computational complexity of $\tilde{O}(n^2/\varepsilon^4)$. Our rounding algorithm also permits the development of two first-order methods to approximate the POT problem. The first algorithm, Adaptive Primal-Dual Accelerated Gradient Descent (APDAGD), finds an ε-approximate solution to the POT problem in $\tilde{O}(n^{2.5}/\varepsilon)$. The second method, Dual Extrapolation, achieves the computational complexity of $\tilde{O}(n^2/\varepsilon)$, thereby being the best in the literature. We further demonstrate the flexibility of POT compared to standard OT as well as the practicality of our algorithms on real applications where two marginal distributions are unbalanced.

Introduction
Optimal Transport (OT) (Villani 2008; Kantorovich 1942), which seeks a minimum-cost coupling between two balanced measures, is a well-studied topic in mathematics and operations research. With the introduction of entropic regularization (Cuturi 2013), the scalability and speed of OT computation have been significantly improved, facilitating its widespread applications in machine learning such as domain adaptation (Courty et al. 2017) and dictionary learning (Rolet, Cuturi, and Peyré 2016). However, OT has a stringent requirement that the input measures must have equal total masses (Chizat et al. 2015), hindering its practicality in various other machine learning applications which require an optimal matching between two measures with unbalanced masses, such as averaging of neuroimaging data (Gramfort, Peyré, and Cuturi 2015) and image classification (Pele and Werman 2008; Rubner, Tomasi, and Guibas 2000). In response to such limitations, Partial Optimal Transport (POT), which explicitly constrains the mass to be transported between two unbalanced measures, was proposed. It has been studied from the perspective of partial differential equations by theorists (Figalli 2010; Caffarelli and McCann 2010).
Practically, the relaxation of the marginal constraints, which are strictly imposed by standard OT, and the control over the total mass transported grant POT immense flexibility compared to OT (Chapel, Alaya, and Gasso 2020) and more robustness to outliers (Le et al. 2021). POT has been deployed in various recent AI applications such as color transfer (Bonneel and Coeurjolly 2019), graph neural networks (Sarlin et al. 2019), graph matching (Liu et al. 2020), partial covering (Kawano, Koide, and Otaki 2021), point set registration (Wang et al. 2022), and robust estimation (Nietert, Cummings, and Goldfeld 2023).

Despite its potential applicability, POT still suffers from a computational bottleneck, whereby the more intricate structural constraints imposed on admissible couplings have hindered the direct adaptation of any efficient OT solver in the literature. Currently, the literature (Chapel, Alaya, and Gasso 2020; Le et al. 2021) relies on reformulating POT into an extended OT problem under additional assumptions on the input masses, which can then be solved via existing OT methods, and finally retrieves an admissible POT coupling from the solution to the extended OT problem. This approach has two fundamental drawbacks. First, in the reformulated OT problem, the maximum entry of the extended cost matrix is increased (Chapel, Alaya, and Gasso 2020, Proposition 1), which will always worsen the computational complexity since most efficient algorithms for standard OT (Dvurechensky, Gasnikov, and Kroshnin 2018; Lin, Ho, and Jordan 2019; Guminov et al. 2021) depend on this maximum entry in their complexities. Second, we discover (more details in the Revisiting Sinkhorn section) that although Sinkhorn for POT proposed by (Le et al. 2021) achieves the best known complexity of $\tilde{O}(n^2/\varepsilon^2)$, it in fact always outputs a strictly infeasible solution to the POT problem. In brief, by discarding the last row and column of the reformulated OT solution obtained by Sinkhorn, the POT solution, i.e., the transportation matrix X, will violate the equality constraint $1^\top X 1 = s$, which controls the total transported mass. Violating this equality constraint can degrade the results of practical applications in robust regimes such as point cloud registration (Qin et al. 2022) and mini-batch OT (Nguyen et al. 2022) (refer to Remark 4 and our Point Cloud Registration experiment). We theoretically justify the ungroundedness of Sinkhorn for POT in the Revisiting Sinkhorn section and empirically verify this claim in several applications in the Numerical Experiments section. Here, in Figure 1, we specifically investigate the Sinkhorn infeasibility in a color transfer example (detailed experimental setup in the later Numerical Experiments section). We show that the optimality gap produced by Sinkhorn is unable to drop below ε, the tolerance of the problem (red line). In other words, Sinkhorn fails to produce an adequate ε-approximate POT solution. To the best of our knowledge, the invalidity of Sinkhorn means there is currently no efficient method for solving POT in the literature.

Figure 1: Primal optimality gap against optimization rounds achieved by Sinkhorn (Le et al. 2021) and APDAGD for POT. The marginals are from a color transfer application in our Numerical Experiments section, and the red horizontal line depicts the pre-defined tolerance (ε) for both algorithms.
We attribute this to the fact that while the equivalence between POT and extended OT holds at optimality, all efficient OT solvers instead only output an approximation of the optimum value before projecting it back to the feasible set. However, the well-known rounding algorithm by (Altschuler, Weed, and Rigollet 2017), which is specifically designed for OT, does not guarantee to respect the more intricate structural constraints of POT, resulting in the invalidity of Sinkhorn. Motivated by this challenge and the success of the optimization literature for OT, we raise the following central question of this paper:

Can we design a rounding algorithm for POT and then utilize it to develop efficient algorithms for POT that even match the best-known complexities of those for OT?

We affirmatively answer this question and formally summarize our contributions as follows.
• We theoretically and experimentally show the infeasibility of the state-of-the-art Sinkhorn algorithm for POT due to its incompatible rounding algorithm. We propose a novel POT rounding procedure ROUND-POT (Rounding Algorithm section), which projects an approximate solution onto the feasible POT set in $O(n^2)$ time.
• From our theoretical bounds on the Sinkhorn constraint violations and the newly introduced ROUND-POT, we provide a revised procedure for Sinkhorn which returns a feasible POT solution. We also establish the revised complexity of Sinkhorn for POT (Table 1).
• Predicated on our novel dual formulation for the entropic regularized POT objective, our proposed Adaptive Primal-Dual Accelerated Gradient Descent (APDAGD) algorithm for POT finds an ε-approximate solution in $\tilde{O}(n^{2.5}/\varepsilon)$, which is better in ε than the revised Sinkhorn. Various experiments on synthetic and real datasets and with applications such as point cloud registration, color transfer, and domain adaptation illustrate not only our algorithms' favorable performance against the pre-revised Sinkhorn but also the versatility of POT compared to OT.
• Motivated by our novel rounding algorithm, we further reformulate the POT problem with ℓ1 penalization as a minimax problem and propose a Dual Extrapolation (DE) framework for POT. We prove that the DE algorithm can theoretically achieve $\tilde{O}(n^2/\varepsilon)$ computational complexity, thereby being the best in the POT literature to the best of our knowledge (Table 1).

Preliminaries
Notation
The set of non-negative real numbers is $\mathbb{R}_+$. We use bold capital font for matrices (e.g., A) and bold lowercase font for vectors (e.g., x). For an $m \times n$ matrix X, $\mathrm{vec}(X)$ denotes the $(mn)$-dimensional vector obtained by concatenating the rows of X and transposing the result. Entry-wise multiplication and division for matrices and vectors are respectively denoted by $\odot$ and $\oslash$. For $1 \le p \le \infty$, let $\|\cdot\|_p$ be the ℓp-norm of a matrix or vector. For matrices, $\|\cdot\|_{p\to q}$ is the operator norm: $\|A\|_{p\to q} = \sup_{\|x\|_p = 1} \|Ax\|_q$. Three specific cases are considered in this paper: for $q \in \{1, 2, \infty\}$, $\|A\|_{1\to q}$ is the largest ℓq norm of any column of A. We use $\|A\|_{\max}$ and $\|A\|_{\min}$ to denote the maximum and minimum entries in absolute value of a matrix A, respectively. The n-vectors of zeros and of ones are respectively denoted by $0_n$ and $1_n$. The $(n-1)$-dimensional probability simplex is $\Delta_n = \{v \in \mathbb{R}^n_+ : v^\top 1_n = 1\}$.

Partial Optimal Transport
Consider two discrete distributions $r, c \in \mathbb{R}^n_+$ with possibly different masses. POT seeks a transport plan $X \in \mathbb{R}^{n\times n}_+$ which maps r to c at the lowest cost.
Since the masses at the two marginals may differ, only a total mass s such that $0 \le s \le \min\{\|r\|_1, \|c\|_1\}$ is allowed to be transported (Chapel, Alaya, and Gasso 2020; Le et al. 2021). Formally, the POT problem is written as
$$\mathrm{POT}(r, c, s) = \min\ \langle C, X\rangle \quad \text{s.t. } X \in \mathcal{U}(r, c, s), \tag{1}$$
where $\mathcal{U}(r, c, s) = \{X \in \mathbb{R}^{n\times n}_+ : X 1_n \le r,\ X^\top 1_n \le c,\ 1_n^\top X 1_n = s\}$ is the feasible set for the transport plan X, and $C \in \mathbb{R}^{n\times n}_+$ is a cost matrix.

| Algorithm | Regularizer | Cost per iteration | Iteration complexity |
|---|---|---|---|
| Iterative Bregman Projections (Benamou et al. 2015) | Entropic | Unspecified | Unspecified |
| (Infeasible) Sinkhorn (Le et al. 2021) | Entropic | $O(n^2)$ | $\tilde{O}(1/\varepsilon^2)$ |
| (Feasible) Sinkhorn (This paper) | Entropic | $O(n^2)$ | $\tilde{O}(1/\varepsilon^4)$ |
| APDAGD (This paper) | Entropic | $O(n^2)$ | $\tilde{O}(\sqrt{n}/\varepsilon)$ |
| Dual Extrapolation (This paper) | Area-convex | $O(n^2)$ | $\tilde{O}(1/\varepsilon)$ |

Table 1: Type of regularizers and orders of complexity for four algorithms for POT approximation.

The goal of this paper is to derive efficient algorithms to find an ε-approximate solution to POT(r, c, s), pursuant to the following definition.

Definition 1 (ε-approximation). For $\varepsilon \ge 0$, the matrix $X \in \mathbb{R}^{n\times n}_+$ is an ε-approximate solution to POT(r, c, s) if $X \in \mathcal{U}(r, c, s)$ and $\langle C, X\rangle \le \min\{\langle C, X'\rangle : X' \in \mathcal{U}(r, c, s)\} + \varepsilon$.

To aid the algorithmic design in the following sections, we introduce two new slack variables $p, q \in \mathbb{R}^n_+$ and equivalently express problem (1) as
$$\min_{X\ge 0,\ p\ge 0,\ q\ge 0}\ \langle C, X\rangle \tag{2}$$
$$\text{s.t. } X 1_n + p = r, \quad X^\top 1_n + q = c, \quad 1_n^\top X 1_n = s. \tag{3}$$
We also study an equivalent formulation of this problem:
$$\min_{x\ge 0}\ \langle d, x\rangle \quad \text{s.t. } Ax = b, \tag{4}$$
where we perform vectorization with $d^\top = (\mathrm{vec}(C)^\top, 0_{2n}^\top)$ and $x^\top = (\mathrm{vec}(X)^\top, p^\top, q^\top)$. The constraints in (2)-(3) are encoded in $Ax = b$, where $A \in \mathbb{R}^{(2n+1)\times(n^2+2n)}$ and $b \in H := \mathbb{R}^{2n+1}$ such that $(Ax)^\top = ((X1 + p)^\top, (X^\top 1 + q)^\top, 1^\top X 1)$ and $b^\top = (r^\top, c^\top, s)$. In other words, the linear operator A has the form
$$A = \begin{pmatrix} A' & I_{2n} \\ 1_{n^2}^\top & 0_{2n}^\top \end{pmatrix},$$
where $A'$ is the edge-incidence matrix of the underlying bipartite graph in OT problems (Dvurechensky, Gasnikov, and Kroshnin 2018; Jambulapati, Sidford, and Tian 2019).
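For concreteness, the structure of A and b can be materialized as in the following NumPy sketch of the vectorized formulation (4). The helper names are our own, the row-major vectorization described in the Preliminaries is assumed, and at scale one would apply A as an operator rather than store it densely.

```python
import numpy as np

def pot_operator(n):
    # A in (4): shape (2n+1) x (n^2 + 2n), acting on x = (vec(X), p, q)
    # with vec(X) the row-major vectorization of X.
    row_sums = np.kron(np.eye(n), np.ones(n))   # picks out (X 1_n)_i from vec(X)
    col_sums = np.kron(np.ones(n), np.eye(n))   # picks out (X^T 1_n)_j from vec(X)
    A_prime = np.vstack([row_sums, col_sums])   # 2n x n^2 edge-incidence matrix
    return np.block([[A_prime, np.eye(2 * n)],
                     [np.ones((1, n * n)), np.zeros((1, 2 * n))]])

def pot_rhs(r, c, s):
    # b = (r, c, s) in (4).
    return np.concatenate([r, c, [s]])
```

With these helpers, the feasibility of a candidate (X, p, q) can be checked through the quantity $\|Ax - b\|_1 \le \delta$, which ROUND-POT in the next section controls.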
Revisiting Sinkhorn for POT
We can reformulate Problem (1) by adding dummy points and extending the cost matrix as
$$\tilde{C} = \begin{pmatrix} C & 0_n \\ 0_n^\top & A \end{pmatrix} \in \mathbb{R}^{(n+1)\times(n+1)}_+,$$
where $A > \max(C_{i,j})$ (Chapel, Alaya, and Gasso 2020). Then the two marginals are augmented to $(n+1)$-dimensional vectors as $\tilde{r}^\top = (r^\top, \|c\|_1 - s)$ and $\tilde{c}^\top = (c^\top, \|r\|_1 - s)$. (Chapel, Alaya, and Gasso 2020, Proposition 1) show that one can obtain the solution of POT by solving this extended OT problem with balanced marginals $\tilde{r}, \tilde{c}$ and cost matrix $\tilde{C}$. In particular, if the OT problem admits an optimal solution of the form
$$\tilde{X} = \begin{pmatrix} \bar{X} & \tilde{p} \\ \tilde{q}^\top & \tilde{X}_{n+1,n+1} \end{pmatrix} \in \mathbb{R}^{(n+1)\times(n+1)}_+,$$
then $\bar{X} \in \mathbb{R}^{n\times n}_+$ is the solution to the original POT. (Le et al. 2021) seek an approximate solution to the extended OT problem using the Sinkhorn algorithm (see Algorithm 5 in the Appendix). Then the rounding procedure by (Altschuler, Weed, and Rigollet 2017) is applied to the solution to give a primal feasible matrix. While the two POT inequality constraints are satisfied, we discover in the following theorem that the equality constraint $1^\top \bar{X} 1 = s$ is violated. The proof is in the Revisiting Sinkhorn for POT section of the Appendix.

Theorem 2. For a POT solution $\bar{X}$ from (Le et al. 2021), the constraint violation $V := 1^\top \bar{X} 1 - s$ can be bounded as
$$\tilde{O}\left(\frac{\|C\|_{\max}^2}{A}\right) \ge V \ge \exp\left(\frac{-12 A \log n}{\varepsilon} - O(\log n)\right).$$

Feasible Sinkhorn Procedure: With these bounds, we deduce that in order for Sinkhorn to be feasible, one needs to both utilize our ROUND-POT and choose a sufficiently large A (Theorem 3), as opposed to the common practice of picking A slightly larger than 1 (Le et al. 2021). We derive the revised complexity of Sinkhorn for POT as follows.

Theorem 3 (Revised Complexity for Feasible Sinkhorn with ROUND-POT). We first derive the sufficient size of A to be $O(\|C\|_{\max}/\varepsilon)$. With this large A and ROUND-POT, Sinkhorn for POT has a computational complexity of $\tilde{O}(n^2 \|C\|_{\max}^2/\varepsilon^4)$, as opposed to $\tilde{O}(n^2 \|C\|_{\max}^2/\varepsilon^2)$ (Le et al. 2021).

The detailed proof for this theorem is included in the Revisiting Sinkhorn for POT section of the Appendix. We also empirically verify this worsened complexity in the Feasible Sinkhorn section of the Appendix.

Remark 4. Respecting the equality constraint is crucial for various applications that demand strict adherence to feasible solutions, like point cloud registration (Qin et al. 2022) (for avoiding incorrect many-to-many correspondences) and mini-batch OT (Nguyen et al. 2022) (for minimizing misspecification). Hence, it is imperative for POT to transport the exact fraction of mass to achieve an optimal mapping, which is vital for the effective performance of ML models.

Rounding Algorithm
All efficient algorithms for standard OT (Dvurechensky, Gasnikov, and Kroshnin 2018; Lin, Ho, and Jordan 2019; Guminov et al. 2021) only output an infeasible approximation of the optimum value, and leverage the well-known rounding algorithm (Altschuler, Weed, and Rigollet 2017, Algorithm 2) to project it back to the set of admissible couplings. Nevertheless, its ad-hoc design, tailored to OT's marginal constraints, makes generalization to the case of POT with more intricate structural constraints non-trivial. In fact, we attribute the rather limited literature on efficient POT solvers to this lack of a rounding algorithm for POT. Specifically, previous works rely on imposing additional assumptions on the input masses to permit reformulation of POT into standard OT with an additional computational burden (Chapel, Alaya, and Gasso 2020; Le et al. 2021). Deviating from the vast literature, we address this fundamental challenge by proposing a novel rounding procedure for POT, termed ROUND-POT (Algorithm 1), to efficiently round any approximate solution to a feasible solution of (2).

Algorithm 1: ROUND-POT
Input: $x = (\mathrm{vec}(X)^\top, p^\top, q^\top)^\top$; marginals r, c; mass s.
1: $\bar{p} = \mathrm{EP}(r, s, p)$
2: $\bar{q} = \mathrm{EP}(c, s, q)$
3: $g = \min\{1, (r - \bar{p}) \oslash X 1\}$
4: $h = \min\{1, (c - \bar{q}) \oslash X^\top 1\}$
5: $X' = \mathrm{diag}(g)\, X\, \mathrm{diag}(h)$
6: $e_1 = (r - \bar{p}) - X' 1$, $e_2 = (c - \bar{q}) - X'^\top 1$
7: $\bar{X} = X' + e_1 e_2^\top / \|e_1\|_1$
Output: $\bar{x} = (\bar{X}, \bar{p}, \bar{q})$

Given an approximate solution $x = (X, p, q) \ge 0$ violating the POT constraints of (2) by a predefined error, $\|Ax - b\|_1 \le \delta$ for some δ, ROUND-POT returns $\bar{x} = (\bar{X}, \bar{p}, \bar{q}) \ge 0$ strictly in the feasible set, i.e., $A\bar{x} = b$, and close to x in ℓ1 distance. The Enforcing Procedure (EP) (Algorithm 2) is a novel subroutine to ensure $0 \le \bar{p} \le r$ and $\|\bar{p}\|_1 = \|r\|_1 - s$ (Lemma 5). Equivalently, a similar procedure is applied to (c, s, q) in step 2 of Algorithm 1 with similar guarantees for $\bar{q}$. Step 1 transforms p (or q) to p′ (or q′) so that $0 \le p' \le r$ (or $0 \le q' \le c$). The transformation in steps 2 and 3 ensures that $\|p''\|_1 \le \|r\|_1 - s$ (or $\|q''\|_1 \le \|c\|_1 - s$). The rest of the EP steps ensure the other guarantee $\|\bar{p}\|_1 = \|r\|_1 - s$. The proof is in the Rounding Algorithm section in the Appendix.
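For reference, the following is a minimal NumPy sketch of ROUND-POT together with the EP subroutine described above (EP is stated formally as Algorithm 2 below). The helper names, the zero-division guards in steps 3-4, and the guard on the rank-one correction are our own additions.

```python
import numpy as np

def ep(r, s, p):
    # Enforcing Procedure (Algorithm 2): returns p_bar with
    # 0 <= p_bar <= r and ||p_bar||_1 = ||r||_1 - s. Assumes 0 < s <= ||r||_1.
    p1 = np.minimum(p, r)                                     # step 1
    if p1.sum() == 0 or np.isclose(r.sum(), s):
        p2 = p1.copy()                                        # alpha = 1
    else:
        p2 = min(1.0, (r.sum() - s) / p1.sum()) * p1          # scale down
    if p1.sum() > r.sum() - s:
        return p2
    i = -1
    while p2.sum() <= r.sum() - s:                            # top up entries
        i += 1
        p2[i] = r[i]
    p2[i] -= p2.sum() - r.sum() + s                           # trim overshoot
    return p2

def round_pot(X, p, q, r, c, s):
    # ROUND-POT (Algorithm 1): project (X, p, q) onto U(r, c, s).
    p_bar, q_bar = ep(r, s, p), ep(c, s, q)
    row, col = X.sum(axis=1), X.sum(axis=0)
    g = np.minimum(1.0, np.divide(r - p_bar, row,
                                  out=np.ones_like(row), where=row > 0))
    h = np.minimum(1.0, np.divide(c - q_bar, col,
                                  out=np.ones_like(col), where=col > 0))
    Xp = g[:, None] * X * h[None, :]                          # diag(g) X diag(h)
    e1 = (r - p_bar) - Xp.sum(axis=1)
    e2 = (c - q_bar) - Xp.sum(axis=0)
    if np.abs(e1).sum() > 0:                                  # rank-one correction
        Xp = Xp + np.outer(e1, e2) / np.abs(e1).sum()
    return Xp, p_bar, q_bar
```

The rank-one correction in the last step mirrors the OT rounding of (Altschuler, Weed, and Rigollet 2017), but it is applied against the EP-adjusted residual marginals $r - \bar{p}$ and $c - \bar{q}$, which is what preserves the total-mass constraint $1^\top \bar{X} 1 = s$.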
Lemma 5 (Guarantees for EP). We obtain in $O(n)$ time $0 \le \bar{p} \le r$ and $\|\bar{p}\|_1 = \|r\|_1 - s$.

Algorithm 2: Enforcing Procedure EP
Input: marginal r (or c); mass s; slack variable p (or q).
1: $p' = \min\{p, r\}$
2: if $\|p'\|_1 = 0$ or $\|r\|_1 = s$ then
3:   $\alpha = 1$
4:   $p'' = p'$
5: else
6:   $\alpha = \min\{1, (\|r\|_1 - s)/\|p'\|_1\}$
7:   $p'' = \alpha p'$
8: end if
9: if $\|p'\|_1 > (\|r\|_1 - s)$ then
10:   $\bar{p} = p''$
11: else
12:   $i = 0$
13:   while $\|p''\|_1 \le \|r\|_1 - s$ do
14:     $i = i + 1$
15:     $p''_i = r_i$
16:   end while
17:   $p''_i = p''_i - (\|p''\|_1 - \|r\|_1 + s)$
18:   $\bar{p} = p''$
19: end if
Output: $\bar{p}$.

For ROUND-POT, steps 3 through 7 check whether the solution X violates each of the two equality constraints $X 1 = r - \bar{p}$ and $X^\top 1 = c - \bar{q}$; if so, the algorithm projects X into the feasible set. It is noteworthy that these two constraints directly imply the last needed constraint $1^\top X 1 = s$. Finally, ROUND-POT returns an output that satisfies the required constraints in (2). The following Theorem 6 characterizes the error guarantee of the rounded output $\bar{x}$. Its detailed proof can be found in the Rounding Algorithm section in the Appendix.

Theorem 6 (Guarantees for ROUND-POT). Let A, x (consisting of X, p and q) and b be defined as in the Preliminaries. If x satisfies $x \ge 0$ and $\|Ax - b\|_1 \le \delta$ for some $\delta \ge 0$, Algorithm 1 outputs $\bar{x} \ge 0$ (consisting of $\bar{X}$, $\bar{p}$ and $\bar{q}$) in $O(n^2)$ time such that $A\bar{x} = b$ and $\|x - \bar{x}\|_1 \le 23\delta$.

Adaptive Primal-Dual Accelerated Gradient Descent (APDAGD)
Dual Formulation and Algorithmic Design
Following a formulation similar to (Dvurechensky, Gasnikov, and Kroshnin 2018, Section 3.1), we have the following primal problem with entropic regularization
$$\min_{x\ge 0}\ \{f(x) := \langle d, x\rangle + \gamma \langle x, \log x\rangle\} \quad \text{s.t. } Ax = b, \tag{5}$$
where $Ax = b$ is encoded as explained in Equation (4). Since problem (5) is a linearly constrained convex optimization problem, strong duality holds.

Lemma 7. With a dual variable $\lambda \in H^* = \mathbb{R}^{2n+1}$, the dual of (5) is given by
$$\min_{\lambda\in H^*}\ \Big\{\varphi(\lambda) := \langle \lambda, b\rangle + \max_{x\in Q}\big\{-f(x) - \langle x, A^\top \lambda\rangle\big\}\Big\},$$
or equivalently
$$\min_{y,z,t}\ \Big\{-ts - \langle y, r\rangle - \langle z, c\rangle - \gamma \sum_{i,j=1}^{n}\Big(e^{-(C_{i,j}+y_i+z_j+t)/\gamma - 1} + e^{-y_i/\gamma - 1} + e^{-z_j/\gamma - 1}\Big)\Big\}, \tag{6}$$
where y, z, t are dual variables corresponding to the POT constraints in (2), stacked as $\lambda = (y^\top, z^\top, t)^\top$ (which we simply refer to as (y, z, t) from now on).

Algorithm 3: Approximating POT by APDAGD
Input: marginals r, c; cost matrix C.
1: $\gamma = \varepsilon/(4\log n)$, $\tilde{\varepsilon} = \varepsilon/(8\|C\|_{\max})$
2: if $\|r\|_1 > 1$ then
3:   $\tilde{\varepsilon} = \min\{\tilde{\varepsilon}, 8(\|r\|_1 - s)/(\|r\|_1 - 1)\}$
4: end if
5: if $\|c\|_1 > 1$ then
6:   $\tilde{\varepsilon} = \min\{\tilde{\varepsilon}, 8(\|c\|_1 - s)/(\|c\|_1 - 1)\}$
7: end if
8: $\tilde{r} = (1 - \tilde{\varepsilon}/8)\, r + \tilde{\varepsilon} 1_n/(8n)$
9: $\tilde{c} = (1 - \tilde{\varepsilon}/8)\, c + \tilde{\varepsilon} 1_n/(8n)$
10: $\tilde{X} = \mathrm{APDAGD}(C, \gamma, \tilde{r}, \tilde{c}, \tilde{\varepsilon}/2)$
11: $\bar{X} = \text{ROUND-POT}(\tilde{X}, \tilde{r}, \tilde{c}, s)$
Output: $\bar{X}$.

More details on the dual formulation and the properties (strong convexity, smoothness, etc.) of the primal and dual objectives are in the Appendix. The APDAGD procedure is described in Algorithm 8 in the Appendix. To approximate POT, we incorporate our novel rounding algorithm with a procedure similar to (Lin, Ho, and Jordan 2019, Algorithm 2), shown in Algorithm 3.

Computational Complexity
Now, we provide the computational complexity of APDAGD (Theorem 8) and its proof sketch. The detailed proof of this result is presented in the Complexity of APDAGD for POT Detailed Proof subsection in the Appendix.

Theorem 8 (Complexity of APDAGD). The APDAGD algorithm returns an ε-approximate POT solution $\hat{X} \in \mathcal{U}(r, c, s)$ in $\tilde{O}(n^{5/2}\|C\|_{\max}/\varepsilon)$.

Proof sketch.
Step 1: we present the reparameterization $u = -y/\gamma - 1$, $v = -z/\gamma - 1$ and $w = -t/\gamma + 1$ for the dual (6), leading to the equivalent dual form
$$\min_{u,v,w}\ \sum_{i,j=1}^{n} \exp(-C_{i,j}/\gamma + u_i + v_j + w) + \sum_{i=1}^{n}\exp(u_i) + \sum_{j=1}^{n}\exp(v_j) - \langle u, r\rangle - \langle v, c\rangle - ws.$$
This transformation will facilitate the bounding of the dual variables in later steps.

Step 2: we proceed to bound the ℓ∞-norm of the transformed optimal dual variables $\|(u^*, v^*, w^*)\|_\infty$. Conventional analyses for OT such as (Lin, Ho, and Jordan 2019) are inapplicable to the case of POT due to the addition of the third dual variable w and the more intricate dependencies of the dual variables u, v. To this end, our novel proof technique establishes the tight bound $\|(u^*, v^*, w^*)\|_\infty = \tilde{O}(\|C\|_{\max})$, which consequently translates to the final bound for the original dual variables $\|(y^*, z^*, t^*)\|_2 = \tilde{O}(\sqrt{n}\|C\|_{\max})$ in Lemma 18. Bounding the ℓ2-norm (i.e., bounding $\bar{R}$) is crucial because it contributes to the APDAGD guarantees (Theorem 16) and the final complexity.

Step 3: combining the $\bar{R}$ bound from Step 2 in view of (Lin, Ho, and Jordan 2019, Proposition 4.10) and the guarantees of ROUND-POT (Theorem 6), we conclude the final computational complexity of APDAGD of $\tilde{O}(n^{2.5}/\varepsilon)$.

Dual Extrapolation (DE)
Our novel POT rounding algorithm permits the development of Dual Extrapolation (DE) for POT. From our analysis, DE is a first-order and parallelizable algorithm that can approximate the POT distance up to ε accuracy with $\tilde{O}(1/\varepsilon)$ parallel depth and $\tilde{O}(n^2/\varepsilon)$ total work.

Setup
For each feasible x, we have $\|x\|_1 = \|X\|_1 + \|p\|_1 + \|q\|_1 = \|r\|_1 + \|c\|_1 - s$. We can normalize $x \leftarrow x/(\|r\|_1 + \|c\|_1 - s)$ and $b \leftarrow b/(\|r\|_1 + \|c\|_1 - s)$. These imply $x \in \Delta_{n^2+2n}$. The POT problem formulation (4) is now updated as
$$\min_{x\in\Delta_{n^2+2n}}\ \langle d, x\rangle \quad \text{s.t. } Ax = b. \tag{7}$$
We then consider the ℓ1 penalization of problem (7) and show that it has the same optimal value and ε-approximate minimizer as the POT formulation (7) (more details in the ℓ1 Penalization subsection in the Appendix). From a primal-dual point of view, the ℓ1 penalized objective (26) can be rewritten as
$$\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}}\ F(x, y) := d^\top x + 23\|d\|_\infty\left(y^\top A x - y^\top b\right), \tag{8}$$
with $\mathcal{X} = \Delta_{n^2+2n}$ and $\mathcal{Y} = [-1, 1]^{2n+1}$. Note that the factor 23 comes from the guarantees of ROUND-POT (Theorem 6, Lemma 20). Let $\mathcal{Z} = \mathcal{X}\times\mathcal{Y}$ such that $x\in\mathcal{X}$ and $y\in\mathcal{Y}$. For a bilinear objective F(x, y) that is convex in x and concave in y, it is natural to define the gradient operator $g(x, y) = (\nabla_x F(x, y), -\nabla_y F(x, y))$. Specifically, for the objective (8), we have
$$g(x, y) = \left(d + 23\|d\|_\infty A^\top y,\ -23\|d\|_\infty (Ax - b)\right).$$
This minimax objective can be solved with dual extrapolation (Nesterov 2007), which requires strongly convex regularizers. This requirement can be relaxed with the notion of area-convexity (Sherman 2017, Definition 1.2), given in the following definition.

Definition 9 (Area Convexity). A regularizer r is κ-area-convex w.r.t. an operator g if for any $x_1, x_2, x_3$ in its domain,
$$\kappa \sum_{i=1}^{3} r(x_i) - 3\kappa\, r\!\left(\frac{\sum_{i=1}^{3} x_i}{3}\right) \ge \langle g(x_2) - g(x_1), x_2 - x_3\rangle.$$

The regularizer chosen for this framework is the Sherman regularizer, introduced in (Sherman 2017):
$$r(x, y) = 2\|d\|_\infty\left(10\langle x, \log x\rangle + x^\top A^\top (y^2)\right), \tag{9}$$
in which $y^2$ is taken entry-wise. While this regularizer has a similar form to that in (Jambulapati, Sidford, and Tian 2019), the POT formulation leads to a different structure of A. For instance, $\|A\|_1 = 3$ instead of 2. The following lemma shows that the chosen r is 9-area-convex (its proof is in the Proof of Lemma 10 section in the Appendix).

Lemma 10. The regularizer r in (9) is 9-area-convex with respect to the gradient operator g, i.e., κ = 9.
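Both the gradient operator g and the Sherman regularizer r of (9) reduce to a few matrix-vector products. The following NumPy sketch makes this explicit; the helper names are ours, and x is assumed to have strictly positive entries (the interior of the simplex) so that the entropy term is well-defined.

```python
import numpy as np

def saddle_gradient(d, A, b, x, y):
    # g(x, y) = (grad_x F, -grad_y F) for the bilinear objective (8).
    dinf = np.abs(d).max()
    gx = d + 23 * dinf * (A.T @ y)
    gy = -23 * dinf * (A @ x - b)
    return gx, gy

def sherman_regularizer(d, A, x, y):
    # r(x, y) in (9); the square y**2 is taken entry-wise.
    # Assumes x > 0 entry-wise so that x * log(x) is finite.
    dinf = np.abs(d).max()
    return 2 * dinf * (10 * np.sum(x * np.log(x)) + x @ (A.T @ (y ** 2)))
```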
Algorithm 4: Dual Extrapolation for POT
Input: linearized cost d; linear operator A; constraints b; area-convexity coefficient κ; initial states $s^0_x = 0_{n^2+2n}$, $s^0_y = 0_{2n+1}$; iterations M (Theorem 11)
1: $\nabla_x r(\bar{z}) = 20\|d\|_\infty (1 - \log(n^2 + 2n))\, 1_{n^2+2n}$
2: $\nabla_y r(\bar{z}) = 0_{2n+1}$
3: for $t = 0, 1, 2, \ldots, T - 1$ do
4:   $v = s^t_x - \nabla_x r(\bar{z})$
5:   $u = s^t_y - \nabla_y r(\bar{z})$
6:   $(z^t_x, z^t_y) = \mathrm{AM}(M, v, u)$
7:   $v = v + (d + 23\|d\|_\infty A^\top z^t_y)/\kappa$
8:   $u = u - 23\|d\|_\infty (A z^t_x - b)/\kappa$
9:   $(w^t_x, w^t_y) = \mathrm{AM}(M, v, u)$
10:  $s^{t+1}_x = s^t_x + (d + 23\|d\|_\infty A^\top w^t_y)/(2\kappa)$
11:  $s^{t+1}_y = s^t_y - 23\|d\|_\infty (A w^t_x - b)/(2\kappa)$
12: end for
Output: $\bar{w}_x = \sum_{t=0}^{T-1} w^t_x / T$, $\bar{w}_y = \sum_{t=0}^{T-1} w^t_y / T$.

Algorithmic Development
The main motivation is the general DE Algorithm 7 in the Appendix, proposed by (Nesterov 2007). This general DE framework essentially performs two proximal steps per iteration, while maintaining a state s in the dual space. We follow (Jambulapati, Sidford, and Tian 2019) and update s with step size $1/(2\kappa)$ rather than $1/\kappa$ (Nesterov 2007). The proximal steps (steps 2 and 3 in Algorithm 7) need to minimize
$$P(x, y) := \langle v, x\rangle + \langle u, y\rangle + r(x, y). \tag{10}$$
This can be solved efficiently with an Alternating Minimization (AM) approach (Jambulapati, Sidford, and Tian 2019). Details for AM are included in the Appendix. Combining both algorithms, we obtain the DE for POT Algorithm 4, where each proximal step is solved by the AM subroutine.

Computational Complexities
First, we bound the regularizer r to satisfy the convergence guarantees of (Jambulapati, Sidford, and Tian 2019) in the Proof of Lemma 22 subsection in the Appendix. In that same subsection, we also derive the required number of iterations T in DE with respect to Θ, the range of the regularizer. Next, we have the following essential lemma that bounds the number of iterations needed to evaluate a proximal step.

Theorem 11 (Complexity of AM). For $T = \lceil 36\Theta/\varepsilon\rceil$ iterations of DE, the AM Algorithm 8 obtains additive error ε/2 in
$$M = 24\log\left(840\|d\|_\infty/\varepsilon^2 + 6/\varepsilon\right)\left(\Theta + 1336\|d\|_\infty/9\right)$$
iterations. This is done in wall-clock time $O(n^2 \log \eta)$ with $\eta = \log n\, \|d\|_\infty/\varepsilon$.

The proof of this theorem can be found in the Proof of Theorem 11 subsection in the Appendix. The main proof idea is to bound the number of iterations required to solve the proximal steps. This explicit bound on the number of AM iterations is novel, as in DE for OT (Jambulapati, Sidford, and Tian 2019) the authors run a while loop and do not analyze its final number of iterations. We can now calculate the computational complexity of the DE algorithm. The proof is in the Proof of Theorem 12 subsection in the Appendix.

Theorem 12 (Complexity of DE). In $\tilde{O}(n^2\|C\|_{\max}/\varepsilon)$ wall-clock time, the DE Algorithm 4 returns $(\bar{w}_x, \bar{w}_y) \in \mathcal{Z}$ such that the duality gap (27) is less than ε.

Figure 2: Primal optimality gap for solutions produced by APDAGD and Dual Extrapolation.

Figure 3: Primal optimality gap against optimization rounds achieved by our revised Sinkhorn and APDAGD for POT. The marginal distributions are taken from the later color transfer application in this section, and the red horizontal line depicts the pre-defined tolerance (ε) for both algorithms.

Numerical Experiments
In this section, we provide numerical results on approximating POT and its applications using the algorithms presented above.¹ In all settings, the optimal solution is found by solving the linear program (1) using the cvxpy package.

¹ Implementation and numerical experiments can be found at https://github.com/joshnguyen99/partialot.
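For reference, a minimal cvxpy sketch of this exact LP baseline for (1) is given below; the variable and function names are our own.

```python
import cvxpy as cp
import numpy as np

def pot_lp(C, r, c, s):
    # Solve POT(r, c, s) in (1) exactly as a linear program.
    # C: (n, n) cost matrix; r, c: possibly unbalanced marginals;
    # s: total mass with 0 <= s <= min(||r||_1, ||c||_1).
    n = C.shape[0]
    X = cp.Variable((n, n), nonneg=True)
    constraints = [
        X @ np.ones(n) <= r,       # row marginals:    X 1_n <= r
        X.T @ np.ones(n) <= c,     # column marginals: X^T 1_n <= c
        cp.sum(X) == s,            # total transported mass: 1^T X 1 = s
    ]
    prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(C, X))), constraints)
    prob.solve()
    return X.value, prob.value
```

In the experiments below, the transported mass s is controlled through a fraction α; a natural choice (an assumption on our part) is $s = \alpha \min\{\|r\|_1, \|c\|_1\}$.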
Due to space constraints, we include additional experiments, such as a domain adaptation application, large-scale APDAGD, a run-time comparison for varying ε, and the revised Sinkhorn performance, in the Appendix, further solidifying the efficiency and practicality of the proposed algorithms.

Run Time Comparison
APDAGD vs. DE: We are able to implement the DE algorithm for POT, whereas the authors of DE for OT faced numerical overflow and had to use mirror prox instead "for numerical stability considerations" (Jambulapati, Sidford, and Tian 2019). For the setup of Figure 2, we use images in the CIFAR-10 dataset (Krizhevsky and Hinton 2009) as the marginals. More details are included in the Further Experiment Setup Details section in the Appendix. In Figure 2, despite having a better theoretical complexity, DE has relatively poor practical performance compared to APDAGD. This is not surprising, as previous works on this class of algorithms, such as (Dvinskikh and Tiapkin 2021), reach similar conclusions about DE's practical limitations. This can be partly explained by the large constants that are dismissed by the asymptotic computational complexity. Thus, in the applications in the following subsection, we use APDAGD.

APDAGD vs. Sinkhorn: For the same setting with the CIFAR-10 dataset, we report that the ratio between the average per-iteration cost of APDAGD and that of Sinkhorn is 0.68. Furthermore, in the Run Time for Varying ε section in the Appendix, we reproduce in Figure 6 the same result as Figure 1 of (Dvurechensky, Gasnikov, and Kroshnin 2018), comparing the runtime of APDAGD and Sinkhorn for varying ε.

Revised Sinkhorn
Using a setting similar to the later color transfer example, we empirically verify in Figure 3 that our revised Sinkhorn can achieve the required tolerance ε of the POT problem (as opposed to the pre-revised Sinkhorn in Figure 1).

Point Cloud Registration
We now present an application of POT to point set registration, a common task in shape analysis. We start with two point clouds in three dimensions, $R = \{x_i \in \mathbb{R}^3 \mid i = 1, \ldots, m\}$ and $Q = \{y_j \in \mathbb{R}^3 \mid j = 1, \ldots, n\}$. The objective is to find a transformation, consisting of a rotation and a translation, that best aligns the two point clouds. When the initial point clouds contain significant noise or missing data, the registration result is often badly aligned. Here, we consider a scenario where one set has missing values. In Figure 4 (a), the blue point cloud is set to contain the front half of the rabbit, retaining about 45% of the original points. A desirable transformation must align the first halves of the two rabbits correctly. Figure 4 compares the point cloud registration results of these methods. If T is simply the OT matrix, not subject to a total transported mass constraint, the blue cloud is clearly not well-aligned with the red cloud. This is because the points that the blue cloud is missing (i.e., the right half of the rabbit) are present in the red cloud. If the total mass transported is set to $s = \min\{m, n\}/\max\{m, n\}$, then we end up with a POT matrix T with all entries summing to s. In Figures 4 (c) and (d), the POT solution leads to a much better result than the OT solution: the left halves of the rabbit are closer together.
Importantly, the feasible solution obtained by APDAGD leads to an even better alignment than the pre-revised Sinkhorn, whose solution is infeasible.

Figure 4: Point cloud registration using POT. (a): Initial point sets with one set (in blue) missing 45% of the points. (b): Registration result obtained after transforming the blue point cloud using the OT plan by Sinkhorn. (c): Registration result using the POT plan by pre-revised Sinkhorn (Le et al. 2021). (d): Registration result using the POT plan by APDAGD.

Color Transfer
For color transfer, a popular application in computer vision, POT offers flexibility in transferring colors between two possibly different-sized images, in contrast to OT, which requires the two color histograms to be normalized (Bonneel et al. 2015). We follow the setup of (Blondel, Seguy, and Rolet 2018). Implementation details are in the Further Experiment Setup Details section in the Appendix.

Figure 5: Flexibility in color transfer using POT as opposed to OT. First row (left to right): source and target images (of different sizes), optimality gap at each iteration of Sinkhorn and APDAGD, and the image derived from the exact solution by Gurobi when α = 0.9. Second row: images derived from the solutions by Sinkhorn at four levels of α: 0.3, 0.5, 0.7 and 0.9. Third row: images derived from the solutions by APDAGD at the same four levels of α.

Our results are presented in Figure 5. The third row displays the results produced by APDAGD with different levels of α (or transported mass s). Increasing α makes the wall color closer to the lighter part of the kiwi in the target image but comes at the cost of saturating the white of the window frames. We emphasize that α is a tunable parameter, and the user can pick the most suitable transported mass. This is more flexible than vanilla OT, which requires marginal masses to be normalized. We also emphasize that, qualitatively, the solution produced by APDAGD closely matches the exact solution in the top right corner, in contrast to the pre-revised Sinkhorn's (Le et al. 2021).

Conclusion
In this paper, we first examine the infeasibility of Sinkhorn for POT. We then propose a novel rounding algorithm, which facilitates our development of a feasible Sinkhorn procedure with guarantees, APDAGD and DE. DE achieves the best theoretical complexity, while APDAGD and the revised Sinkhorn are practically efficient algorithms, as demonstrated in our extensive experiments. We believe the rigor and applicability of our proposed methods will further facilitate the practical adoption of POT in AI applications.

Acknowledgements
We would like to thank the reviewers of AAAI 2024 for their detailed and insightful comments and Dr. Darina Dvinskikh for sharing the numerical implementations of their work (Dvinskikh and Tiapkin 2021).

References
Altschuler, J.; Weed, J.; and Rigollet, P. 2017. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In Advances in Neural Information Processing Systems, 1964–1974.
Benamou, J.-D.; Carlier, G.; Cuturi, M.; Nenna, L.; and Peyré, G. 2015. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2): A1111–A1138.
Blondel, M.; Seguy, V.; and Rolet, A. 2018. Smooth and Sparse Optimal Transport. In AISTATS, 880–889.
Bonneel, N.; and Coeurjolly, D. 2019. SPOT: Sliced Partial Optimal Transport. ACM Transactions on Graphics.
Bonneel, N.; Rabin, J.; Peyré, G.; and Pfister, H. 2015. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51(1): 22–45. Caffarelli, L. A.; and McCann, R. J. 2010. Free boundaries in optimal transport and Monge-Ampère obstacle problems. Annals of Mathematics, 171(2): 673–730. Chapel, L.; Alaya, M. Z.; and Gasso, G. 2020. Partial Optimal Transport with applications on Positive-Unlabeled Learning. In Advances in Neural Information Processing Systems 33. Chizat, L.; Peyré, G.; Schmitzer, B.; and Vialard, F.-X. 2015. Unbalanced Optimal Transport: Dynamic and Kantorovich Formulations. Courty, N.; Flamary, R.; Tuia, D.; and Rakotomamonjy, A. 2017. Optimal Transport for Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(9): 1853–1865. Cuturi, M. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, 2292–2300. Dvinskikh, D.; and Tiapkin, D. 2021. Improved Complexity Bounds in Wasserstein Barycenter Problem. In Banerjee, A.; and Fukumizu, K., eds., Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, 1738–1746. PMLR. Dvurechensky, P.; Gasnikov, A.; and Kroshnin, A. 2018. Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn's Algorithm. In International Conference on Machine Learning, 1367–1376. Figalli, A. 2010. The optimal partial transport problem. Archive for Rational Mechanics and Analysis, 195(2): 533–560. Gramfort, A.; Peyré, G.; and Cuturi, M. 2015. Fast Optimal Transport Averaging of Neuroimaging Data. CoRR, abs/1503.08596. Guminov, S.; Dvurechensky, P.; Tupitsa, N.; and Gasnikov, A. 2021. Accelerated Alternating Minimization, Accelerated Sinkhorn's Algorithm and Accelerated Iterative Bregman Projections. arXiv:1906.03622. Jambulapati, A.; Sidford, A.; and Tian, K. 2019. A Direct Õ(1/ε) Iteration Parallel Algorithm for Optimal Transport. arXiv preprint arXiv:1906.00618. Kantorovich, L. V. 1942. On the translocation of masses. In Dokl. Akad. Nauk. USSR (NS), volume 37, 199–201. Kawano, K.; Koide, S.; and Otaki, K. 2021. Partial Wasserstein Covering. CoRR, abs/2106.00886. Krizhevsky, A.; and Hinton, G. 2009. Learning multiple layers of features from tiny images. Technical Report TR-2009. Le, K.; Nguyen, H.; Pham, T.; and Ho, N. 2021. On Multimarginal Partial Optimal Transport: Equivalent Forms and Computational Complexity. Lin, T.; Ho, N.; and Jordan, M. 2019. On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms. In International Conference on Machine Learning, 3982–3991. Liu, W.; Zhang, C.; Xie, J.; Shen, Z.; Qian, H.; and Zheng, N. 2020. Partial Gromov-Wasserstein Learning for Partial Graph Matching. CoRR, abs/2012.01252. Nesterov, Y. 2007. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming, 109(2-3): 319–344. Nguyen, K.; Nguyen, D.; Pham, T.; Ho, N.; et al. 2022. Improving mini-batch optimal transport via partial transportation. In International Conference on Machine Learning, 16656–16690. PMLR. Nietert, S.; Cummings, R.; and Goldfeld, Z. 2023. Robust Estimation under the Wasserstein Distance. arXiv preprint arXiv:2302.01237. Pele, O.; and Werman, M. 2008. A Linear Time Histogram Metric for Improved SIFT Matching. In European Conference on Computer Vision.
Qin, H.; Zhang, Y.; Liu, Z.; and Chen, B. 2022. Rigid Registration of Point Clouds Based on Partial Optimal Transport. In Computer Graphics Forum, volume 41, 365–378. Wiley Online Library. Rolet, A.; Cuturi, M.; and Peyré, G. 2016. Fast Dictionary Learning with a Smoothed Wasserstein Loss. In Gretton, A.; and Robert, C. C., eds., Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, 630–638. Cadiz, Spain: PMLR. Rubner, Y.; Tomasi, C.; and Guibas, L. J. 2000. The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2): 99–121. Sarlin, P.; DeTone, D.; Malisiewicz, T.; and Rabinovich, A. 2019. SuperGlue: Learning Feature Matching with Graph Neural Networks. CoRR, abs/1911.11763. Sherman, J. 2017. Area-convexity, ℓ∞ regularization, and undirected multicommodity flow. In STOC, 452–460. ACM. Villani, C. 2008. Optimal Transport: Old and New. Springer. Wang, Z.; Xue, N.; Lei, L.; and Xia, G.-S. 2022. Partial Wasserstein Adversarial Network for Non-rigid Point Set Registration. In International Conference on Learning Representations.
An Eager Satisfiability Modulo Theories Solver for Algebraic Datatypes Amar Shah, Federico Mora, Sanjit A. Seshia University of California, Berkeley [email protected], [email protected], [email protected] Abstract Algebraic data types (ADTs) are a construct classically found in functional programming languages that capture data structures like enumerated types, lists, and trees. In recent years, interest in ADTs has increased. For example, popular programming languages, like Python, have added support for ADTs. Automated reasoning about ADTs can be done using satisfiability modulo theories (SMT) solving, an extension of the Boolean satisfiability problem with first-order logic and associated background theories. Unfortunately, SMT solvers that support ADTs do not scale, as state-of-the-art approaches all use variations of the same lazy approach. In this paper, we present an SMT solver that takes a fundamentally different approach, an eager approach. Specifically, our solver reduces ADT queries to a simpler logical theory, uninterpreted functions (UF), and then uses an existing solver on the reduced query. We prove the soundness and completeness of our approach and demonstrate that it outperforms the state of the art on existing benchmarks, as well as a new, more challenging benchmark set from the planning domain. 1 Introduction Boolean satisfiability (SAT) solvers have been shown to efficiently solve a number of NP-hard problems in areas such as AI planning (Kautz, Selman et al. 1992), verification (Clarke et al. 2001), and software testing (Cadar et al. 2008). Satisfiability modulo theories (SMT) solvers are a natural extension to SAT solvers that can reason about first-order structures with background theories (Barrett et al. 2021), allowing them to tackle more general problems or to accept more succinct inputs. For example, SMT solvers can reason about bit-vectors (Brummayer and Biere 2009), floating-point numbers (Rümmer and Wahl 2010), strings (Bjørner et al. 2012), and algebraic data types (ADTs) (Barrett, Fontaine, and Tinelli 2017). The power behind ADTs lies in how they can succinctly express complex structures at a high level of abstraction while avoiding common programming pitfalls, like null pointer dereferencing (Hoare 1975). For most of their history, ADTs lived exclusively inside functional programming languages, like NPL (Burstall 1977), Standard ML (Milner 1997), and Haskell (Hudak et al. 2007). Recently, however, the interest in ADTs has exploded, with a number of mainstream languages being released with support for ADTs, e.g., Rust (Jung et al. 2021), or having added support, e.g., Python (Salgado 2023) and Java (Goetz 2022). Automated reasoning about ADTs is important because this construct appears in many different software applications. As the popularity of ADTs grows, the demand for efficient SMT solvers will continue to increase. Unfortunately, the state-of-the-art tools in this space are already struggling to keep up. We demonstrate this empirically by generating a new benchmark set and showing that existing solvers, working together, are only able to solve 56.2% of the new queries in under 20 minutes per query (Sec. 5.2). This imbalance between programming languages and SMT solvers is due to a gap in the SMT solving literature.
Oppen (1980) was the first to give a decision procedure for the quantifier-free theory, but ADTs do not seem to have permeated the community much further. In 2003, a concerted effort to unify the SMT community began with the first official input standard and competition, called SMT-LIB and SMT-COMP (Barrett et al. 2011), respectively. ADTs were not officially integrated into the standard until 2017, as part of version 2.6 (Barrett, Fontaine, and Tinelli 2017). In the most recent iteration of SMT-COMP, only two solvers participated in the ADT track, the fewest of any track, and both solvers use a variation of the same solving approach: a lazy SMT architecture combined with theory-specific reasoning based on the work by Oppen from 1980 (see Sec. 6). We propose a new solving technique that departs from the standard approach in the community. Instead of a lazy approach, we take an eager approach (Barrett et al. 2021) that translates the original SMT formula into an equi-satisfiable formula without ADT elements. Our work fills the gap in the literature on SMT solving for ADT queries, and, by doing so, solves more queries than existing solvers (see Sec. 5.1). More importantly, we make the largest empirical contribution to the solving community on SMT-COMP benchmarks, solving different queries than existing tools (see Sec. 5.2). 1.1 Overview and Contributions The rest of this paper is organized as follows. In Sec. 2 we describe ADTs, satisfiability, and our approach through an example planning problem called blocks world. In Sec. 3 we formally define ADTs and give the necessary background on first-order logic and model theory to understand our approach. In Sec. 4 we describe our eager reduction from ADT queries to queries containing only uninterpreted functions (UF). Sec. 4 includes a proof of soundness and completeness along with a complexity analysis. In Sec. 5 we describe a prototype implementation of our approach, called Algaroba, and we evaluate it over two research questions. We find that Algaroba outperforms the state of the art overall and in terms of contribution to the solving community. Sec. 5 also describes a new benchmark set consisting of blocks world queries. This set addresses an important gap in the existing literature: the queries in this set contain all important kinds of ADTs, but are not easy to solve. We survey related work in Sec. 6 and then conclude in Sec. 7 with future work. Overall, we make the following contributions. 1. We define a notion of query depth and a finite, quantifier-free reduction from ADT queries to UF queries that uses depths. This is a new eager solving approach. 2. We prove the soundness and completeness of our approach and show that it generates at most a finite number of assertions. 3. We generate a new benchmark set that contains all important kinds of ADTs and is not trivial to solve. Existing benchmarks do not enjoy both these properties. 4. We implement our reduction approach in a prototype tool called Algaroba and we compare its performance to the state of the art. We find that Algaroba outperforms existing tools and that it makes the largest empirical contribution to the community of solvers.
Figure 1: Solution (1b, 1c, and 1d) to a simple blocks world puzzle. 1a is the initial configuration; 1e is the target configuration.
2 Illustrative Example and New Benchmark To better understand ADTs, satisfiability queries, our approach, and the benchmark set that we generate in Sec. 5, consider the classic blocks world problem first proposed by Winograd (1971). We use a version of the problem that Sussman (1973) used to illustrate the Sussman anomaly (Russell 2010) and that Gupta and Nau (1992) used to show that the associated decision problem is NP-Hard. In the simplified version of the blocks world problem, there is a table with (possibly empty) stacks of blocks on it (an initial configuration), a desired way that the stacks should be arranged (a target configuration), and three rules: 1. blocks can only be taken from the top of a stack; 2. blocks can only be placed on the top of a stack; and 3. only one block can be moved at a time. The general problem is to find a sequence of legal moves that leads from the initial configuration to the target configuration. The associated decision problem is to determine if there is such a sequence of length less than some input k. Fig. 1 shows an example blocks world solution. The initial configuration is given in Fig. 1a, the target configuration in Fig. 1e, and Figs. 1b, 1c, and 1d show a sequence of three legal moves that solve the problem, where faded blocks denote the previous position of the block moved at that step. The blocks world problem is a useful illustrative example for an ADT solver because the encoding uses three important kinds of ADTs: sum, product, and inductive types. The following OCaml code gives the required type definitions for the example in Fig. 1.
1 type block = A | B
2 type tower =
3 | Empty
4 | Stack of {top: block; rest: tower}
5 type config =
6 | Table of {l: tower; c: tower; r: tower}
Specifically, this code defines an enumerated type for blocks (block at line 1), a record type for table configurations (config at line 5), and an inductive type for stacks (tower at line 2). Variables of an enumerated type can take on any of the values listed in the type definition. For example, variables of type block can take on the values A or B. Variables of a record type can take on any combination of values of the type arguments listed in the type definition. For example, variables of type config can take on a triple of any three tower values. Enumerated types are the simplest form of a sum type, while records are the simplest form of a product type. ADTs allow for definitions that are both sum and product types. For example, variables of type tower can either be Empty or they can be a Stack, but not both (sum). When these variables take on a Stack value, they are a pair of block and tower values (product). Notice that the definition of tower depends on itself. This makes tower an inductive type as well. The blocks world problem is a useful illustrative example for satisfiability queries for two reasons. First, satisfiability-based solutions for similar planning problems have been around for decades (Kautz and Selman 1996). Second, encoding the problem as a satisfiability problem is simple when using bounded model checking (Biere et al. 1999) for bounded-horizon planning (e.g., see Rintanen (2003)). Specifically, the bounded model checking-based encoding is given by a transition system and a specification. The transition system starts at the initial configuration and, at each step, makes a non-deterministic choice of which legal move to make.
The specification is that the transition system never reaches the target configuration. Given this transition system, specification, and a bound k, bounded model checking will generate a satisfiability query whose solutions are assignments to the non-deterministic choices—concrete choices that get us to the target configuration from the initial configuration in less than k steps. SMT solvers, like our new solver, generate these solutions or prove that none exist. We use this encoding in Sec. 5 to generate a new benchmark set. The blocks world problem also gives a useful intuition for our approach. While variables of inductive data types, like tower, can take on arbitrarily large values (e.g., a stack of a million A blocks), there is a bound on the size of relevant values. For the blocks world problem in Fig. 1, it would never make sense to have a tower value of size greater than two. Such a bound exists for all quantifier-free ADT queries; the problem is that automatically inferring the bound and using it to speed up solving is non-trivial. In this paper, we give an automated procedure for computing an over-approximation of this bound, and we then use this over-approximation to replace ADT functions with uninterpreted functions along with quantifier-free axioms. 3 Background We assume a basic understanding of first-order logic. For a complete introduction, we refer the reader to Lubarsky (2008). Many-sorted first-order logic is like first-order logic but with the universe partitioned into different sorts (Barrett et al. 2021). We use many-sorted first-order logic and assume all terms are well-sorted, i.e., that we never apply a function to a term of the incorrect sort. In practice, standard type checking algorithms will catch these issues.
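To ground the encoding discussion, the following Python snippet declares the block and tower types as solver-level ADTs using Z3's Python bindings (z3-solver on PyPI) and poses a small satisfiability query. This is our own illustration of how an ADT-aware solver consumes such types, not a fragment of Algaroba:

from z3 import Datatype, EnumSort, Solver, Const

Block, (A, B) = EnumSort('Block', ['A', 'B'])
Tower = Datatype('Tower')
Tower.declare('Empty')
Tower.declare('Stack', ('top', Block), ('rest', Tower))
Tower = Tower.create()

t = Const('t', Tower)
s = Solver()
# Ask for a tower of height exactly two whose top block is A.
s.add(Tower.is_Stack(t))
s.add(Tower.top(t) == A)
s.add(Tower.is_Stack(Tower.rest(t)))
s.add(Tower.rest(Tower.rest(t)) == Tower.Empty)
print(s.check())      # sat
print(s.model()[t])   # a witness such as Stack(A, Stack(B, Empty))

A bounded-horizon planning query would, roughly, declare one configuration-typed constant per time step and constrain consecutive steps by the legal-move relation.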
Eager SMT solvers perform a satisfiability-preserving reduction to SAT in a single phase (e.g., see Seshia (2005)), whereas lazy solvers perform an iterative encoding, on demand. A theory literal is a logical formula with no conjunctions (∧) or disjunctions (∨). These are the base units of SMT solving and are the equivalent of literals in SAT solving. Our approach will be easier to understand when queries are in negation normal form (NNF) and flat. A formula is in NNF if only theory literals are negated. It is flat if all theory literals are of the form ¬(x1 = x2), x1 = x2, or x1 = g(x2, ..., xn), where the xi are variables. We will transform ADT queries into UF queries through a theory reduction. Definition 3.1 (Theory Reduction). A theory T reduces to a theory R if there is a computable function m such that (ψ, T) is sat ↔ (m(ψ), R) is sat. 3.1 Theory of Algebraic Data Types We denote the theory of algebraic data types as ADT. It contains the full theory of UF and additional structure given by: Definition 3.2 (ADT (Barrett, Fontaine, and Tinelli 2017)). (1) An ADT A with sort σ is a tuple consisting of: • A finite set of constructor functions AC, where we say a function f : σ1 × ... × σl → σ has sort σ and arity l • A finite set of selectors AS, such that there are m selectors f 1, ..., f m with f i : σ → σi for each constructor f ∈ AC with arity m • A finite set of testers AT and a bijection p : AC → AT which sends f ↦ is-f, where is-f : σ → {True, False} (2) Every ADT A satisfies the axioms given in Fig. 2, where for two terms s and t, if s can be obtained by applying a sequence of l selectors to t, then we say s is an lth descendant of t and t is the lth ancestor of s.
• ∀⃗s: is-f(f(⃗s)) = True
• ∀⃗r: is-f(g(⃗r)) = False for constructors g ≠ f
• ∀⃗s: f i(f(⃗s)) = ⃗si for every selector f i of f
• ∀t: is-f(t) → ∃⃗s f(⃗s) = t
• ∀t ∀s: if s is a descendant of t, then s ≠ t
Figure 2: Axioms for corresponding constructors f, testers is-f, and selectors f i. The last axiom is acyclicality.
An ADT term is any expression of an ADT sort. The set of normal ADT terms is the smallest set containing (1) constants (0-ary constructors), and (2) constructors applied to only normal ADT terms. It is useful to think of normal terms as trees: constants are leaves, and we can build larger trees by applying constructors to normal terms. As an example, the tower definition from earlier uses two constructors: Empty and Stack. These are the two possible ways to build a tower. Empty is a function that takes no inputs and outputs a tower. Stack is a function that takes a block and a tower and outputs a tower. Each constructor has a corresponding set of selectors. Empty has no selectors, and so it is a normal term, but Stack has two selectors, top and rest. In OCaml, we apply selectors using dot notation, e.g., x.top. Selectors can be thought of as de-constructors—we use them to get back the terms used to construct a tower. The tower definition implicitly defines two testers, is-Empty and is-Stack. These predicates take a tower and return true iff the argument was built using the matching constructor. 4 Eager Reduction of ADT to UF We propose a new SMT solver for the ADT theory.
Algorithm 1: Reduce(ψ)
ψ1 ← NNF(ψ)
ψ2 ← Flatten(ψ1)
k ← Number of ADT variables in ψ2
ψ3 ← Apply Fig. 4a rewrite rules A & B to ψ2
ϕ1, ..., ϕm ← Apply Fig. 4b axioms 1, 2, & 3 using k to ψ3
ψ∗ ← ψ3 ∧ ϕ1 ∧ ... ∧ ϕm
return UF-SMT-Solver(ψ∗)
For a quantifier-free input formula ψ, our solver generates a quantifier-free formula ψ∗ in UF and then calls an existing UF solver to get a final result. We cannot compute a quantifier-free reduction directly, since ADT axioms have universal quantifiers. Instead, we only instantiate the ADT axioms over terms that are relevant to the input query. When the universe of the input query is finite (e.g., it only contains enums), we instantiate the entire universe (see Sec. 4.1). Otherwise, we follow the procedure in Alg. 1. This procedure transforms the input query to NNF, flattens the result, and then applies the rewrite rules in Fig. 4a and adds the axioms in Fig. 4b. The depth of a query is the number of variables in the flattened NNF version of the query. In Alg. 1, the depth is k. The depth is linear in the size of the input query because the NNF and flattening transformations introduce at most a linear number of variables. Rules A and B from Fig. 4a correspond to rewriting constructor and selector applications, respectively, so that they work well with other constructors, selectors, and testers. Rule B contains existential quantifiers, but these are handled through Skolemization (replacing existentially bound variables by free variables). Axioms 1 and 2 from Fig. 4b ensure that exactly one tester returns true for each term, and that this tester corresponds to a constant iff the term is a constant. Axiom 3 encodes the ADT acyclicality constraint. To better understand Axiom 3 and acyclicality, consider the following example query. Let x and y be of type tower defined in Sec. 2. The query (is-Stack x) && (is-Stack y) && (y = x.rest) && (x = y.rest) is unsat because any satisfying assignment would need to violate acyclicality. Fig. 3 illustrates this: there is a circular dependency between x and y.
Figure 3: Visual representation of an unsat query.
Therefore, to avoid spurious models, we must encode the acyclicality property in our reduction. The challenge is that we need to capture this seemingly infinite property using only a finite number of quantifier-free formulas. Our key result is that we only need to enforce acyclicality for all l < kth descendants, where k is the number of ADT variables in the flattened query. To see the intuition behind this, consider the following generalization of the previous example. Let x1, ..., xk be of type tower. The flat query (is-Stack x1) && ... && (is-Stack xk) && (x2 = x1.rest) && (x3 = x2.rest) && ... && (xk = xk−1.rest) && (x1 = xk.rest) asserts that there is a cycle of size k. Since our reduction asserts acyclicality for all l < kth descendants, we correctly return unsat on this query. Furthermore, it is impossible to have a query with a cycle of size more than k using k or fewer variables (in the flat query), so our encoding is sufficient (see Sec. 4.3 for a full proof). 4.1 Finite Universe Instantiation If we recognize an ADT has a finite universe, we create constants for every term in the universe and instantiate the axioms over the entire, finite universe. This is a source of double exponential blowup, but ADTs with finite universes are rare and often small enough to prevent noticeable blowup. 4.2 Complexity Analysis In the finite universe case, we can have a doubly exponential blowup.
One adversarial case is an ADT that is records of records of enums:
type enum = A | B
type rec1 = j of {l: enum; r: enum}
type rec2 = k of {m: rec1; s: rec1}
Here, enum has a universe of size two, rec1 has a universe of size four, and rec2 has a universe of size 16. This gives a double exponential blowup, since we are creating variables to represent every normal term of every datatype. In the infinite universe case, we have at worst an exponential blowup in the size of the query. We know the depth k is at most linear in the size of the query; however, for a term x of the tree type definition below, the number of sequences of selector applications of length up to k is 2^(k+1) − 2, thus giving us an exponential blowup in the number of terms.
type tree =
| Leaf
| Node of {left: tree; right: tree}
A. f(⃗s) = t ⟹ f(⃗s) = t ∧ is-f(t) ∧ ⋀_{i=1}^{m} f i(t) = ⃗si
B. f j(t) = tj ⟹ f j(t) = tj ∧ [is-f(t) → [∃⃗s [f(⃗s) = t ∧ ⋀_{i=1}^{m} f i(t) = ⃗si]]]
(a) Rewrites
1. Add ⋁_{i=1}^{|AT|} [is-fi(t) ∧ ⋀_{j=1, j≠i}^{|AT|} ¬is-fj(t)]
2. For any constant constructor c, add is-c(t) ↔ c = t
3. If s is the lth descendant of t and l < k, add s ≠ t
(b) Axioms
Figure 4: Rewrite rules (a) and additional axioms (b) used in Alg. 1.
4.3 Proof of Correctness In this proof, when we refer to a rule or axiom, we mean those from Fig. 4a and Fig. 4b, respectively. We assume ψ is flattened and in NNF (i.e., it is ψ2 in Alg. 1). For simplicity, we also assume that ψ does not contain any uninterpreted functions or sorts, but it is not difficult to see how to extend the proof to include them. Theorem 4.1. Alg. 1 shows that ADT reduces to UF. Specifically, (ψ, ADT) is sat ↔ (ψ∗, UF) is sat. Proof. →: If (ψ, ADT) is sat, then there is a model M |= ψ ∪ ADT. ψ∗ is ψ modified according to rules A and B and axioms 1, 2, and 3. Each of these rules and axioms is consistent with the axioms of ADT in Definition 3.2, and thus M |= ADT ∪ ψ∗. Since every model in ADT is a model in UF, M |= UF ∪ ψ∗. Thus, (ψ∗, UF) is sat. ←: If (ψ∗, UF) is sat, then we know that there is some model N |= UF ∪ ψ∗. We will assume that ψ∗ is a conjunction of theory literals. This is permissible since the modification from ψ to ψ∗ involves either replacing a theory literal with a conjunction of theory literals or adding conjunctions of theory literals to the end of the formula. Thus, the satisfying assignment to the propositional structure of ψ∗ will be a superset of a satisfying assignment to the propositional structure of ψ. We want to modify N to create a model M |= ADT ∪ ψ. As we describe in Section 4.1, if the universe is finite, we manually instantiate all of the ADT normal terms in the query. Thus, in the finite universe case, it must be that ADT |= ψ. We now assume the ADT universe is infinite. Since we want M |= ADT, we set the universe of M to be all of the ADT normal terms. Consider the set of variables that appear in ψ∗, which we call V = {x1, ..., xk}. We describe an algorithm that, for all x ∈ V, sets M[x] to some ADT normal term, such that we ultimately get M |= ψ. V is the set of variables in ψ∗; thus, for each x ∈ V, by axiom 1 there is exactly one tester is-f such that N |= is-f(x). There are two “base cases” for our construction of M. First, if f is some constant constructor, by axiom 2 of the reduction, we know that N[x] = N[f], so we set M[x] ≜ M[f].
Second, if x is some variable that is never set equal to some constructor application or selected from (either directly or transitively), then we set M[x] to an ADT normal term. Since our ADT universe is infinite, we will specifically pick an ADT normal term t such that it takes at least k + 1 selector applications to get to any of the ADT normal terms that we have already set. This will prevent any of our different ADT normal term assignments from interfering with each other—they are too far away in the infinite universe. If we are not in one of these base cases, we know that f is an m-ary constructor for some m > 0. Since x was either constructed or selected, there are variables y1, ..., ym in V such that N |= ⋀_{i=1}^{m} f i(x) = yi. Note that these variables are from the original query if x is equal to a constructor application, or from Skolemization if x is selected from. Continuing our construction of M, we recurse on those yi that have not already been assigned in M. We will eventually hit a base case, since there are a finite number of selector/constructor applications in our original query. For each i, we set M[f i](M[x]) ≜ M[yi]. Finally, we set M[x] ≜ M[f](M[y1], ..., M[ym]). If it were possible to have ψ |= f(y1, ..., ym) ≠ x (1) hold, then we would have M ⊭ ψ, and our current proof attempt would not go through. However, we will show that this is never the case. Note that since N |= ψ∗, if ψ∗ asserts anything about selector applications, these selector applications must be consistent with N. Also, ψ∗ must assert something about selector applications, since we know that x is either equal to a constructor application or is selected from in the query. Thus, ψ∗ |= ⋀_{i=1}^{m} f i(x) = yi, meaning that ψ∗ asserts the correct selector behavior. We now use this to guarantee the correct constructor behavior. There are two ways that incorrect constructor behavior could occur in ψ∗: 1. If ψ |= f(y1, ..., ym) = x, which contradicts Equation (1). 2. If ψ |= ⋀_{i=1}^{m} yi = f i(x), but then by rule B, we would still have ψ∗ |= f(y1, ..., ym) = x, which also contradicts Equation (1) since ψ∗ |= ψ. We iterate this construction of M for each variable in V for at most k rounds, since there are k total variables in V. Thus, since ψ∗ has the acyclicality Axiom 3 instantiated up to depth k, we do not create any cycles in M. We can also see that M |= ψ, since each theory literal ψi in ψ = ⋀_{i=1}^{p} ψi will be an equality x = y, a disequality x ≠ y, a selector application f j(y) = x, a tester application is-f(x), or a constructor application f(x1, ..., xm) = y. If it is any of these, then M |= ψi by how we defined M. Note that if it was a constructor application, then by rule A, ψ∗ would have the respective selector applications, and thus our construction of M would satisfy ψi. Thus, M |= ADT ∪ ψ, and so (ψ, ADT) is sat. 5 Empirical Evaluation In this section, we empirically compare the performance of our approach to state-of-the-art solvers. Specifically, we aim to answer the following research questions. RQ1 How does the overall performance of our approach compare to the state of the art? RQ2 How complementary is the performance of our approach to that of existing solvers?
Figure 5: Number of queries solved (y) in less than x seconds for the Bouvier and blocks world benchmark sets using a 1200s timeout. Higher (more queries solved) and further-left (in less time) points are better. The legend lists the contribution rank and percentage of queries solved for each solver: (a) Bouvier benchmark set (400 queries): 1. Algaroba (45.75% solved), 2. Z3 (37.25%), 3. Princess (15.75%), 4. cvc5 (37.25%); (b) Blocks world benchmark set (500 queries): 1. Algaroba (61.4% solved), 2. Z3 (56.2%), 3. cvc5 (34.8%), 4. Princess (16.8%). Algaroba solves the most queries and achieves the highest contribution rank for both sets.
We implement a prototype of our approach, called Algaroba,1 in approximately 2900 lines of OCaml code. (1Available at https://github.com/uclid-org/algaroba/tree/aaai24.) We use the Z3 API as the default UF back-end solver, but we allow for any UF solver to be used instead. Algaroba takes inputs in the SMT-LIB language and includes a number of simple optimizations, like hash-consing (Ershov 1958), incremental solving, and theory-specific query simplifications. All experiments are conducted on an Ubuntu workstation with nine Intel(R) Core(TM) i9-9900X CPUs running at 3.50 GHz and with 62 GB of RAM. All solvers were given a 1200 second timeout on each query to be consistent with SMT-COMP. The state-of-the-art solvers in this space are cvc5 (we use version 1.0.6-dev.214.97a64fc16) and Z3 (we use version 4.12.2). We also include Princess (latest release as of 2023-06-19) in our evaluation since it is the most related approach. We describe all three solvers in Sec. 6. Our evaluation covers two existing benchmark sets from SMT-COMP, one originally from Bouvier (2021) and one originally from Barrett, Shikanian, and Tinelli (2007) (BST for short). These two benchmark sets are useful but limited: every solver succeeds on every BST query, so it is difficult to draw performance conclusions; Bouvier queries are more challenging but only contain sum types. To address these limitations, we introduce a new benchmark set consisting of randomly generated blocks world queries.1 Blocks world queries, which we describe in Sec. 2, are more challenging to solve than those in BST and, unlike those from Bouvier, contain sum, product, and inductive types. To generate blocks world queries, we use the same table configuration as in Sec. 2 (three places for towers), but we randomly select a set of blocks (ranging from two to 26) and randomly generate an initial and a target configuration (two sets of three random block towers). We call these three random samples a blocks world setup. For each blocks world setup, we randomly sample a set of step numbers (ranging from one to two times the number of blocks) and generate a blocks world query for each step number. This process resulted in 500 individual queries that each ask “can we get from this initial configuration to this target configuration in exactly this number of steps?” 5.1 RQ1: Overall Performance To answer our first research question, we time the execution of Algaroba, cvc5, Princess, and Z3 on all queries in all three benchmark sets. When more than one solver terminates on a given query, we compare the results to check for disagreements. There was not a single case where one solver returned sat and another returned unsat; therefore, we focus the remainder of our evaluation on execution times.
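As an aside before the results: the random blocks world setup generation described above can be sketched in a few lines of Python. This is our reconstruction for illustration (helper names are ours); the released generator may differ:

import random

def random_blocks_world_setup():
    # One "blocks world setup": a random block set (2 to 26 blocks), random
    # initial and target configurations of three towers, and one random step
    # number in [1, 2 * num_blocks] for the corresponding query.
    num_blocks = random.randint(2, 26)
    blocks = list(range(num_blocks))

    def random_config():
        random.shuffle(blocks)
        # Split the shuffled blocks into three (possibly empty) towers.
        i, j = sorted(random.randint(0, num_blocks) for _ in range(2))
        return [blocks[:i], blocks[i:j], blocks[j:]]

    initial, target = random_config(), random_config()
    steps = random.randint(1, 2 * num_blocks)
    return initial, target, steps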
For the BST benchmark set, which consists of 8000 queries, every solver successfully terminates on every query within the timeout. cvc5 performs the best on average, with an average solve time of 0.05 seconds (compared to 0.08 seconds for Algaroba and 0.10 seconds for Z3). Z3 performed the most consistently, with a standard deviation of 0.05 seconds (compared to 0.10 seconds for Algaroba and 0.15 seconds for cvc5). Given the magnitude of these values, we conclude that the performance differences between Algaroba, cvc5, and Z3 are negligible on this set. Princess is the slowest (2.20 seconds on average) and least consistent (standard deviation of 1.69 seconds) but still effective. Results are more interesting for the remaining benchmark sets. Fig. 5a shows the execution times for every solver on every query in the Bouvier benchmark set (excluding timeouts). No solver succeeds on more than half the queries in the set, but Algaroba clearly outperforms the rest. In terms of number of queries solved, Algaroba succeeds on 8.5 percentage points more queries than the next best (45.75% versus cvc5 and Z3's 37.25%). In terms of average run time on successful queries, Algaroba is 2.30 times faster than the next best (190.11 seconds versus Z3's 437.18 seconds). In terms of standard deviation on successful queries, Algaroba is 1.30 times more consistent than the next best (267.13 seconds versus cvc5's 346.83 seconds). Fig. 5b shows the execution times for every solver on every query in the blocks world benchmark set (excluding timeouts). Again, Algaroba outperforms the state-of-the-art solvers. Algaroba solves 5.2 percentage points more queries than the next best (61.4% versus Z3's 56.2%), but the average and standard deviation results are more complicated. Princess is the fastest and most consistent solver on solved queries (31.89 seconds and 74.77 seconds, respectively), but it succeeds on 44.6 percentage points fewer queries than Algaroba. Compared with Z3, which solves the second most queries, Algaroba is 2.00 times faster (87.34 seconds on average compared to Z3's 175.05 seconds) and 1.51 times more consistent in terms of standard deviation (198.67 seconds versus Z3's 300.15 seconds). Across both interesting benchmarks, Algaroba solves the most sat queries (167 versus cvc5's 70) and the second most unsat queries (322 versus Z3's 406). The average (median) increase in query size from our reduction was 259x (146x). Given the overall success of Algaroba on both interesting benchmark sets, we answer RQ1 by concluding that our performance compares favorably to the state of the art. 5.2 RQ2: Contribution Rank Measuring overall performance is useful, but it does not give an accurate perspective on how the community uses these tools. When faced with an SMT query, practitioners are likely to use multiple different solvers. This could be in parallel, as in (Rungta 2022), or through algorithm selection, as in (Pimpalkhare et al. 2021). Contribution ranks capture this practical perspective by evaluating solvers in terms of how complementary they are to other solvers. A higher rank means a higher contribution to the community of solvers. To evaluate how complementary our approach is to existing solvers, we use SMT-COMP's contribution ranking. This ranking uses the notion of a virtual best solver, which is defined as vb(q, S) ≜ s(q), where S is a set of solvers and s is the solver in S that terminates most quickly on q.
Informally, the ranking answers, “which solver can I remove from the virtual best solver to hurt performance the most?” In terms of number of queries solved (the primary SMT-COMP metric), there is a four-way tie on the BST benchmark set—all solvers solve all queries. For both other benchmark sets, Algaroba is ranked highest. For blocks world, the virtual best solver without Algaroba succeeds on 56.2% of the queries, less than Algaroba on its own (61.4%). With Algaroba, the virtual best solver succeeds on 62.2%. For Bouvier, without Algaroba, the virtual best solver succeeds on 64.25% of the queries. With Algaroba, this number rises to 83.75%. These positive results are in part because Algaroba solves the most queries, but are mainly due to the uniqueness of our approach. cvc5 and Z3 use a similar underlying algorithm, so removing one does not affect the performance of the virtual best solver. On the other hand, while Princess is the most similar approach to our own, their reduction is different enough not to interfere with our ranking. In short, we solve many queries that no other solver can (108/900). Given the winning contribution rank of Algaroba on both interesting benchmark sets, we answer RQ2 by concluding that our performance is complementary to existing solvers—we solve many benchmarks that no other solver can. 6 Related Work Most solvers for quantifier-free ADT queries use a lazy SMT architecture, i.e., they use a theory-specific solver to handle the data types and a core solver to handle the logical formula (Sebastiani 2007). A common theory solver will use a combination of congruence closure, syntactic unification, and acyclicality checks (Barrett, Shikanian, and Tinelli 2007; Oppen 1980; Reynolds and Blanchette 2017; Reynolds et al. 2018). This is the case for popular SMT solvers like cvc5 (Barbosa et al. 2022), SMTInterpol (Christ, Hoenicke, and Nutz 2012), and Z3 (de Moura and Bjørner 2008). cvc5 and SMTInterpol were the only two participants in the most recent SMT-COMP for quantifier-free ADT queries. We differ in that we take an eager approach. Princess (Hojjat and Rümmer 2017) also takes an eager approach. However, Princess reduces queries to UF and Linear Integer Arithmetic (LIA). LIA makes keeping track of the depth of ADT terms easy, but their reduction results in queries that are more difficult to solve (see Sec. 5). The scope of our work is quantifier-free ADT queries. However, there is existing related work that deals with quantifiers. De Angelis et al. (2020) and Kostyukov, Mordvinov, and Fedyukovich (2021) provide approaches to solving ADT Constrained Horn Clauses (CHCs). Other approaches (Suter, Dotta, and Kuncak 2010; Pham and Whalen 2014) support restricted forms of recursive functions (called catamorphisms) by partially evaluating these functions. Kovács, Robillard, and Voronkov (2017) provide two decision procedures for quantified ADTs. 7 Conclusions As the popularity of ADTs continues to grow, the demand for efficient SMT solvers that can handle ADTs will increase. Unfortunately, there are few existing solvers in this space, and the performance of these solvers can be improved. We introduced a reduction from quantifier-free ADT queries to quantifier-free UF queries. This approach is sound, complete, and eager, while most existing approaches are lazy. We implemented a prototype tool of our approach and compared it against existing solvers. We found that we can solve more queries using less time.
More importantly, we found that we make the largest empirical contribution to the solving community. In the future, we intend to support proof generation, quantifiers, and hybrid eager and lazy approaches. We will also experiment with different back-end solvers and techniques for automatically selecting back-ends per input query. Acknowledgements We would like to thank Adwait Godbole, Ameesh Shah, Jiwon Park and Shangyin Tan for their insightful feedback. This work was supported in part by a Qualcomm Innovation Fellowship, NSF grant 1837132, DARPA contract FA8750-20-C-0156, an Amazon Research Award, Toyota under the iCyPhy center, a UC Berkeley Summer Undergraduate Research Fellowship, and by Intel under the Scalable Assurance program. References Barbosa, H.; Barrett, C.; Brain, M.; Kremer, G.; Lachnitt, H.; Mann, M.; Mohamed, A.; Mohamed, M.; Niemetz, A.; Nötzli, A.; Ozdemir, A.; Preiner, M.; Reynolds, A.; Sheng, Y.; Tinelli, C.; and Zohar, Y. 2022. cvc5: A Versatile and Industrial-Strength SMT Solver. In Fisman, D.; and Rosu, G., eds., Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 415–442. Cham: Springer International Publishing. Barrett, C.; de Moura, L.; Ranise, S.; Stump, A.; and Tinelli, C. 2011. The SMT-LIB Initiative and the Rise of SMT (HVC 2010 Award Talk). In Hardware and Software: Verification and Testing, 3–3. Springer. Barrett, C.; Fontaine, P.; and Tinelli, C. 2017. The SMT-LIB Standard Version 2.6. https://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf. Barrett, C.; Sebastiani, R.; Seshia, S. A.; and Tinelli, C. 2021. Satisfiability Modulo Theories. In Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds., Handbook of Satisfiability, chapter 33, 1267–1329. IOS Press, second edition. Barrett, C.; Shikanian, I.; and Tinelli, C. 2007. An Abstract Decision Procedure for Satisfiability in the Theory of Recursive Data Types. Electronic Notes in Theoretical Computer Science, 174(8): 23–37. Combined Proceedings of the Fourth Workshop on Pragmatics of Decision Procedures in Automated Reasoning (PDPAR 2006) and the First International Workshop on Probabilistic Automata and Logics (PaUL 2006). Biere, A.; Cimatti, A.; Clarke, E.; and Zhu, Y. 1999. Symbolic model checking without BDDs. In Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 193–207. Springer. Bjørner, N.; Ganesh, V.; Michel, R.; and Veanes, M. 2012. An SMT-LIB format for sequences and regular expressions. SMT, 12: 76–86. Bouvier, P. 2021. The VLSAT-3 Benchmark Suite. INRIA Technical Report 516. Brummayer, R.; and Biere, A. 2009. Boolector: An Efficient SMT Solver for Bit-Vectors and Arrays. In Kowalewski, S.; and Philippou, A., eds., Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 174–177. Berlin, Heidelberg: Springer Berlin Heidelberg. Burch, J. R.; and Dill, D. L. 1994. Automatic verification of pipelined microprocessor control. In Computer Aided Verification (CAV), 68–80. Springer. Burstall, R. M. 1977. Design considerations for a functional programming language. Proc. Infotech State of the Art Conf. “The Software Revolution”, 45–57. Cadar, C.; Ganesh, V.; Pawlowski, P. M.; Dill, D. L.; and Engler, D. R. 2008. EXE: Automatically generating inputs of death. ACM Transactions on Information and System Security (TISSEC), 12(2): 1–38. Christ, J.; Hoenicke, J.; and Nutz, A. 2012. SMTInterpol: An Interpolating SMT Solver.
In Donaldson, A.; and Parker, D., eds., Model Checking Software, 248–254. Berlin, Heidelberg: Springer Berlin Heidelberg. Clarke, E.; Biere, A.; Raimi, R.; and Zhu, Y. 2001. Bounded model checking using satisfiability solving. Formal Methods in System Design, 19: 7–34. De Angelis, E.; Fioravanti, F.; Pettorossi, A.; and Proietti, M. 2020. Removing Algebraic Data Types from Constrained Horn Clauses Using Difference Predicates. In Peltier, N.; and Sofronie-Stokkermans, V., eds., Automated Reasoning, 83–102. Cham: Springer International Publishing. de Moura, L.; and Bjørner, N. 2008. Z3: An Efficient SMT Solver. In Ramakrishnan, C. R.; and Rehof, J., eds., Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 337–340. Berlin, Heidelberg: Springer Berlin Heidelberg. Ershov, A. P. 1958. On Programming of Arithmetic Operations. Commun. ACM, 1(8): 3–6. Goetz, B. 2022. JEP 360: Sealed Classes (Preview). https://openjdk.org/jeps/360. Accessed: 2023-08-15. Gupta, N.; and Nau, D. S. 1992. On the complexity of blocks-world planning. Artificial Intelligence, 56(2-3): 223–254. Hoare, C. A. R. 1975. Recursive data structures. International Journal of Computer & Information Sciences, 4(2): 105–132. Hojjat, H.; and Rümmer, P. 2017. Deciding and Interpolating Algebraic Data Types by Reduction. In 2017 19th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), 145–152. Hudak, P.; Hughes, J.; Peyton Jones, S.; and Wadler, P. 2007. A history of Haskell: being lazy with class. In Proceedings of the third ACM SIGPLAN conference on History of Programming Languages, 12–1. Jung, R.; Jourdan, J.-H.; Krebbers, R.; and Dreyer, D. 2021. Safe systems programming in Rust. Communications of the ACM, 64(4): 144–152. Kautz, H.; and Selman, B. 1996. Pushing the envelope: Planning, propositional logic, and stochastic search. In Proceedings of the National Conference on Artificial Intelligence, 1194–1201. Kautz, H. A.; Selman, B.; et al. 1992. Planning as Satisfiability. In ECAI, volume 92, 359–363. Citeseer. Kostyukov, Y.; Mordvinov, D.; and Fedyukovich, G. 2021. Beyond the Elementary Representations of Program Invariants over Algebraic Data Types. In Programming Language Design and Implementation, PLDI 2021, 451–465. New York, NY, USA: Association for Computing Machinery. ISBN 9781450383912. Kovács, L.; Robillard, S.; and Voronkov, A. 2017. Coming to Terms with Quantified Reasoning. In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL '17, 260–270. New York, NY, USA: Association for Computing Machinery. Lubarsky, R. 2008. Ian Chiswell and Wilfrid Hodges. Mathematical logic. Oxford Texts in Logic, vol. 3. Oxford University Press, Oxford, England, 2007, 250 pp. Bulletin of Symbolic Logic, 14(2): 265–267. Milner, R. 1997. The definition of Standard ML: revised. MIT Press. Oppen, D. C. 1980. Reasoning About Recursively Defined Data Structures. J. ACM, 27(3): 403–411. Pham, T.-H.; and Whalen, M. W. 2014. An Improved Unrolling-Based Decision Procedure for Algebraic Data Types. In Cohen, E.; and Rybalchenko, A., eds., Verified Software: Theories, Tools, Experiments, 129–148. Berlin, Heidelberg: Springer Berlin Heidelberg. Pimpalkhare, N.; Mora, F.; Polgreen, E.; and Seshia, S. A. 2021. MedleySolver: Online SMT Algorithm Selection.
In Li, C.; and Manyà, F., eds., Theory and Applications of Satisfiability Testing, volume 12831 of Lecture Notes in Computer Science, 453–470. Springer. Reynolds, A.; and Blanchette, J. C. 2017. A Decision Procedure for (Co)datatypes in SMT Solvers. Journal of Automated Reasoning, 58(3): 341–362. Reynolds, A.; Viswanathan, A.; Barbosa, H.; Tinelli, C.; and Barrett, C. 2018. Datatypes with Shared Selectors. In Galmiche, D.; Schulz, S.; and Sebastiani, R., eds., Automated Reasoning, 591–608. Cham: Springer International Publishing. Rintanen, J. 2003. Symmetry Reduction for SAT Representations of Transition Systems. In ICAPS, 32–41. Rümmer, P.; and Wahl, T. 2010. An SMT-LIB theory of binary floating-point arithmetic. In International Workshop on Satisfiability Modulo Theories (SMT), 151. Rungta, N. 2022. A billion SMT queries a day. In Computer Aided Verification (CAV), 3–18. Springer. Russell, S. J. 2010. Artificial Intelligence: A Modern Approach. Pearson Education, Inc. Salgado, P. G. 2023. What's New In Python 3.10. https://docs.python.org/3.10/whatsnew/3.10.html#summary-release-highlights. Accessed: 2023-08-15. Sebastiani, R. 2007. Lazy Satisfiability Modulo Theories. Journal on Satisfiability, Boolean Modeling and Computation, 3: 141–224. Seshia, S. A. 2005. Adaptive Eager Boolean Encoding for Arithmetic Reasoning in Verification. Ph.D. thesis, Carnegie Mellon University. Sussman, G. J. 1973. A Computational Model of Skill Acquisition. Technical report, Massachusetts Institute of Technology, USA. Suter, P.; Dotta, M.; and Kuncak, V. 2010. Decision Procedures for Algebraic Data Types with Abstractions. SIGPLAN Not., 45(1): 199–210. Winograd, T. 1971. Procedures as a representation for data in a computer program for understanding natural language. AI-TR. M.I.T. Project MAC.
An Approximate Skolem Function Counter* Arijit Shaw1,2, Brendan Juba3, Kuldeep S. Meel4 1 Chennai Mathematical Institute, India 2 IAI, TCG-CREST, Kolkata, India 3 Washington University in St. Louis, USA 4 University of Toronto, Canada Abstract One approach to probabilistic inference involves counting the number of models of a given Boolean formula. Here, we are interested in inferences involving higher-order objects, i.e., functions. We study the following task: Given a Boolean specification between a set of inputs and outputs, count the number of functions of the inputs such that the specification is met. Such functions are called Skolem functions. We are motivated by the recent development of scalable approaches to Boolean function synthesis. This stands in relation to our problem analogously to the relationship between Boolean satisfiability and the model counting problem. Yet, counting Skolem functions poses considerable new challenges. From the complexity-theoretic standpoint, counting Skolem functions is not only #P-hard; it is quite unlikely to have an FPRAS (Fully Polynomial Randomized Approximation Scheme), as the problem of synthesizing a Skolem function remains challenging even given access to an NP oracle. The primary contribution of this work is the first algorithm, SkolemFC, that computes an estimate of the number of Skolem functions. SkolemFC relies on technical connections between counting functions and propositional model counting: our algorithm makes a linear number of calls to an approximate model counter and computes an estimate of the number of Skolem functions with theoretical guarantees. Moreover, we show that the Skolem function count can be approximated through a polynomial number of calls to a SAT oracle. Our prototype displays impressive scalability, handling benchmarks comparably to state-of-the-art Skolem function synthesis engines, even though counting all such functions ostensibly poses a greater challenge than synthesizing a single function. 1 Introduction Probabilistic inference problems arise throughout AI and are tackled algorithmically by casting them as problems such as model counting (Gomes, Sabharwal, and Selman 2021; Chakraborty, Meel, and Vardi 2021). In this work, we are interested in approaching inference questions for higher-order objects, specifically Skolem functions: that is, we wish to compute the number of possible Skolem functions for a given specification F(X, Y). Counting Skolem functions is the natural analog of #SAT for Skolem functions, yet to our knowledge, it has not been previously studied. (*The full version of the paper: https://arxiv.org/abs/2312.12026.) More precisely, recall that given two sets X = {x1, . . . , xn} and Y = {y1, . . . , ym} of variables and a Boolean formula F(X, Y) over X ∪ Y, the problem of Boolean functional synthesis is to compute a vector Ψ = ⟨ψ1, . . . , ψm⟩ of Boolean functions ψi, often called Skolem functions, such that ∃Y F(X, Y) ≡ F(X, Ψ(X)). Informally, given a specification between inputs and outputs, the task is to synthesize a function vector Ψ that maps each assignment of the inputs to an assignment of the outputs so that the combined assignment meets the specification (whenever such an assignment exists). Skolem synthesis is a fundamental problem in formal methods and has been investigated by theoreticians and practitioners alike over the past few decades.
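To make the counted object concrete, the following Python sketch counts Skolem function vectors by brute force, directly from the definition above: it enumerates all (2^m)^(2^n) candidate truth tables and keeps those satisfying F(x, Ψ(x)) = ∃y F(x, y) at every input x. It is feasible only for tiny n and m, and it illustrates the problem statement, not the SkolemFC algorithm:

from itertools import product

def count_skolem_bruteforce(F, n, m):
    # F maps (x_bits, y_bits) -> bool; a function vector Psi is one choice
    # of an output row (a y-assignment) per input x-assignment.
    xs = list(product([0, 1], repeat=n))
    ys = list(product([0, 1], repeat=m))
    has_y = {x: any(F(x, y) for y in ys) for x in xs}  # truth of exists y. F
    total = 0
    for table in product(ys, repeat=len(xs)):
        if all(F(x, psi_x) == has_y[x] for x, psi_x in zip(xs, table)):
            total += 1
    return total

print(count_skolem_bruteforce(lambda x, y: y[0] == x[0], 1, 1))  # 1
print(count_skolem_bruteforce(lambda x, y: True, 1, 1))          # 4

For instance, the specification y1 ↔ x1 admits exactly one Skolem function (the identity), while the trivially true specification leaves Ψ unconstrained and admits all (2^1)^(2^1) = 4 of them.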
The past few years have witnessed the development of techniques that showcase the promise of scalability in their ability to handle challenging specifications (Jiang, Lin, and Hung 2009; Tabajara and Vardi 2017; Rabe et al. 2018; Akshay et al. 2019; Golia et al. 2021). The scalability of today's Skolem synthesis engines is reminiscent of the scalability of SAT solvers in the early 2000s. Motivated by the scalability of SAT solvers (Froleyks et al. 2021), researchers sought algorithmic frameworks for problems beyond satisfiability, such as MaxSAT (Ansótegui, Bonet, and Levy 2013; Li and Manya 2021), model counting (Gomes, Sabharwal, and Selman 2021; Chakraborty, Meel, and Vardi 2021), sampling (Chakraborty, Meel, and Vardi 2014), and the like. The development of scalable techniques for these problems also helped usher in new applications, even though the initial investigation had not envisioned many of them. In a similar vein, motivated in part by this development of scalable techniques for functional synthesis, we investigate the Skolem counting problem. We observe in Section 1.2 that algorithms for such tasks also have potential applications in security and the engineering of specifications. Being a natural problem, we will see that our study also naturally leads to deep technical connections between counting functions and counting propositional models and the development of new techniques, which is of independent interest. Counting Skolem functions indeed raises new technical challenges. The existing techniques developed in the context of propositional model counting either construct (implicitly or explicitly) a representation of the space of all models (Thurley 2006; Dudek, Phan, and Vardi 2020; Sharma et al. 2019) or at least enumerate a small number of models (Chakraborty, Meel, and Vardi 2013, 2016; Soos and Meel 2019; Yang and Meel 2023), which in practice amounts to a few tens to hundreds of models of formulas constrained by random XORs. Such approaches are unlikely to work in the context of Skolem function counting, where even finding one Skolem function is hard, and there are no techniques that enable the enumeration of Skolem functions. 1.1 Technical Contribution The primary contribution of this work is the development of a novel algorithmic framework, called SkolemFC, that approximates the Skolem function count with a theoretical guarantee, using only linearly many calls to an approximate model counter and an almost-uniform sampler. First, we observe that Skolem function counting can be reduced to an exponential number of model counting calls, serving as a baseline Skolem function counter. The core technical idea of SkolemFC is to reduce the problem of approximate Skolem function counting to only linearly many (in m = |Y|) calls to propositional model counters. Of particular note is the observation that SkolemFC can provide an approximation to the number of Skolem functions without enumerating even one Skolem function. As approximate model counting and almost-uniform sampling can be done with logarithmically many calls to a SAT oracle, we show that Skolem function counting can also be reduced to polynomially many calls to a SAT oracle. To measure the impact of the algorithm, we implement SkolemFC and demonstrate its potential over a set of benchmarks arising from prior studies in the context of Skolem function synthesis.
Out of 609 instances, SkolemFC could solve 375 instances, while a baseline solver could solve only eight instances. For context, the state-of-the-art Skolem function synthesis tool Manthan2 (Golia et al. 2021) effectively tackled 509 instances from these benchmarks, while its precursor, Manthan (Golia, Roy, and Meel 2020), managed only 356 instances with a timeout of 7200 seconds. 1.2 Applications This problem arises in several potential application areas. Specification engineering. The first and primary motivation stems from the observation that specification synthesis (Albarghouthi, Dillig, and Gurfinkel 2016; Prabhu et al. 2021) (i.e., the process of constructing F(X, Y )) and function synthesis form part of the iterative process wherein one iteratively modifies specifications based on the functions that are constructed by the underlying engine. In this context, one helpful measure is to determine the number of possible semantically different functions that satisfy the specification, as often a large number of possible Skolem functions indicates the vagueness of specifications and highlights the need for strengthening the specification. Note that the use of the count is qualitative here, and hence an approximate order of magnitude (log count) suffices. Diversity at the specification level. In system security and reliability, a classic technique is to generate and use a diverse variety of functionally equivalent implementations of components (Baudry and Monperrus 2015). Although classically, this is achieved by transformations of the code that preserve the function computed, we may also be interested in producing a variety of functions that satisfy a common specification. Unlike transformations on the code, it is not immediately clear whether a specification even admits a diverse collection of functions – indeed, the function may be uniquely defined. Thus, counting the number of such functions is necessary to assess the potential value of taking this approach, and again a rough order of magnitude estimate suffices. Approximate counting of the functions may also be a useful primitive for realizing such an approach. Uninterpreted functions in SMT. A major challenge in the design of counting techniques for SMT (Chistikov, Dimitrova, and Majumdar 2015; Chakraborty et al. 2016) lies in handling uninterpreted functions (Kroening and Strichman 2016). Since Skolem functions capture a restricted but large enough class of uninterpreted functions (namely, the case where a given uninterpreted function is allowed to depend on all X variables), progress in Skolem function counting is needed if we hope to make progress on the general problem of counting of uninterpreted functions in SMT. Evaluation of a random Skolem function. Although synthesis of Skolem functions remains challenging in general, we note that approximate counting enables a kind of incremental evaluation by using the standard techniques for reducing sampling to counting. More concretely, given a query input, we can estimate the number of functions that produce each output: this is trivial if the range is small (e.g., Boolean), and otherwise, we can introduce random XOR constraints to incrementally specify the output. Once an output is specified for the query point, we may retain these constraints when estimating the number of consistent functions for subsequent queries, thereby obtaining an approximately uniform function conditioned on the answers to the previous queries. 
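To illustrate the XOR-based output specification sketched in the last paragraph, here is a minimal Python sketch of our own construction (the representation of outputs as bit-dictionaries and the helper names are assumptions; a real implementation would conjoin the XOR constraints to F and invoke a counter rather than filter an explicit candidate list).

import random

def random_xor(y_vars):
    # A random parity constraint over a subset of the output bits.
    subset = [y for y in y_vars if random.random() < 0.5]
    parity = random.randint(0, 1)
    return subset, parity

def satisfies(output, constraint):
    subset, parity = constraint
    return sum(output[y] for y in subset) % 2 == parity

def specify_output(candidates, y_vars):
    # Narrow the candidate outputs for one query input down to a
    # single output by stacking random XOR constraints.
    kept = []
    while len(candidates) > 1:
        c = random_xor(y_vars)
        survivors = [o for o in candidates if satisfies(o, c)]
        if survivors:  # discard degenerate cuts that kill every candidate
            candidates, kept = survivors, kept + [c]
    return candidates[0], kept

# Example: outputs are dicts mapping output-variable name -> bit.
outs = [{"y1": 0, "y2": 0}, {"y1": 0, "y2": 1}, {"y1": 1, "y2": 0}]
chosen, constraints = specify_output(outs, ["y1", "y2"])
print(chosen, constraints)

Retaining the accumulated constraints across queries, as described above, is what yields answers consistent with a single (approximately uniform) random Skolem function.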
1.3 Organization
The rest of the paper is organized as follows: we discuss related work in Section 2 and present notation and preliminaries in Section 3. We then present the primary technical contribution of our work in Section 4. We present the empirical analysis of the prototype implementation of SkolemFC in Section 5. We finally conclude in Section 6.

2 Related Work
Despite the lack of prior studies focused on the specific problem of counting Skolem functions, significant progress has been made in synthesizing these functions. Numerous lines of research have emerged in the field of Skolem function synthesis. The first, incremental determinization, iteratively pinpoints variables with distinctive Skolem functions, making decisions on any remaining variables by adding provisional clauses that render them deterministic (Rabe 2019; Rabe and Seshia 2016; Rabe et al. 2018). The second line of research involves obtaining Skolem functions by eliminating quantifiers using functional composition and reducing the size of composite functions through the application of Craig interpolation (Jiang, Lin, and Hung 2009; Jiang 2009). The third, CEGAR-style approaches, commence with an initial set of approximate Skolem functions and proceed to a phase of counter-example guided refinement to improve upon these candidate functions (John et al. 2015; Akshay et al. 2017, 2018). Work on the representation of the specification F(X, Y) has led to efficient synthesis using ROBDD representations and functional composition (Balabanov and Jiang 2011), with extensions to factored specifications (Tabajara and Vardi 2017; Chakraborty et al. 2018). Notable advancements include a new negation normal form, SynNNF, amenable to functional synthesis (Akshay et al. 2019). Finally, a data-driven method has arisen (Golia, Roy, and Meel 2020; Golia et al. 2021), relying on constrained sampling to generate satisfying assignments for a formula F.

In a related problem in descriptive complexity, the functions definable by counting the Skolem functions of fixed formulas have been shown to characterize #AC0 (Haak and Vollmer 2019). By contrast, we are interested in the problem where the formula is the input. Our algorithm also bears similarity to the FPRAS proposed for the descriptive complexity class #Σ1 (Durand et al. 2021), which is obtained by an FPRAS for counting the number of functions satisfying a DNF over atomic formulas specifying that the functions must/must not take specific values at specific points. Nevertheless, our problem is fundamentally different in that it is easy to find functions satisfying such DNFs, whereas synthesis of Skolem functions is unlikely to be possible in polynomial time.

The specifications for the functions are often expressed in terms of quantified Boolean formulas (QBFs). Another quantitative question on QBFs is AllQBF (Becker et al. 2012), finding all assignments of the free variables of a given QBF such that the formula evaluates to true. CountingQBF (Shukla et al. 2022) poses a similar query in a quantitative setting. However, the relevance of these problems to counting functions is not clear.

3 Notation and Preliminaries
We use lowercase letters (with subscripts) to denote propositional variables and uppercase letters to denote a subset of variables. The formula ∃Y F(X, Y) is existentially quantified in Y, where X = {x1, . . . , xn} and Y = {y1, . . . , ym}. By n and m we denote the number of X and Y variables in the formula.
Therefore, n = |X| and m = |Y|. For simplicity, we write a formula F(X, Y) as F if X and Y are clear from the context. A model is an assignment (true or false) to all the variables in F such that F evaluates to true. Let Sol(F)↓S denote the set of models of formula F projected on S ⊆ X ∪ Y. If S = X ∪ Y, we write the set as Sol(F). Let σ be a partial assignment for the variables X of F. Then Sol(F ∧ (X = σ)) denotes the models of F where X = σ.

NP Oracles and SAT Oracles. Given a Boolean formula F, an NP oracle determines the satisfiability of the formula. A SAT solver is a practical tool solving the problem of satisfiability. Following the definition of Delannoy and Meel (2022), a SAT oracle takes in a formula F and returns a satisfying assignment σ if F is satisfiable and ⊥ otherwise. The SAT oracle model captures the behavior of modern SAT solvers.

Propositional Model Counting. Given a formula F and a projection set S, the problem of model counting is to compute |Sol(F)↓S|. An approximate model counter takes in a formula F, projection set S, tolerance parameter ε, and confidence parameter δ, and returns c such that Pr[ |Sol(F)↓S|/(1 + ε) ≤ c ≤ (1 + ε)·|Sol(F)↓S| ] ≥ 1 − δ. It is known that log(n) calls to a SAT oracle are necessary (Chakraborty et al. 2023) and sufficient (Chakraborty, Meel, and Vardi 2016) to achieve (ε, δ) guarantees for approximately counting the models of a formula with n variables.

Propositional Sampling. Given a Boolean formula F and a projection set S, a sampler is a probabilistic algorithm that generates a random element in Sol(F)↓S. An almost-uniform sampler G takes a tolerance parameter ε along with F and S, and guarantees that, for all y ∈ Sol(F)↓S, 1/((1 + ε)·|Sol(F)↓S|) ≤ Pr[G(F, S, ε) = y] ≤ (1 + ε)/|Sol(F)↓S|. Delannoy and Meel (2022) showed that log(n) many calls to a SAT oracle suffice to generate almost-uniform samples from a formula with n variables.

Skolem Functions. Given a Boolean specification F(X, Y) between a set of inputs X = {x1, . . . , xn} and a vector of outputs Y = ⟨y1, . . . , ym⟩, a function vector Ψ(X) = ⟨ψ1(X), ψ2(X), . . . , ψm(X)⟩ is a Skolem function vector if yi ↔ ψi(X) and ∃Y F(X, Y) ≡ F(X, Ψ). We refer to Ψ as the Skolem function vector and ψi as the Skolem function for yi. We use the notation Skolem(F, Y) to denote the set of possible Ψ(X) satisfying the condition ∃Y F(X, Y) ≡ F(X, Ψ(X)). Two Skolem function vectors Ψ1 and Ψ2 are different if there exists an assignment σ ∈ Sol(F)↓X for which Ψ1(σ) ≠ Ψ2(σ).

For a specification ∃Y F(X, Y), the number of Skolem functions itself can be as large as 2^(m·2^n), and the values of n and m are quite large in many practical cases. Beyond being a theoretical possibility, the count of Skolem functions is often quite big, and such values are sometimes difficult to manipulate and store as 64-bit values. Therefore, we are interested in the logarithm of the counts, and define the problem of approximate Skolem function counting as follows:

Problem Statement. Given a Boolean specification F(X, Y), tolerance parameter ε, and confidence parameter δ, let ℓ = log(|Skolem(F, Y)|); the task of approximate Skolem function counting is to give an estimate Est such that Pr[(1 − ε)ℓ ≤ Est ≤ (1 + ε)ℓ] ≥ 1 − δ.

In practical scenarios, the input specification is often given as a quantified Boolean formula (QBF). The output of the synthesis problem is a function, which is expressed as a Boolean circuit. In our setting, even if two functions have different circuits, if they have identical input-output behavior, we consider them to be the same function.
For example, let f1(x) = x and f2 = ¬(¬x). We consider f1 and f2 to be the same function.

Illustrative Example. Let us examine a formula defined on three sets of variables X, Y0, Y1, where each set contains five Boolean variables, interpreted as five-bit integers: ∃Y0Y1 F(X, Y0Y1), where F represents the constraint for factorization: X = Y0 × Y1, Y0 ≤ Y1, Y0 ≠ 1. The number of Skolem functions of F gives the number of distinct ways to implement a factorization function for 5-bit input numbers. There exist multiple X's for which there are multiple factorizations: a Skolem function S1 may factorize 12 as 4 × 3 and a function S2 may factorize 12 as 2 × 6.

Stopping Rule Algorithm. Let Z1, Z2, . . . denote independently and identically distributed (i.i.d.) random variables taking values in the interval [0, 1] and with mean µ. Intuitively, Zt is the outcome of experiment t. Then the Stopping Rule algorithm (Algorithm 1) approximates µ as stated by Theorem 3.1 (Dagum et al. 1995; Dagum and Luby 1997).

Algorithm 1: Stopping Rule (ε, δ)
1: t ← 0, x ← 0, s ← 4 ln(2/δ)(1 + ε)/ε²
2: while x < s do
3:   t ← t + 1
4:   generate random variable Zt
5:   x ← x + Zt
6: Est ← s/t
7: return Est

Theorem 3.1 (Stopping Rule Theorem). For all 0 < ε ≤ 2 and δ > 0, if the Stopping Rule algorithm returns Est, then Pr[µ(1 − ε) ≤ Est ≤ µ(1 + ε)] > 1 − δ.

FPRAS. A Fully Polynomial Randomized Approximation Scheme (FPRAS) is a randomized algorithm that, for any fixed ε > 0 and any fixed probability δ > 0, produces an answer that is within a factor of (1 + ε) of the correct answer, and does so with probability at least (1 − δ), in polynomial time with respect to the size of the input, 1/ε, and log(1/δ).

4 Algorithm
In this section, we introduce the primary contribution of our paper: the SkolemFC algorithm. The algorithm takes in a formula F(X, Y) and returns an estimate for log(|Skolem(F, Y)|). We first outline the key technical ideas that inform the design of SkolemFC and then present the pseudocode for implementing this algorithm.

4.1 Core Ideas
Since finding even a single Skolem function is computationally expensive, our approach is to estimate the count of Skolem functions without enumerating even a small number of Skolem functions. The key idea is to observe that the number of Skolem functions can be expressed as a product of the model counts of formulas. A Skolem function Ψ ∈ Skolem(F, Y) is a function from 2^X to 2^Y. A useful quantity in the context of counting Skolem functions is to define, for every assignment σ ∈ 2^X, the set of elements in 2^Y that Ψ(X) can belong to. We refer to this quantity as range(σ) and formally define it as follows:

Definition 4.1. range(σ) = Sol(F ∧ (X = σ))↓Y if |Sol(F ∧ (X = σ))| > 0, and range(σ) = 1 otherwise.

Algorithm 2: SkolemFC(F(X, Y), ε, δ)
1: εf ← 0.6ε, δf ← 0.4δ, s ← 4 ln(2/δf)(1 + εf)/εf²
2: εs ← 0.2ε, δc ← 0.4δ/(m·s), εc ← 4√2 − 1
3: εg ← 0.1ε, δg ← 0.1δ
4: G(X, Y, Y′) := F(X, Y) ∧ F(X, Y′) ∧ (Y ≠ Y′)
5: while x < s do
6:   σ ← AlmostUniformSample(G, X, εs)
7:   c ← log(ApproxCount(F ∧ (X = σ), εc, δc))/m
8:   x ← x + c
9:   t ← t + 1
10: g ← ApproxCount(G, X, εg, δg)
11: Est ← (s/t) × m × g
12: if g·log(1 + εc) > 0.1·Est then return ⊥
13: return Est

Lemma 4.2. |Skolem(F, Y)| = ∏_{σ∈2^X} |range(σ)|.

Proof. First of all, we observe that ∀σ ∈ 2^X, ∀π ∈ range(σ), ∃Ψ s.t. Ψ(σ) = π, which is easy to see for all σ ∈ 2^X for which there exists π ∈ 2^Y such that F(σ, π) = 1.
As for σ ∈ 2^X for which there is no π such that F(σ, π) = 1, Skolem functions that differ solely on inputs σ ∉ Sol(F)↓X are regarded as identical. Consequently, such inputs have no impact on the count of distinct Skolem functions, resulting in range(σ) = 1 for these cases. Recall that each Ψ ∈ Skolem(F, Y) is a function from 2^X to 2^Y. It follows that |Skolem(F, Y)| = ∏_{σ∈2^X} |range(σ)|.

Lemma 4.2 allows us to develop a connection between Skolem function counting and propositional model counting. As stated in the problem statement, we focus on estimating log |Skolem(F, Y)|. To formalize our approach, we need to introduce the following notation:

Proposition 4.3. log |Skolem(F, Y)| = ∑_{σ∈S2} log |Sol(F ∧ (X = σ))|, where S2 := {σ ∈ 2^X : |Sol(F ∧ (X = σ))| ≥ 2}.

Proof. From Lemma 4.2, we have |Skolem(F, Y)| = ∏_{σ∈2^X} |range(σ)|. Taking logs on both sides, partitioning 2^X into S2 and 2^X \ S2, and observing that log |range(σ)| = 0 for σ ∉ S2, we get the desired result.

4.2 Algorithm Description
The pseudocode for SkolemFC is delineated in Algorithm 2. It accepts a formula ∃Y F(X, Y), a tolerance level ε, and a confidence parameter δ. The algorithm SkolemFC then provides an approximation of log |Skolem(F, Y)| following Proposition 4.3. To begin, SkolemFC almost-uniformly samples σ from S2 at random in line 6, utilizing an almost-uniform sampler. Subsequently, SkolemFC approximates |Sol(F ∧ (X = σ))| through an approximate model counter at line 7. The estimate Est is computed by taking the product of the mean of the c values and |S2|. In order to sample σ ∈ S2, SkolemFC constructs the formula G whose solutions, when projected to X, represent all the assignments σ ∈ S2 (line 4). Finally, SkolemFC returns the estimate Est as the logarithm of the Skolem function count.

The main loop of SkolemFC (from lines 5 to 9) is based on the Stopping Rule algorithm presented in Section 3. The Stopping Rule algorithm is utilized to approximate the mean of a collection of i.i.d. random variables that fall within the range [0, 1]. The method repeatedly adds the outcomes of these variables until the cumulative value reaches a set threshold s. This threshold value is influenced by the input parameters ε and δ. The result yielded by the algorithm is represented as s/t, where t denotes the number of random variables aggregated to achieve the threshold s. In the context of SkolemFC, this random variable is defined as log(ApproxCount(F ∧ (X = σ), εc, δc))/m. Line 12 asserts that the error introduced by the approximate model counting oracle is within some specific bound.

Oracle Access. We assume access to approximate model counters and almost-uniform samplers as oracles. The notation ApproxCount(F, P, ε, δ) represents an invocation of the approximate model counting oracle on a Boolean formula F with projection set P, tolerance parameter ε, and confidence parameter δ. AlmostUniformSample(F, S, ε) denotes an invocation of the almost-uniform sampler on a formula F, with projection set S and tolerance parameter ε. The particular choice of values of εs, εc, δc, εg, δg used in the counting and sampling oracles underpins the theoretical guarantees.

4.3 Illustrative Example
We will now examine the specification of factorization as outlined in Section 3, and investigate how SkolemFC estimates the count of Skolem functions meeting that specification.
1. In line 4, SkolemFC constructs G such that |Sol(G)↓X| = 7, with Sol(G)↓X = {12, 16, 18, 20, 24, 28, 30}.
2.
In line 6, it samples σ from Sol(G)↓X. Let us consider σ = 30. Then Sol(F ∧ (X = σ))↓Y0,Y1 = {(2, 15), (3, 10), (5, 6)}. Therefore, c = log(3) in line 7.
3. Suppose in the next iteration it samples σ = 16. Then Sol(F ∧ (X = σ)) = {(2, 8), (4, 4)}. Therefore, c = log(2) in line 7.
4. Now suppose that the termination condition of line 5 is reached. At this point, the estimate Est returned from line 11 will be ≈ ((log(3) + log(2))/2) × 7 ≈ 6.
5. Finally, SkolemFC will return the value Est ≈ 6.

Note that the approach is in stark contrast to the state-of-the-art counting techniques in the context of propositional models, which either construct a compact representation of the entire solution space or rely on the enumeration of a small number of models.

4.4 Analysis of SkolemFC
Let F(X, Y) be a propositional CNF formula over variables X and Y. In this section we show that SkolemFC works as an approximate counter for the number of Skolem functions. We create a formula G(X, Y, Y′) = F(X, Y) ∧ F(X, Y′) ∧ (Y ≠ Y′) from F(X, Y), where Y′ is a fresh set of variables and m = |Y′|. We show that if we pick a solution of G, then the assignment to X in that solution will have at least two solutions in F(X, Y).

Lemma 4.4. Sol(G)↓X = {σ ∈ 2^X : |Sol(F ∧ (X = σ))| ≥ 2}.

Proof. We can write the statement alternatively as σ ∈ Sol(G)↓X ⟺ |Sol(F ∧ (X = σ))| ≥ 2.
(⟹) For every element σ ∈ Sol(G), we write σ as ⟨σ↓X, σ↓Y, σ↓Y′⟩. Now, according to the definition of G, both ⟨σ↓X, σ↓Y⟩ and ⟨σ↓X, σ↓Y′⟩ satisfy F. Moreover, σ↓Y and σ↓Y′ are not equal. Therefore, |Sol(F ∧ (X = σ))| ≥ 2.
(⟸) If |Sol(F ∧ (X = σ))| ≥ 2, then F(X, Y) has solutions of the form ⟨σ, γ1⟩ and ⟨σ, γ2⟩, where γ1 ≠ γ2. Now ⟨σ, γ1, γ2⟩ satisfies G.

Theorem 4.5. SkolemFC takes in input F(X, Y), ε > 0, and δ ∈ (0, 1], and returns Est such that Pr[(1 − ε)ℓ ≤ Est ≤ (1 + ε)ℓ] ≥ 1 − δ, where ℓ = log(|Skolem(F, Y)|). Furthermore, it makes Õ((m/ε²) ln(2/δ)) many calls to a SAT oracle, where Õ hides polylog factors in the parameters m, n, ε, δ.

We defer the proof to the full version due to space constraints.

5 Experiments
We conducted a thorough evaluation of the performance and accuracy of the SkolemFC algorithm by implementing a functional prototype [1] in C++. The following experimental setup was used to evaluate the performance and quality of results of the SkolemFC algorithm [2].

Baseline. A possible approach to count Skolem functions, following Lemma 4.2, is given in Algorithm 3. The Count(F) oracle denotes an invocation of an exact model counter. We implemented this approach to compare against SkolemFC. In the implementation, we relied on the latest version of Ganak (Sharma et al. 2019) to get the necessary exact model counts. We use a modified version of the SAT solver CryptoMiniSat (Soos, Nohl, and Castelluccia 2009) as an AllSAT solver to find all solutions of a given formula, projected on the X variables. We call this implementation Baseline in the remainder of the paper.

[1] Source code: https://github.com/meelgroup/skolemfc/
[2] All benchmarks and experimental data are available at https://doi.org/10.5281/zenodo.10404174

Algorithm 3: Baseline(F(X, Y))
1: Est ← 0
2: G(X, Y, Y′) := F(X, Y) ∧ F(X, Y′) ∧ (Y ≠ Y′)
3: SolG ← AllSAT(G, X)
4: for each σ ∈ SolG do
5:   c ← log(Count(F ∧ (X = σ)))
6:   Est ← Est + c
7: return Est

Environment. All experiments were carried out on a cluster of nodes consisting of AMD EPYC 7713 CPUs running with 2x64 real cores.
All tools were run in single-threaded mode on a single core with a timeout of 10 hrs, i.e., 36000 seconds. A memory limit was set to 32 GB per core.

Parameters for Oracles and Implementation. In the implementation, we utilized different state-of-the-art tools as counting and sampling oracles, including UniSamp (Delannoy and Meel 2022) as an almost-uniform sampling oracle, and the latest version of ApproxMC (Yang and Meel 2023) as an approximate counting oracle. SkolemFC was tested with ε = 0.8 and δ = 0.4. That gave the following values of the error and tolerance parameters for the model counting and sampling oracles. The almost-uniform sampling oracle UniSamp is run with εs = 0.16. The approximate model counting oracle ApproxMC in line 7 was run with εc = 4√2 − 1 and δc = 0.32/(m·s), where s comes from the algorithm, based on the input (ε, δ), and m is the number of output variables in the specification. We carefully select the error and tolerance values εs, εc, δc for the counting and sampling oracles to ensure the validity of the final bounds for SkolemFC while also aiming for optimal performance of the counter based on these choices. The relationship between these values and the validity of the bound of SkolemFC is illustrated in the proof of Theorem 4.5.

In our experiments, we sought to evaluate the run-time performance and approximation accuracy of SkolemFC. Specifically, the following questions guided our investigation:
RQ1. How does SkolemFC scale in terms of solving instances and the time taken in our benchmark set?
RQ2. What is the precision of the SkolemFC approximation, and does it outperform its theoretical accuracy guarantees in practical scenarios?

Benchmarks. To evaluate the performance of SkolemFC, we chose two sets of benchmarks.
1. Efficiency benchmarks. 609 instances from recent works on Boolean function synthesis (Golia, Roy, and Meel 2020; Akshay et al. 2017), which include different sources: the Prenex-2QBF track of QBF Evaluation 2017 and 2018, disjunctive (Akshay et al. 2017), arithmetic (Tabajara and Vardi 2017), and factorization (Akshay et al. 2017) benchmarks.
2. Correctness benchmarks. The benchmarks described above are too hard for the baseline algorithm to solve. As Section 5.1 reveals, the number of instances solved by the baseline is just eight out of the 609 instances. Therefore, to check the correctness of SkolemFC (RQ2), we used a set of 158 benchmarks from SyGuS instances (Golia, Roy, and Meel 2021). These benchmarks have very few input variables (m ≤ 8) and take seconds for SkolemFC to solve.

Algorithm    # Instances solved
Baseline     8
SkolemFC     375
Table 1: Instances solved (out of 609).

[Figure 1: Runtime performance of SkolemFC and Baseline (cactus plot; x-axis: number of benchmarks; y-axis: runtime in seconds).]

Summary of Results. SkolemFC achieves a huge improvement over Baseline by resolving 375 instances in a benchmark set consisting of 609, while Baseline only solved 8. The accuracy of the approximate count is also noteworthy, with the counts from SkolemFC showing an average error of only 21%.

5.1 Performance of SkolemFC
We evaluate the performance of SkolemFC based on two metrics: the number of instances solved and the time taken to solve the instances.

Instances Solved. In Table 1, we compare the number of benchmarks that can be solved by Baseline and SkolemFC. First, it is evident that Baseline only solved 8 out of the 609 benchmarks in the test suite, indicating its lack of scalability for practical use cases.
Conversely, SkolemFC solved 375 instances, demonstrating a substantial improvement compared to Baseline.

Solving Time Comparison. A performance evaluation of Baseline and SkolemFC is depicted in Figure 1, which is a cactus plot comparing the solving time. The x-axis represents the number of instances, while the y-axis shows the time taken. A point (i, j) in the plot represents that a solver solved i benchmarks out of the 609 benchmarks in the test suite in less than or equal to j seconds. The curves for Baseline and SkolemFC indicate that for a few instances, Baseline was able to give a quick answer, while in the long run, SkolemFC could solve many more instances given any fixed timeout.

Counter Call Comparison. We analyze the algorithms' complexity in terms of counter calls, comparing Baseline and SkolemFC across benchmarks in Figure 2. The x-axis represents benchmarks, and the y-axis shows the required counter calls, sorted in increasing order of the calls needed by Baseline. A red or green point (i, j) signifies that Baseline or SkolemFC, respectively, requires j counting oracle calls for the ith instance. Baseline requires up to a staggering 10^230 counter calls for some instances, emphasizing the need for a scalable algorithm like SkolemFC, which incurs significantly fewer counter calls.

[Figure 2: Counter calls needed by SkolemFC and Baseline to solve the benchmarks (x-axis: instance ID; y-axis: counter calls needed, on a log scale from 10^0 to beyond 10^224).]

[Figure 3: Relation between the number of iterations needed by SkolemFC and the average time taken in each iteration (log-log scale).]

We analyze the scalability of SkolemFC by examining the correlation between the average time per iteration and the total number of iterations, depicted in Figure 3. A point (i, j) means that if SkolemFC needs i counter calls, the average time per call is j seconds. The figure showcases diverse scenarios: some with fewer iterations and longer durations per call, others with high counts and minimal time per call.

5.2 Quality of Approximation
In the experiments, 158 accuracy benchmarks were measured using Baseline, enabling a comparison between the Baseline and SkolemFC results, shown in Figure 4. The counts align closely, and the observed error falls below the theoretical guarantees. We quantify the performance of SkolemFC with the error e = |b − s|/b, where b is the count from Baseline and s is the count from SkolemFC. Analysis of all 158 cases found the average e to be 0.21, the geometric mean 0.19, and the maximum 0.496, contrasting sharply with the theoretical guarantee of 0.8. This signifies that SkolemFC substantially outperforms its theoretical bounds. Our findings underline SkolemFC's accuracy and potential as a dependable tool for various applications.

[Figure 4: SkolemFC's estimate vs. the theoretical bounds ExactCount×(1.8) and ExactCount×(0.2) (x-axis: instance ID; y-axis: log(# Skolem functions)).]

6 Conclusion
In conclusion, this paper presents the first scalable approximate Skolem function counter, SkolemFC, which has been successfully tested on practical benchmarks and showed impressive performance. Our proposed method employs probabilistic techniques to provide theoretical guarantees for its results.
The implementation leverages the progress made in the last two decades in the fields of constrained counting and sampling, and the practical results exceeded the theoretical guarantees.

These findings open several directions for further investigation. One such area of potential extension is the application of the algorithm to other types of functions, such as counting uninterpreted functions in SMT with a more general syntax. This extension would enable the algorithm to handle a broader range of applications and provide even more accurate results. In summary, this research contributes significantly to the field of Skolem function counting and provides a foundation for further studies.

Acknowledgements
We are thankful to Tim van Bremen and Priyanka Golia for providing detailed feedback on the early drafts of the paper and grateful to the anonymous reviewers for their constructive comments to improve this paper. We thank Martina Seidl, Andreas Plank, and Sibylle Möhle for pointing out an error in the first implementation of the code. This work was supported in part by the National Research Foundation Singapore under its NRF Fellowship Programme [NRF-NRFFAI1-2019-0004], Ministry of Education Singapore Tier 2 Grant [MOE-T2EP20121-0011], Ministry of Education Singapore Tier 1 Grant [R-252-000-B59-114], and NSF awards IIS-1908287, IIS-1939677, and IIS-1942336. Part of the work was done during Arijit Shaw's internship at the National University of Singapore. The computational work for this article was performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg).

References
Akshay, S.; Arora, J.; Chakraborty, S.; Krishna, S.; Raghunathan, D.; and Shah, S. 2019. Knowledge Compilation for Boolean Functional Synthesis. In Proc. of FMCAD.
Akshay, S.; Chakraborty, S.; Goel, S.; Kulal, S.; and Shah, S. 2018. What's hard about Boolean Functional Synthesis? In Proc. of CAV.
Akshay, S.; Chakraborty, S.; John, A. K.; and Shah, S. 2017. Towards parallel Boolean functional synthesis. In Proc. of TACAS.
Albarghouthi, A.; Dillig, I.; and Gurfinkel, A. 2016. Maximal specification synthesis. ACM SIGPLAN Notices.
Ansótegui, C.; Bonet, M. L.; and Levy, J. 2013. SAT-based MaxSAT algorithms. Artificial Intelligence.
Balabanov, V.; and Jiang, J.-H. R. 2011. Resolution proofs and Skolem functions in QBF evaluation and applications. In Proc. of CAV.
Baudry, B.; and Monperrus, M. 2015. The multiple facets of software diversity: Recent developments in year 2000 and beyond. ACM Computing Surveys (CSUR).
Becker, B.; Ehlers, R.; Lewis, M.; and Marin, P. 2012. ALLQBF solving by computational learning. In Proc. of ATVA.
Chakraborty, D.; Chakraborty, S.; Kumar, G.; and Meel, K. S. 2023. Approximate Model Counting: Is SAT Oracle More Powerful than NP Oracle? In Proc. of ICALP.
Chakraborty, S.; Fried, D.; Tabajara, L. M.; and Vardi, M. Y. 2018. Functional synthesis via input-output separation. In Proc. of FMCAD.
Chakraborty, S.; Meel, K.; Mistry, R.; and Vardi, M. 2016. Approximate probabilistic inference via word-level counting. In Proc. of AAAI.
Chakraborty, S.; Meel, K. S.; and Vardi, M. Y. 2013. A scalable approximate model counter. In Proc. of CP.
Chakraborty, S.; Meel, K. S.; and Vardi, M. Y. 2014. Balancing Scalability and Uniformity in SAT-Witness Generator. In Proc. of DAC.
Chakraborty, S.; Meel, K. S.; and Vardi, M. Y. 2016.
Algorithmic Improvements in Approximate Counting for Probabilistic Inference: From Linear to Logarithmic SAT Calls. In Proc. of IJCAI.
Chakraborty, S.; Meel, K. S.; and Vardi, M. Y. 2021. Approximate model counting. In Handbook of Satisfiability.
Chistikov, D.; Dimitrova, R.; and Majumdar, R. 2015. Approximate counting in SMT and value estimation for probabilistic programs. In Proc. of TACAS.
Dagum, P.; Karp, R.; Luby, M.; and Ross, S. 1995. An optimal algorithm for Monte Carlo estimation. In Proc. of FOCS.
Dagum, P.; and Luby, M. 1997. An optimal approximation algorithm for Bayesian inference. Artificial Intelligence.
Delannoy, R.; and Meel, K. S. 2022. On Almost-Uniform Generation of SAT Solutions: The power of 3-wise independent hashing. In Proc. of LICS.
Dudek, J. M.; Phan, V. H.; and Vardi, M. Y. 2020. ADDMC: Weighted model counting with algebraic decision diagrams. In Proc. of AAAI.
Durand, A.; Haak, A.; Kontinen, J.; and Vollmer, H. 2021. Descriptive complexity of #P functions: A new perspective. Journal of Computer and System Sciences.
Froleyks, N.; Heule, M.; Iser, M.; Järvisalo, M.; and Suda, M. 2021. SAT competition 2020. Artificial Intelligence.
Golia, P.; Roy, S.; and Meel, K. S. 2020. Manthan: A data-driven approach for Boolean function synthesis. In Proc. of CAV.
Golia, P.; Roy, S.; and Meel, K. S. 2021. Program Synthesis as Dependency Quantified Formula Modulo Theory. In Proc. of IJCAI.
Golia, P.; Slivovsky, F.; Roy, S.; and Meel, K. S. 2021. Engineering an efficient boolean functional synthesis engine. In Proc. of ICCAD.
Gomes, C. P.; Sabharwal, A.; and Selman, B. 2021. Model counting. In Handbook of Satisfiability.
Haak, A.; and Vollmer, H. 2019. A model-theoretic characterization of constant-depth arithmetic circuits. Annals of Pure and Applied Logic.
Jiang, J.-H. R. 2009. Quantifier elimination via functional composition. In Proc. of CAV.
Jiang, J. R.; Lin, H.; and Hung, W. 2009. Interpolating functions from large Boolean relations. In Proc. of ICCAD.
John, A. K.; Shah, S.; Chakraborty, S.; Trivedi, A.; and Akshay, S. 2015. Skolem functions for factored formulas. In Proc. of FMCAD.
Kroening, D.; and Strichman, O. 2016. Decision procedures. Springer.
Li, C. M.; and Manya, F. 2021. MaxSAT, hard and soft constraints. In Handbook of Satisfiability.
Prabhu, S.; Fedyukovich, G.; Madhukar, K.; and D'Souza, D. 2021. Specification synthesis with constrained Horn clauses. In Proc. of PLDI.
Rabe, M. N. 2019. Incremental Determinization for Quantifier Elimination and Functional Synthesis. In Proc. of CAV.
Rabe, M. N.; and Seshia, S. A. 2016. Incremental Determinization. In Proc. of SAT.
Rabe, M. N.; Tentrup, L.; Rasmussen, C.; and Seshia, S. A. 2018. Understanding and extending incremental determinization for 2QBF. In Proc. of CAV.
Sharma, S.; Roy, S.; Soos, M.; and Meel, K. S. 2019. GANAK: A Scalable Probabilistic Exact Model Counter. In Proc. of IJCAI.
Shukla, A.; Möhle, S.; Kauers, M.; and Seidl, M. 2022. Outercount: A first-level solution-counter for quantified boolean formulas. In Proc. of CICM.
Soos, M.; and Meel, K. S. 2019. BIRD: engineering an efficient CNF-XOR SAT solver and its applications to approximate model counting. In Proc. of AAAI.
Soos, M.; Nohl, K.; and Castelluccia, C. 2009. Extending SAT Solvers to Cryptographic Problems. In Proc. of SAT.
Tabajara, L. M.; and Vardi, M. Y. 2017. Factored Boolean functional synthesis. In Proc. of FMCAD.
Thurley, M. 2006.
sharpSAT – counting models with advanced component caching and implicit BCP. In Proc. of SAT.
Yang, J.; and Meel, K. S. 2023. Rounding Meets Approximate Model Counting. In Proc. of CAV.
Optimizing ADMM and Over-Relaxed ADMM Parameters for Linear Quadratic Problems Jintao Song1,2, Wenqi Lu3,4, Yunwen Lei5, Yuchao Tang6, Zhenkuan Pan2, Jinming Duan1† 1 School of Computer Science, University of Birmingham, UK 2 College of Computer Science and Technology, Qingdao University, China 3 Department of Computing and Mathematics, Manchester Metropolitan University, UK 4 Centre for Computational Science and Mathematical Modelling, Coventry University, UK 5 Department of Mathematics, University of Hong Kong, HK 6 School of Mathematics and Information Science, Guangzhou University, China Abstract The Alternating Direction Method of Multipliers (ADMM) has gained significant attention across a broad spectrum of machine learning applications. Incorporating the overrelaxation technique shows potential for enhancing the convergence rate of ADMM. However, determining optimal algorithmic parameters, including both the associated penalty and relaxation parameters, often relies on empirical approaches tailored to specific problem domains and contextual scenarios. Incorrect parameter selection can significantly hinder ADMM’s convergence rate. To address this challenge, in this paper we first propose a general approach to optimize the value of penalty parameter, followed by a novel closed-form formula to compute the optimal relaxation parameter in the context of linear quadratic problems (LQPs). We then experimentally validate our parameter selection methods through random instantiations and diverse imaging applications, encompassing diffeomorphic image registration, image deblurring, and MRI reconstruction. 1 Introduction ADMM is a versatile algorithm with applications spanning various domains, including compressed sensing (Hou, Li, and Zhang 2022; Liu et al. 2023), image processing (Chan, Wang, and Elgendy 2016; Yazaki, Tanaka, and Chan 2019), and machine learning (Li et al. 2022; Zhou and Li 2023). Although introduced in the 1970s for optimization, its roots can be traced back to the 1950s as a method to solve elliptic and parabolic partial difference equations (Boyd et al. 2011). ADMM leverages the convergence strengths of the method of multipliers and the decomposability property of dual ascent. It is particularly useful in addressing convex optimization of considerable scale, beyond the capacity of conventional solvers. The ongoing research and outstanding algorithmic performance have significantly contributed to its widespread adoption, highlighting the growing importance of exploring its theoretical properties, particularly regarding parameter selection (Ghadimi et al. 2014; Wang et al. 2019). ADMM, from a technical viewpoint, decomposes complex optimization problems into manageable sub-problems, †Corresponding author: [email protected] Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. often solvable using point-wise, closed-form solvers (Cand`es et al. 2011; Lu et al. 2016; Thorley et al. 2021; Jia et al. 2021; Duan et al. 2023). It proceeds by iteratively updating these sub-problems alternately until a solution meeting the original problem’s objectives and constraints is attained. Within ADMM, the augmented Lagrange function incorporates penalty terms associated with the constraints. The penalty parameters determine the strength of these penalty terms. As highlighted in (Deng and Yin 2016), the convergence rate of ADMM is directly impacted by these penalty parameters. 
The optimal selection of such parameters can significantly enhance the algorithm's convergence rate. However, the lack of a universal method to compute these parameters optimally remains a challenge.

The convergence rate of ADMM can be further accelerated by leveraging information from prior iterations during the computation of subsequent iterations. Such a technique is known as over-relaxation and is often used in conjunction with ADMM (De Pierro and Iusem 1986; Zhang et al. 2020). Numerous research endeavors have been devoted to defining appropriate values for the resultant relaxation parameter. Notably, in the study conducted by (Eckstein 1994), the authors proposed a widely acknowledged empirical range of values, typically falling within [1.5, 1.8], which however is not always the case according to our findings in this paper. Despite a multitude of papers presenting specific guidelines for selecting this parameter, many real-world application papers (Stellato et al. 2020; Duan et al. 2023) still resort to empirically determined values. This reliance on empirical choices is due to the absence of a straightforward and efficient method that can promptly and optimally determine this relaxation parameter.

The objective of this paper is to introduce novel methods for the selection of optimal parameters within both ADMM and over-relaxed ADMM. As an example, we focus on linear quadratic problems (LQPs), particularly with applications tailored to image processing. The theories developed in this paper could offer valuable insights for addressing other non-quadratic problems, such as non-smooth L1 optimization. More specifically, we have identified four key contributions of this paper, summarized as follows:
• We perform a comprehensive convergence analysis of the ADMM algorithm as applied to LQPs, effectively demonstrating its unconditional convergence within the context of LQPs. This is achieved by initially converting the ADMM iterations into its fixed-point iterations, which facilitates the derivation of the iteration matrix. Subsequently, we theoretically show that the spectral radius of the iteration matrix is bounded by 1, regardless of the value of the penalty parameter.
• We propose a general optimization method for the selection of the optimal penalty parameter in ADMM. We achieve this by utilizing numerical gradient descent to minimize the spectral radius of the iteration matrix. Moreover, in specific scenarios like image deblurring and MRI reconstruction, we show the existence of a closed-form solution for accurately determining the optimal penalty parameter within ADMM.
• We establish, for the first time, the existence of a closed-form solution for determining the relaxation parameter in over-relaxed ADMM. We find that for any arbitrary value of the penalty parameter, there exists a corresponding relaxation parameter, computed from the closed-form solution, that minimizes the spectral radius of the iteration matrix. Consequently, we can transform the original joint optimization problem, with respect to both penalty and relaxation parameters, into a single-variable optimization problem focused only on the penalty parameter.
• We verify our proposed parameter selection methods through random instantiations and practical real-world imaging applications, encompassing diffeomorphic image registration, image deblurring, and MRI reconstruction. This approach sets us apart from previous methods, e.g., (Ghadimi et al.
2014), which rely only on simulated data for validation purposes.

2 Related Works
(Boley 2013) studied the convergence rate of ADMM for both quadratic and linear programs via spectral analysis based on a novel matrix recurrence. While acknowledging that the penalty parameters of ADMM can influence its convergence rate, they did not offer guidance on how to select these parameters. To address this issue, (Ghadimi et al. 2014) reformulated ADMM into a fixed-point iteration system to analyze the impact of parameters on the convergence rate of ADMM and over-relaxed ADMM. By minimizing the spectral radius of the iteration matrix, they successfully derived optimal penalty and relaxation parameters for quadratic programming. (Teixeira et al. 2015) extended the applicability of Ghadimi's theory by transforming distributed quadratic programming into equivalent constrained quadratic programming. (França and Bento 2016) introduced a method that determines the relaxation parameter for semi-definite programming through the analysis of the problem's condition number. (Boyd et al. 2011) suggested an empirical parameter update strategy for ADMM's penalty parameters. The idea is to maintain a proportional relationship between the norms of the primal and dual residuals, ensuring their convergence to zero within a specified factor. (Xu, Figueiredo, and Goldstein 2017) proposed an adaptive ADMM approach by applying the Barzilai-Borwein spectral method to the original ADMM algorithm. Their method allows penalty parameters to be updated dynamically in each iteration based on the primal and dual residuals. Inspired by this work, (Mavromatis, Foti, and Vavalis 2020) introduced a weighted penalty parameter ADMM algorithm for solving optimal power flow problems. Their approach involves the computation of absolute values from the admittance matrix and the Hessian matrix in each ADMM iteration. These values are then used to recalibrate the penalty parameters, aiming to refine the accuracy of parameter estimation.

However, certain limitations exist in the current research landscape. Firstly, many methods (Boyd et al. 2011; Xu, Figueiredo, and Goldstein 2017; Wohlberg 2017; Mhanna, Verbič, and Chapman 2018) rely on primal and dual residuals to estimate optimal parameters during the iterations, but closed-form or explicit pre-iteration parameter selection approaches are often lacking. Secondly, existing parameter selection techniques based on the spectral analysis of the iteration matrix (Ghadimi et al. 2014; França and Bento 2016) predominantly focus on specific problem types (e.g., the standard quadratic problem with L being an identity matrix). These methods require the spectral radius of the iteration matrix to be computable in an explicit form, which restricts their applicability and generalization ability (Stellato et al. 2020). In this paper, we propose effective methods to address these two challenges.

3 Methodology
This section starts with the introduction of essential notations utilized in the subsequent formulations. We proceed by presenting the concept of fixed-point iterations, which serves as a foundational element for both the convergence analysis and parameter selection processes. Following this, we apply both ADMM and its over-relaxed variant to address LQPs. In the final stages, we propose novel methods for selecting the penalty and relaxation parameters.
This is accomplished by converting ADMM and over-relaxed ADMM into the form of fixed-point iterations, followed by spectral radius analysis.

3.1 Notations and Fixed-Point Iterations
Let R and C denote respectively the set of real and complex numbers, R++ denote the set of positive numbers, S^{n×n} denote the set of n × n matrices, and In (or I) be the n × n identity matrix. For a square matrix T and its corresponding eigenvalues λ's, we define the nth smallest eigenvalue of T as λn(T), and the spectral radius of T as ρ(T). Fixed-point iterations involve the iterative process

u^{k+1} = T u^k + c,

where T ∈ S^{n×n} is known as the iteration matrix, u ∈ R^n, and c ∈ R^n. It was shown in (Ghadimi et al. 2014) that the convergence factor ζ of this fixed-point iteration system is equal to ρ(T). Here, the convergence factor ζ is defined as

ζ ≜ sup_{k: u^k ≠ u*} ∥u^{k+1} − u*∥ / ∥u^k − u*∥,

where ∥·∥ represents the L2 norm and u* denotes the optimal solution (i.e., the so-called ground truth). The sequence {u^k} is Q-sublinear if ζ = 1, Q-linear if ζ < 1, and Q-superlinear if ζ = 0. Throughout this paper, the letter Q has been omitted when referring to the convergence rate. For linearly convergent sequences with ζ ∈ (0, 1), if we define tε as the smallest iteration count that ensures ∥u^{k+1} − u*∥ < ε for all k > tε, then tε can be calculated by (log(ε) − log(σ))/log(ζ), where σ denotes the worst-case distance between u^0 and u*, i.e., ∥u^0 − u*∥ < σ. This suggests that by reducing the value of the convergence factor ζ, the iteration count can be decreased, leading to a faster convergence rate.

3.2 ADMM for LQPs
The LQPs for image processing we study in this paper have the following structure:

min_u (µ/2)∥Au − f∥² + (1/2)∥Lu∥²,   (1)

where µ ∈ R++ is the regularization parameter; A ∈ R^{m×n} or C^{m×n} (m ≤ n) is an encoding matrix; u ∈ R^n or C^n is the unknown vector; f ∈ R^m or C^m is the input vector; and L ∈ R^{n×n} is a regularization matrix. The value of µ determines the output quality, where smaller values of µ tend to yield smoother results. By differentiating (1) with respect to u and setting the respective derivative to zero, we have the following linear system:

(µAᵀA + LᵀL)u = µAᵀf.   (2)

When addressing the solution of Equation (2), two primary challenges arise. Firstly, in certain scenarios like our MRI reconstruction and diffeomorphic image registration, where (µAᵀA + LᵀL) may be positive semi-definite, inverting such a matrix becomes unfeasible. Secondly, in the context of higher-dimensional cases like 3D medical image registration (Thorley et al. 2021), even if the matrix (µAᵀA + LᵀL) remains positive definite, the matrix inversion becomes computationally expensive. To address these two issues, we propose to use ADMM to handle the original problem (1), as an alternative to using the normal equation to solve (2).

To apply ADMM, we introduce an auxiliary variable w ∈ R^n, a Lagrangian multiplier b ∈ R^n, and a penalty parameter θ ∈ R++, transforming (1) into the following augmented Lagrange function:

L(u, w; b) = (µ/2)∥Au − f∥² + (1/2)∥Lw∥² + (θ/2)∥w − u − b∥².   (3)

To optimize (3) with ADMM, we need to decompose it into two sub-problems with respect to u and w and then update the Lagrangian multiplier b until the process converges. Algorithm 1 outlines the optimization process using ADMM. In Algorithm 1, we have w^{k+1} = (LᵀL + θI)⁻¹(θu^k + θb^k) and u^{k+1} = (µAᵀA + θI)⁻¹(θw^{k+1} − θb^k + µAᵀf).
It is worth noting that while matrix inversion is applied to both variables w^{k+1} and u^{k+1}, fast solvers exist in specific cases due to the distinctive structure of AᵀA and LᵀL. For instance, in diffeomorphic image registration, AᵀA takes on a rank-1 form, allowing efficient inversion through the Sherman-Morrison equation (Bartlett 1951; Thorley et al. 2021). Similarly, in MRI reconstruction and diffeomorphic image registration, LᵀL can be effectively diagonalized using the discrete Fourier transformation basis functions (Goldstein and Osher 2009; Duan et al. 2023). Consequently, the application of ADMM to solve LQPs offers distinct advantages.

Algorithm 1: ADMM for LQPs
Input: matrices A and L; parameters µ and θ
Initialize: u⁰ and b⁰
Repeat:
  w^{k+1} = arg min_w (1/2)∥Lw∥² + (θ/2)∥w − u^k − b^k∥²
  u^{k+1} = arg min_u (µ/2)∥Au − f∥² + (θ/2)∥w^{k+1} − u − b^k∥²
  b^{k+1} = b^k + u^{k+1} − w^{k+1}
until some stopping criterion is met

Algorithm 2: Over-relaxed ADMM for LQPs
Input: matrices A and L; parameters µ, θ, and α
Initialize: u⁰ and b⁰
Repeat:
  w^{k+1} = arg min_w (1/2)∥Lw∥² + (θ/2)∥w − u^k − b^k∥²
  u^{k+1} = arg min_u (µ/2)∥Au − f∥² + (θ/2)∥αw^{k+1} + (1 − α)u^k − u − b^k∥²
  b^{k+1} = b^k + u^{k+1} − αw^{k+1} − (1 − α)u^k
until some stopping criterion is met

Theorem 1. In order to determine the optimal penalty parameter θ* in ADMM automatically, we need to transform the ADMM iterations in Algorithm 1 into the following fixed-point iteration system, solely with respect to the variable u:

u^{k+1} = (I + Q)u^k − (µAᵀA + θI)⁻¹θµAᵀf,   (4)

where I + Q is the iteration matrix with Q defined as

Q = θ(µAᵀA + θI)⁻¹((LᵀL + θI)⁻¹(θI − µAᵀA) − I).   (5)

Next, given a value of µ, we can prove

ρ(I + Q) ≤ 1,   (6)

regardless of the value of θ. As per Section 3.1, we know that the convergence factor ζ of Algorithm 1 is equal to the spectral radius of the iteration matrix. As such, ζ is bounded by 1, meaning Algorithm 1 or (4) is unconditionally convergent.

Proof. Detailed derivations proving the equivalence between Algorithm 1 and the fixed-point iteration system (4), as well as Inequality (6), have been provided in Appendix 1 of the arXiv version of this paper.

Next, we search for the optimal parameter θ* that minimizes the convergence rate of Algorithm 1. Since ζ is dependent on the penalty parameter θ, the objective is to identify a value for θ that minimizes the convergence factor ζ. For this, we define the following minimization problem:

min_θ ζ(θ),   (7)

where ζ(θ) = ρ(I + Q(θ)). From Inequality (6) in Theorem 1 we have λi(Q(θ)) ∈ [−1, 0], and we can also easily derive λi(I + Q(θ)) = 1 + λi(Q(θ)). As such, we have ρ(I + Q(θ)) = 1 + λn(Q(θ)), with which the minimization problem (7) can be converted to

min_θ λn(Q(θ)).   (8)

Though the minimization problem (8) is a one-dimensional optimization problem with respect to only θ, computing θ* directly is not trivial. This is the reason why a generally applicable method for optimizing θ is still lacking. Previous works (Ghadimi et al. 2014; Teixeira et al. 2015) were based on the assumption that λn(Q) can be explicitly written for spectral analysis. However, in practical applications such as the diffeomorphic registration in Section 4.2, this is a significant limitation. To address this challenge, we propose to use numerical gradient descent to optimize θ:

θ^{k+1} = θ^k − t∇λn(Q(θ^k)),   (9)

where t denotes the step size. In this study, we employed the central finite difference scheme to compute gradients.
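For concreteness, the following minimal NumPy sketch implements this search; it is our own illustration rather than the authors' code. It forms Q(θ) from Eq. (5) with dense matrices, estimates the gradient of λn(Q(θ)) by central differences, and takes the step in Eq. (9); the initial θ, step size t, and perturbation η below are arbitrary assumptions.

import numpy as np

def Q(theta, A, L, mu):
    # Q(theta) from Eq. (5), built with dense solves for illustration.
    n = A.shape[1]
    I = np.eye(n)
    inner = np.linalg.solve(L.T @ L + theta * I,
                            theta * I - mu * A.T @ A) - I
    return theta * np.linalg.solve(mu * A.T @ A + theta * I, inner)

def lam_max(theta, A, L, mu):
    # Largest eigenvalue of Q(theta); lambda_n in the paper's notation.
    return np.max(np.linalg.eigvals(Q(theta, A, L, mu)).real)

def optimize_theta(A, L, mu, theta0=1.0, t=0.5, eta=1e-4, iters=100):
    theta = theta0
    for _ in range(iters):
        # Central finite difference approximation of the gradient.
        grad = (lam_max(theta + eta, A, L, mu)
                - lam_max(theta - eta, A, L, mu)) / (2 * eta)
        theta = max(theta - t * grad, eta)  # keep theta positive
    return theta

In practice one would exploit the structure of AᵀA and LᵀL discussed above (rank-1 inversion, Fourier diagonalization) instead of forming dense inverses.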
Compared to the one-sided finite difference method, this scheme offers better numerical stability. It also provides a more accurate estimation of gradients. It is important to note that this gradient descent method is general, as it does not need to know the explicit form of the eigenvalues of the matrix Q. The definition of the central finite difference is given by

∇λn(Q(θ^k)) ≈ (λn(Q(θ^k + η)) − λn(Q(θ^k − η))) / (2η),

where η represents a small value. In our experiments, we set this value within the range of 10⁻⁵ to 10⁻³, which led to a satisfactory convergence of the gradient descent (9).

3.3 Over-Relaxed ADMM
The over-relaxation technique can be used in the ADMM algorithm to further accelerate the convergence rate of ADMM. This method is achieved by introducing an additional relaxation parameter α and replacing w^{k+1} in Algorithm 1 with αw^{k+1} + (1 − α)u^k. Algorithm 2 outlines the optimization process of the augmented Lagrange function (3) using over-relaxed ADMM. To investigate the influence of the relaxation parameter α on convergence, we transform Algorithm 2 into its fixed-point iteration system. Such a conversion is in line with the proof of Theorem 1 in the Appendix. The resulting fixed-point iteration system is given as follows:

u^{k+1} = (I + αQ)u^k − α(µAᵀA + θI)⁻¹θµAᵀf.

Then we can analyze the spectral radius of this matrix to determine the optimal relaxation parameter α*.

Theorem 2. The optimal α* can be directly calculated using the following closed-form formula:

α* = −2 / (λ1(Q(θ)) + λn(Q(θ))),   (10)

where Q(θ) is a matrix whose entries rely on the value of θ. As per Equation (10), we can compute the optimal relaxation parameter α* as long as a value of θ is given.

[Figure 1: Relationship between |1 + αλi(Q)| and the value of α, showing |1 + αλ1(Q)|, |1 + αλi(Q)| for i ∈ {2, . . . , n − 1}, and |1 + αλn(Q)|, together with the reflection and intersection points. The slope of each line before reflection is λi(Q). The spectral radius before the intersection point is governed by the green line, while after the reflection it is determined by the reflected red line. The intersection point corresponds to the optimal α* as well as the minimum spectral radius of the iteration matrix I + αQ.]

Proof. To prove Theorem 2, we begin with the following two-dimensional joint optimization problem:

min_{θ,α} ζ(θ, α),   (11)

where ζ(θ, α) = ρ(I + αQ(θ)). In order to express the spectral radius in terms of the eigenvalue structure, we first derive the equality λi(I + αQ(θ)) = 1 + αλi(Q(θ)), and the spectral radius ρ(I + αQ(θ)) is then defined as

max_i |1 + αλi(Q(θ))|, ∀i ∈ {1, ..., n}.   (12)

From Inequality (6) in Theorem 1, we know λi(Q(θ)) ∈ [−1, 0]. Based on this and (12), we plot Figure 1 to demonstrate the correlation between the absolute eigenvalues of the iteration matrix and the relaxation parameter. From this figure, it is straightforward to express the spectral radius as the following piecewise function:

ρ = 1 + αλn(Q)   if −1 − αλ1(Q) ≤ 1 + αλn(Q),
ρ = −1 − αλ1(Q)   if −1 − αλ1(Q) > 1 + αλn(Q),   (13)

where, with a slight abuse of notation, we use ρ to represent ρ(I + αQ(θ)). Once we have (13), our objective is to minimize it in order to enhance the convergence rate. From Figure 1 again, it becomes evident that the smallest spectral radius is located at the intersection point where the following equality holds:

−1 − αλ1(Q) = 1 + αλn(Q),

from which α can be computed in closed form as

α = −2 / (λ1(Q(θ)) + λn(Q(θ))),

which exactly verifies the validity of Equation (10).
If we now plug the optimal α* into (13), we can convert the joint minimization problem (11) into the following minimization problem:

min_{θ,α} ζ(θ, α) ⇒ min_θ (λ1(Q(θ)) − λn(Q(θ))) / (λ1(Q(θ)) + λn(Q(θ))),   (14)

which is a single-variable optimization problem with respect to only θ. This problem can be minimized with a numerical gradient descent method similar to Equation (9). Once θ* is found, α* can be computed using the closed-form solution (10). It is worth noting that even if θ is not optimal, the α computed via (10) can still accelerate convergence.

4 Experiments
In this section, we will first test the generalization ability of our proposed parameter selection method through random instantiations. Following that, we will apply the proposed parameter selection methods to diffeomorphic image registration, image deblurring, and MRI reconstruction. We will compare our optimal ADMM algorithm and its over-relaxed variant (oADMM) with gradient descent (GD), gradient descent with Nesterov's acceleration (GD-N) (Nesterov 1983; Bartlett and Duan 2021), gradient descent with Nesterov's acceleration and restart (GD-NR) (O'donoghue and Candes 2015; Bartlett and Duan 2021), as well as conjugate gradient (CG). In all of our experiments, we chose the step size in gradient-based methods using the Lipschitz constant of the corresponding problem. It is worth noting that optimal values for the penalty parameters can be determined analytically for the image deblurring and MRI reconstruction problems. However, for image registration, numerical gradient descent is required to compute these parameters.

4.1 Generalization Ability
Emphasizing that our approach is model-driven, the selection of parameters [1] relies on the matrices A and L in the minimization problem (1). As such, the measure of generalization ability lies in how effectively our method performs as A and L undergo variations, which is in contrast to data-driven methods, where the generalization ability is often examined using multiple different datasets. We present Figure 2 to demonstrate the generalization ability of our approach, where the analysis is based on 50 random instantiations of A ∈ R^{200×50} and L ∈ R^{200×50} while keeping f and µ fixed. For ADMM, we employed numerical gradient descent to minimize (8) with respect to θ. For oADMM, we utilized numerical gradient descent to minimize (14) with regard to θ, whilst the optimal value of α was calculated using (10) once θ* was found. We note that the optimal values of θ for both ADMM and oADMM are similar and that the optimal values of α are not within [1.5, 1.8] as suggested in (Eckstein 1994). As evident from Figure 2, the calculated optimal values consistently result in faster convergence rates for both ADMM and oADMM, reaffirming the generalization ability of our proposed parameter selection methods.

4.2 Diffeomorphic Image Registration
Computing a diffeomorphic deformation can be treated as modelling a dynamical system (Beg et al. 2005), given by an ordinary differential equation (ODE): ∂ϕ/∂t = v_t(ϕ_t),

[1] The parameter selection also relies on the regularization parameter µ, which we fix as a constant in this paper.
4.2 Diffeomorphic Image Registration

Computing a diffeomorphic deformation can be treated as modelling a dynamical system (Beg et al. 2005), given by an ordinary differential equation (ODE): ∂ϕ/∂t = vt(ϕt), where ϕ0 = Id is the identity transformation and vt indicates the velocity field at time t ∈ [0, 1]. The ODE can be solved by Euler integration, in which the deformation field ϕ is calculated as the composition of a series of small deformations:

ϕ = (Id + vt_{N−1}/N) ◦ ··· ◦ (Id + vt_1/N) ◦ (Id + vt_0/N).

If the velocity fields vt_i are sufficiently small whilst satisfying some smoothness constraints, the resulting composition is a diffeomorphic deformation. To compute the velocity fields whilst satisfying these diffeomorphic constraints, we minimize the following linear quadratic problem (Thorley et al. 2021)

min_{vx,vy} (µ/2)∥⟨Ix, vx⟩ + ⟨Iy, vy⟩ + It∥² + (1/2)∥∇vx∥² + (1/2)∥∇vy∥²,   (15)

where Ix, Iy ∈ R^n denote the spatial derivatives of the image, It ∈ R^n represents the temporal derivative of the image, and vx, vy ∈ R^n denote the velocity fields in the x and y directions. In this case, by setting

A^T A = [ diag(⟨Ix, Ix⟩)  diag(⟨Ix, Iy⟩)
          diag(⟨Iy, Ix⟩)  diag(⟨Iy, Iy⟩) ] ∈ R^(2n×2n)

and

L^T L = [ ∇^T∇   0
           0    ∇^T∇ ] ∈ R^(2n×2n),

we can use numerical gradient descent to compute optimal parameters for both ADMM and oADMM.

In Figure 3, we show results obtained through the introduced diffeomorphic registration technique. We examine the impact of the penalty parameter θ in both ADMM and oADMM, and then evaluate the convergence efficiency of the different algorithms. Given a pair of images (depicted as source and target in the figure), we can compute a deformation (shown in the bottom left panel) that ensures a positive Jacobian determinant (shown in the bottom middle panel) for all pixel positions. In the top right panel, we show the correlation between the spectral radius of the iteration matrix and θ in both ADMM and oADMM. As can be seen, there exists a unique optimal value where the spectral radius is minimized. As such, when using numerical gradient descent, it is possible to find the optimal value of θ that can considerably reduce iteration counts. This panel also illustrates that the θ∗ producing the smallest spectral radius for oADMM closely aligns with that of ADMM.

[Figure 3 (images and plots omitted): panels Source, Target, Spectral Radius, deformations at the 1st, 6th, 12th, and 18th outer iterations, and a convergence plot of ADMM, oADMM, GD, GD-N, GD-NR, and CG in log(∥u^k − u∗∥).]
Figure 3: Illustration of diffeomorphic image registration results, visualization of the correlation between spectral radius and θ, and comparison of convergence rates of algorithms. The x-axes of the two plots in the third column represent the values of θ and iteration numbers, respectively.

Furthermore, due to the two-loop iterative nature of diffeomorphic image registration (we did not use the pyramid implementation as in (Thorley et al. 2021), so we ended up with a two-loop algorithm comprising inner ADMM/oADMM iterations and outer warping iterations), the data term of (15) undergoes slight changes at each iteration of the outer loop.
These changes, however, do not significantly influence the value of θ∗, as evident from the top right panel. Therefore, given a specific value of µ, it is sufficient to use gradient descent to search for θ∗ at each outer iteration. Finally, in the bottom right panel, the convergence rates of the different algorithms are compared. As is evident, the parameter-optimized oADMM algorithm remains the fastest in terms of convergence rate.

4.3 Image Deblurring

In this application, we look at a phantom test image. The image went through a Gaussian blur of size 7 × 7 and standard deviation 2, followed by additive zero-mean white Gaussian noise with standard deviation 10^−4. The top left and middle panels of Figure 4 depict the original and blurred images, respectively. To deblur the image we minimize the problem

min_u (µ/2)∥Ku − f∥² + (1/2)∥u∥²,   (16)

where K ∈ S^(n×n) is the matrix representing the blur operator, u ∈ R^n is the vectorized unknown clean image, and f ∈ R^n is the vectorized input image. By setting A = K and L = I, the matrix Q in (5) for this application has the form

Q = θ(µK^T K + θI)^(−1)((I + θI)^(−1)(θI − µK^T K) − I).

Since K is a convolution matrix derived from the Gaussian kernel function, the eigenvalues of K^T K can be calculated using the two-dimensional discrete Fourier transform (Capus and Brown 2003). With λ(K^T K), we can derive the extreme eigenvalues of Q as

λn(Q) = −(θ + θµλi(K^T K)) / (θ² + µλi(K^T K) + θ + θµλi(K^T K)),   (17)

where i is either 1 or n. Since λn(Q) in this case can be written explicitly, we can derive closed-form solutions for the parameters in ADMM and over-relaxed ADMM. In Theorem 3, we give their optimal values.

Theorem 3. Firstly, to tackle the optimization problem (16) using ADMM, given a regularization parameter µ ∈ R++, the optimal value of the penalty parameter in ADMM can be expressed in closed form as

θ∗ = √µ, if µ ≤ 1;  θ∗ = 1, otherwise,

which was derived by minimizing the value of λn(Q) in (17). If over-relaxed ADMM is used to tackle the optimization problem (16), the optimal penalty and relaxation parameters are given by θ∗ = 1 and α∗ = 2, among which θ∗ was determined by minimizing the problem (14) with λ1(Q) and λn(Q) defined in (17), and α∗ was then computed using (10) with θ∗.

Proof. Detailed derivations are given in Appendix 2 of the arXiv version of this paper.

[Figure 4 (images and plots omitted): top-row panels Original, Blurred, Recovered; second-row convergence plots compare θ∗ = 1 against θ∗ ± 0.1 and θ∗ ± 0.2 at α = 1, the pair (α∗, θ∗) = (2, 1) against α ∈ {1.6, 1.7, 1.8, 1.9} at θ = 1, and ADMM, oADMM, GD, GD-N, GD-NR, CG.]
Figure 4: Demonstration of image deblurring effects and convergence rates of different algorithms. The x-axis and y-axis of each plot in the second row represent iteration numbers and log(∥u^k − u∗∥), respectively.

In Figure 4, the top right panel displays the deblurred image from oADMM (comparable results were achieved with ADMM), which closely resembles the original image. Note that we set the regularization parameter µ to 10³ for this experiment. The bottom left and middle panels demonstrate that with the optimal θ∗ and α∗ there is a clear enhancement of the model's convergence, and that the optimal values of θ for both ADMM and oADMM are the same in this case. The bottom right panel shows that ADMM, CG, and oADMM exhibit superior performance to GD, GD-N, and GD-NR. Upon a detailed examination of the zoomed-in window, CG addresses this quadratic problem very well, albeit still needing multiple iterations to attain convergence. In contrast, oADMM achieves convergence in a single step (the spectral radius in this case is close to zero, leading to a superlinear convergence rate), outperforming all compared algorithms.
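Theorem 3's parameters are cheap to evaluate; the sketch below (ours, assuming periodic boundary conditions so that the eigenvalues of K^T K come from the 2-D DFT of the blur kernel, in the spirit of (Capus and Brown 2003)) computes them:

```python
# Sketch of Theorem 3: closed-form ADMM/oADMM parameters for the
# deblurring problem (16), plus DFT-based eigenvalues of K^T K.
import numpy as np

def admm_theta_star(mu):
    # Plain ADMM: theta* = sqrt(mu) if mu <= 1, and 1 otherwise.
    return np.sqrt(mu) if mu <= 1.0 else 1.0

def oadmm_parameters():
    # Over-relaxed ADMM: theta* = 1 and alpha* = 2.
    return 1.0, 2.0

def ktk_eigenvalues(kernel, shape):
    # Assumed periodic convolution: embed and circularly shift the kernel
    # so the 2-D DFT yields the eigenvalues of K; |.|^2 gives K^T K.
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.abs(np.fft.fft2(pad)) ** 2

# 7x7 Gaussian kernel with standard deviation 2, as in the experiment.
x = np.arange(7) - 3
g = np.exp(-(x ** 2) / (2 * 2.0 ** 2))
kernel = np.outer(g, g) / np.outer(g, g).sum()

lam = ktk_eigenvalues(kernel, (256, 256))
mu = 1e3
print("ADMM theta* =", admm_theta_star(mu))        # 1.0 since mu > 1
print("oADMM (theta*, alpha*) =", oadmm_parameters())
print("lambda(K^T K) in [%.3g, %.3g]" % (lam.min(), lam.max()))
```

Note also that at θ∗ = 1 Equation (17) gives λi(Q) = −(1 + µλi)/(2(1 + µλi)) = −1/2 for every i, so λ1 + λn = −1, (10) yields α∗ = 2, and the spectral radius |1 + 2·(−1/2)| vanishes — consistent with the single-step convergence observed above.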
4.4 MRI Reconstruction

To reconstruct MR images we minimize the problem

min_u (µ/2)∥DFu − f∥² + (1/2)∥∇u∥²,   (18)

where D ∈ R^(m×n) (m < n) is the sampling matrix, F ∈ C^(n×n) is the Fourier transform matrix, u ∈ C^n is a complex-valued MR image stacked as a column vector, f ∈ C^m is the undersampled k-space data, and ∇ denotes the first-order gradient operator. By setting A = DF and L = ∇, the matrix Q in (5) for this application has the form

Q = θ(µM1 + θI)^(−1)((∇^T∇ + θI)^(−1)(θI − µM1) − I),   (19)

where M1 = F^T D^T DF. Due to the use of periodic boundary conditions, ∇^T∇ can be efficiently diagonalized in the form F^T GF, where G is a diagonal matrix. Equation (19) can therefore be simplified to F^T M2 F, where

M2 = θ(µD^T D + θI)^(−1)((G + θI)^(−1)(θI − µD^T D) − I),

which is a diagonal matrix. The eigenvalues of F^T M2 F are simply the values along the diagonal of M2. If we define λi(G) as the i-th smallest eigenvalue of G, and di as the diagonal value of D^T D at the position where λi(G) is indexed from G, the extreme eigenvalues of Q can be derived as

λn(Q) = −(θλi(G) + θµdi) / (θ² + µdiλi(G) + θλi(G) + θµdi),

where i ∈ {1, ..., n}. Since λn(Q) can be written explicitly, we can derive the closed-form solution for θ in ADMM; Theorem 4 presents the optimal value for this parameter. If over-relaxed ADMM is used to solve (18), a closed-form solution still exists for θ; it is, however, too cumbersome to derive in this case. As such, the penalty parameter θ in over-relaxed ADMM was searched for by gradient descent, and once θ∗ was found, the optimal relaxation parameter α∗ was directly obtained using Equation (10) with θ∗.

Theorem 4. To tackle the optimization problem (18) using ADMM, given a regularization parameter µ ∈ R++, the optimal value of the penalty parameter in ADMM can be expressed in closed form as

θ∗ = √(µa),                 if µ ≤ 2b − a;
θ∗ = √(µab / (µ + a − b)),  if 2b − a < µ ≤ a;
θ∗ = µ,                     if a < µ ≤ b;
θ∗ = √(µcb / (µ + c − b)),  otherwise,

where a, b, and c are defined as

a = λ1((D^T D) ⊙ G),  b = λ1((1 − D^T D) ⊙ G),  c = λx((D^T D) ⊙ G),

where ⊙ is the Hadamard product; λ1((D^T D) ⊙ G) denotes the smallest eigenvalue of (D^T D) ⊙ G, excluding the eigenvalues corresponding to zero entries on the diagonal of D^T D; λ1((1 − D^T D) ⊙ G) denotes the smallest eigenvalue of (1 − D^T D) ⊙ G, excluding the eigenvalues corresponding to zero entries on the diagonal of 1 − D^T D; and λx((D^T D) ⊙ G) represents the largest eigenvalue of (D^T D) ⊙ G, excluding the eigenvalues corresponding to zero entries along the diagonal of D^T D.

Proof. Detailed derivations are given in Appendix 3 of the arXiv version of this paper.

[Figure 5 (images and plots omitted): top-row panels Original, Undersampled, Recovered; second-row convergence plots compare θ∗ = 0.0066 against θ∗ ± 0.002 and θ∗ ± 0.004 at α = 1, α∗ = 1.43 against α ∈ {1.6, 1.7, 1.8, 1.9} at θ∗ = 0.0066, and ADMM, oADMM, GD, GD-N, GD-NR, CG.]
Figure 5: Demonstration of MRI reconstruction results and comparison of convergence rates among algorithms. The x-axis and y-axis of each plot in the second row represent iteration numbers and log(∥u^k − u∗∥), respectively.
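Because Q diagonalizes as F^T M2 F, its whole spectrum is available element-wise; the sketch below (ours — the 1-D size, sampling mask, and parameter values are illustrative assumptions, not the paper's data) computes the eigenvalues of Q from the diagonals of G and D^T D and derives α∗ via Equation (10):

```python
# Sketch of the MRI case: the eigenvalues of Q come element-wise from the
# diagonal of G (eigenvalues of grad^T grad) and the 0/1 diagonal of D^T D.
import numpy as np

def mri_Q_eigenvalues(g, d, theta, mu):
    # lambda_i(Q) = -(theta*g_i + theta*mu*d_i) /
    #               (theta^2 + mu*d_i*g_i + theta*g_i + theta*mu*d_i)
    num = theta * g + theta * mu * d
    den = theta ** 2 + mu * d * g + theta * g + theta * mu * d
    return -num / den

n = 256
rng = np.random.default_rng(2)
freq = np.fft.fftfreq(n)
g = 4.0 * np.sin(np.pi * freq) ** 2          # periodic 1-D grad^T grad spectrum
d = (rng.random(n) < 0.5).astype(float)      # toy 50% sampling mask

mu, theta = 1e2, 0.0066                      # illustrative values
lam = mri_Q_eigenvalues(g, d, theta, mu)
l1, ln = lam.min(), lam.max()
alpha_star = -2.0 / (l1 + ln)                # Equation (10)
print(f"rho at alpha=1: {np.abs(1 + lam).max():.4f}; alpha* ~ {alpha_star:.2f}")
```

This element-wise structure is also what makes the case analysis over a, b, and c in Theorem 4 tractable.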
In Figure 5, we reconstruct a cardiac MR image from k-space. The original image (displayed in the top left panel) was first transformed into k-space using the Fourier transform. Then 50% of the data there was taken using a Cartesian sampling mask, displayed over the original image. This undersampled data was then corrupted by additive zero-mean white Gaussian noise with standard deviation 1 to form f in (18). The reconstruction (top right), despite some slight blurring due to the smooth regularization, clearly enhances image quality compared to that displayed in the top middle panel, which is a direct reconstruction of f using the inverse Fourier transform. The bottom left and middle panels of this figure illustrate that the choice of θ and α has a significant impact on the convergence rate and that our proposed methods result in faster convergence. Thanks to the utilization of these optimal parameters, we observe a clear superiority of our ADMM and oADMM over GD, its accelerated variants, and CG in terms of convergence efficiency.

5 Conclusion

In this paper, we presented automated techniques for selecting optimal penalty and relaxation parameters within the framework of ADMM and over-relaxed ADMM for linear quadratic problems. Our approaches involve a numerical gradient descent method for estimating the penalty parameter and a novel closed-form solution for determining the optimal relaxation parameter. We verified the generalizability and efficacy of these approaches through random instantiations and real-world imaging applications.

Acknowledgements

Jintao Song is partially supported by the Chinese Scholarship Council (project number: 202208370130).

References

Bartlett, J.; and Duan, J. 2021. Accelerated first order methods for variational imaging. arXiv preprint arXiv:2110.02813.
Bartlett, M. S. 1951. An inverse matrix adjustment arising in discriminant analysis. The Annals of Mathematical Statistics, 22(1): 107–111.
Beg, M. F.; Miller, M. I.; Trouvé, A.; and Younes, L. 2005. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International Journal of Computer Vision, 61: 139–157.
Boley, D. 2013. Local linear convergence of the alternating direction method of multipliers on quadratic or linear programs. SIAM Journal on Optimization, 23(4): 2183–2207.
Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J.; et al. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1): 1–122.
Candès, E. J.; Li, X.; Ma, Y.; and Wright, J. 2011. Robust principal component analysis? Journal of the ACM (JACM), 58(3): 1–37.
Capus, C.; and Brown, K. 2003. Fractional Fourier transform of the Gaussian and fractional domain signal support. IEE Proceedings - Vision, Image and Signal Processing, 150(2): 99–106.
Chan, S. H.; Wang, X.; and Elgendy, O. A. 2016. Plug-and-play ADMM for image restoration: fixed-point convergence and applications. IEEE Transactions on Computational Imaging, 3(1): 84–98.
De Pierro, A. R.; and Iusem, A. 1986. A relaxed version of Bregman's method for convex programming. Journal of Optimization Theory and Applications, 51: 421–440.
Deng, W.; and Yin, W. 2016. On the global and linear convergence of the generalized alternating direction method of multipliers. Journal of Scientific Computing, 66: 889–916.
Duan, J.; Jia, X.; Bartlett, J.; Lu, W.; and Qiu, Z. 2023. Arbitrary order total variation for deformable image registration.
Pattern Recognition, 109318.
Eckstein, J. 1994. Parallel alternating direction multiplier decomposition of convex programs. Journal of Optimization Theory and Applications, 80(1): 39–62.
França, G.; and Bento, J. 2016. An explicit rate bound for over-relaxed ADMM. In 2016 IEEE International Symposium on Information Theory (ISIT), 2104–2108. IEEE.
Ghadimi, E.; Teixeira, A.; Shames, I.; and Johansson, M. 2014. Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems. IEEE Transactions on Automatic Control, 60(3): 644–658.
Goldstein, T.; and Osher, S. 2009. The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences, 2(2): 323–343.
Hou, R.; Li, F.; and Zhang, G. 2022. Truncated residual based plug-and-play ADMM algorithm for MRI reconstruction. IEEE Transactions on Computational Imaging, 8: 96–108.
Jia, X.; Thorley, A.; Chen, W.; Qiu, H.; Shen, L.; Styles, I. B.; Chang, H. J.; Leonardis, A.; De Marvao, A.; O'Regan, D. P.; et al. 2021. Learning a model-driven variational network for deformable image registration. IEEE Transactions on Medical Imaging, 41(1): 199–212.
Li, Q.; Kailkhura, B.; Goldhahn, R.; Ray, P.; and Varshney, P. K. 2022. Robust decentralized learning using ADMM with unreliable agents. IEEE Transactions on Signal Processing, 70: 2743–2757.
Liu, Y.; Huang, K.; Yang, C.; and Wang, Z. 2023. Distributed network reconstruction based on binary compressed sensing via ADMM. IEEE Transactions on Network Science and Engineering.
Lu, W.; Duan, J.; Qiu, Z.; Pan, Z.; Liu, R. W.; and Bai, L. 2016. Implementation of high-order variational models made easy for image processing. Mathematical Methods in the Applied Sciences, 39(14): 4208–4233.
Mavromatis, C.; Foti, M.; and Vavalis, M. 2020. Auto-tuned weighted-penalty parameter ADMM for distributed optimal power flow. IEEE Transactions on Power Systems, 36(2): 970–978.
Mhanna, S.; Verbič, G.; and Chapman, A. C. 2018. Adaptive ADMM for distributed AC optimal power flow. IEEE Transactions on Power Systems, 34(3): 2025–2035.
Nesterov, Y. E. 1983. A method of solving a convex programming problem with convergence rate O(1/k²). In Doklady Akademii Nauk, volume 269, 543–547. Russian Academy of Sciences.
O'Donoghue, B.; and Candes, E. 2015. Adaptive restart for accelerated gradient schemes. Foundations of Computational Mathematics, 15: 715–732.
Stellato, B.; Banjac, G.; Goulart, P.; Bemporad, A.; and Boyd, S. 2020. OSQP: An operator splitting solver for quadratic programs. Mathematical Programming Computation, 12(4): 637–672.
Teixeira, A.; Ghadimi, E.; Shames, I.; Sandberg, H.; and Johansson, M. 2015. The ADMM algorithm for distributed quadratic problems: parameter selection and constraint preconditioning. IEEE Transactions on Signal Processing, 64(2): 290–305.
Thorley, A.; Jia, X.; Chang, H. J.; Liu, B.; Bunting, K.; Stoll, V.; de Marvao, A.; O'Regan, D. P.; Gkoutos, G.; Kotecha, D.; et al. 2021. Nesterov accelerated ADMM for fast diffeomorphic image registration. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021: 24th International Conference, Strasbourg, France, September 27 – October 1, 2021, Proceedings, Part IV 24, 150–160.
Wang, J.; Yu, F.; Chen, X.; and Zhao, L. 2019. ADMM for efficient deep learning with global convergence. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 111–119.
Wohlberg, B. 2017. ADMM penalty parameter selection by residual balancing. arXiv preprint arXiv:1704.06209.
Xu, Z.; Figueiredo, M.; and Goldstein, T. 2017. Adaptive ADMM with spectral penalty parameter selection. In Artificial Intelligence and Statistics, 718–727. PMLR.
Yazaki, Y.; Tanaka, Y.; and Chan, S. H. 2019. Interpolation and denoising of graph signals using plug-and-play ADMM. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5431–5435.
Zhang, R.; Yan, K.; Li, G.; Jiang, T.; Li, X.; and Chen, H. 2020. Privacy-preserving decentralized power system economic dispatch considering carbon capture power plants and carbon emission trading scheme via over-relaxed ADMM. International Journal of Electrical Power & Energy Systems, 121: 106094.
Zhou, S.; and Li, G. Y. 2023. Federated learning via inexact ADMM. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Disjoint Partial Enumeration without Blocking Clauses

Giuseppe Spallitta1, Roberto Sebastiani1, Armin Biere2
1DISI, University of Trento
2University of Freiburg
[email protected], [email protected], [email protected]

Abstract

A basic algorithm for enumerating disjoint propositional models (disjoint AllSAT) is based on adding blocking clauses incrementally, ruling out previously found models. On the one hand, blocking clauses have the potential to reduce the number of generated models exponentially, as they can handle partial models. On the other hand, the introduction of a large number of blocking clauses affects memory consumption and drastically slows down unit propagation. We propose a new approach that allows for enumerating disjoint partial models with no need for blocking clauses by integrating: Conflict-Driven Clause-Learning (CDCL), Chronological Backtracking (CB), and methods for shrinking models (Implicant Shrinking). Experiments clearly show the benefits of our novel approach.

Introduction

The All-Solution Satisfiability Problem (AllSAT) is an extension of SAT that requires finding all possible solutions of a propositional formula. AllSAT has been heavily applied in the field of hardware and software verification. For instance, AllSAT can be used to automatically generate test suites for programs (Khurshid et al. 2004) and for bounded and unbounded model checking (Jin, Han, and Somenzi 2005). Recently, AllSAT has found applications in artificial intelligence. For example, (Spallitta et al. 2022) exploits AllSMT (a variant of AllSAT dealing with first-order logic theories) for probabilistic inference in hybrid domains. AllSAT has also been applied to data mining to deal with the frequent itemset mining problem (Dlala et al. 2016). Lastly, model counting over first-order logic theories (#SMT) (Chistikov, Dimitrova, and Majumdar 2015) relies on AllSAT too.

Exploring the complete search space efficiently is a major concern in AllSAT. For a formula F with n variables, there are 2^n possible total assignments. Generating all of these assignments explicitly would require exponential space. To mitigate the issue, we can use partial models to obtain compact representations of a set of solutions. If a partial model does not explicitly assign a truth value to a variable, then that truth value does not impact the satisfiability of the assignment, so two total assignments are represented by the partial one. In problems with n variables, a partial assignment with m assigned variables covers 2^(n−m) total assignments in one shot.

The literature distinguishes between enumeration with repetitions (AllSAT) and enumeration without repetitions (disjoint AllSAT). Whereas covering the same model more than once may not be problematic for certain applications (e.g. predicate abstraction (Lahiri, Bryant, and Cook 2003)), it can result in an incorrect final solution in other contexts, such as Weighted Model Integration (Morettin, Passerini, and Sebastiani 2019) and #SMT (Chistikov, Dimitrova, and Majumdar 2015). In this paper, we address disjoint AllSAT.

SAT-based propositional enumeration algorithms can be grouped into two main categories: blocking solvers and non-blocking solvers. Blocking AllSAT solvers (McMillan 2002; Jin, Han, and Somenzi 2005; Yu et al.
2014) rely on Conflict-Driven Clause-Learning (CDCL) and non-chronological backtracking (NCB) to return the set of all satisfying assignments. They work by repeatedly adding blocking clauses to the formula after each model is found, ruling out the previous set of satisfying assignments, until all possible satisfying assignments have been found. These blocking clauses ensure that the solver does not return the same satisfying assignment multiple times and that the search space is scanned efficiently (Morgado and Marques-Silva 2005a). Although blocking solvers are straightforward to implement and can be adapted to retrieve partial assignments, they become inefficient when the input formula F has a high number of models, as an exponential number of blocking clauses might be added to make sure the entire search space is visited. As the number of blocking clauses increases, unit propagation becomes more difficult, resulting in degraded performance.

Non-blocking AllSAT solvers (Grumberg, Schuster, and Yadgar 2004; Li, Hsiao, and Sheng 2004) overcome this issue by not introducing blocking clauses and by implementing chronological backtracking (CB) (Nadel and Ryvchin 2018): after a conflict arises, they backtrack on the search tree by updating the most recently instantiated variable. Chronological backtracking guarantees not to cover the same model of a formula multiple times, without the typical CPU-time blow-up caused by blocking clauses. The major drawback of this class of AllSAT solvers is that they only generate total assignments. Moreover, regions of the search space with no solution cannot be escaped easily. (Möhle and Biere 2019b) proposes a new formal calculus of a disjunctive model counting algorithm combining the best features of chronological backtracking and CDCL, but without providing an implementation or experimental results. In (Sebastiani 2020; Möhle, Sebastiani, and Biere 2020, 2021) the authors discuss the calculus behind different approaches to determine whether a partial assignment satisfies a formula when chronological backtracking is implemented in the CDCL procedure. However, both works rely on dual reasoning, which can perform badly when a high number of variables is involved (the SAT and QBF oracle calls required by (Möhle, Sebastiani, and Biere 2020) may be expensive).

Contributions

In this work, we propose a novel AllSAT procedure to perform disjoint partial enumeration of propositional formulae by combining the best of the current AllSAT state-of-the-art literature: (i) CDCL, to escape search branches where no satisfiable assignments can be found; (ii) chronological backtracking, to ensure no blocking clauses are introduced; (iii) efficient implicant shrinking, to reduce the size of partial assignments, by exploiting the 2-literal watching scheme. We have implemented the aforementioned ideas in a tool that we refer to as TABULARALLSAT and compared its performance against other publicly available state-of-the-art AllSAT tools using a variety of benchmarks, including both crafted and SATLIB instances. Our experimental results show that TABULARALLSAT outperforms all other solvers on nearly all benchmarks, demonstrating the benefits of our approach.

Background

Notation

We assume F is a propositional formula defined on the set of Boolean variables V = {v1, ..., vn}, with cardinality |V|. A literal ℓ is a variable v or its negation ¬v. L(V) denotes the set of literals on V.
We implicitly remove double negations: if ℓ is ¬v, by ¬ℓ we mean v rather than ¬¬v. A clause is a disjunction of literals, ⋁_{ℓ∈c} ℓ; a cube is a conjunction of literals, ⋀_{ℓ∈c} ℓ. The function M : V → {⊤, ⊥} mapping variables in F to their truth values is known as an assignment. An assignment can be represented either by a set of literals {ℓ1, ..., ℓn} or by the cube conjoining all literals in the assignment, ℓ1 ∧ ... ∧ ℓn. We distinguish between total assignments η and partial assignments µ, depending on whether all variables are mapped to a truth value or not, respectively. A trail is an ordered sequence of literals I = ℓ1, ..., ℓn with no duplicate variables. The empty trail is represented by ε. Two trails can be conjoined one after the other, I = KL, assuming K and L have no variables in common. We use superscripts to mark literals in a trail I: ℓ^d indicates a literal assigned during the decision phase, whereas ℓ^* refers to a literal whose truth value is negated due to chronological backtracking after finding a model (we refer to this action as flipping). Trails can be seen as ordered total (resp. partial) assignments; for the sake of simplicity, we refer to them as total (resp. partial) trails.

Definition 1. The decision level function δ : V → N ∪ {∞} returns the decision level of variable V, where ∞ means unassigned. We extend this concept to literals (δ(ℓ) = δ(V(ℓ))) and clauses (δ(C) = max{δ(ℓ) | ℓ ∈ C}).

Definition 2. The decision literal function σ : dl → L(V) ∪ {ε} returns the decision literal of level dl. If we have not decided on a literal at level dl yet, it returns ε.

Definition 3. The reason function ρ(ℓ) returns the reason that forced literal ℓ to be assigned a truth value:
• DECISION, if the literal is assigned by the decision selection procedure;
• UNIT, if the literal is unit propagated at decision level 0, i.e. it is an initial literal;
• PROPAGATED(c), if the literal is unit propagated at a decision level higher than 0 due to clause c.

The 2-Watched Literal Scheme

The 2-watched literal scheme (Moskewicz et al. 2001) is an indexing technique that efficiently checks whether the currently assigned literals cause a conflict. For every clause, two literals are tracked. If at least one of the two literals is set to ⊤, then the clause is satisfied. If one of the two literals is set to ⊥, then we scan the clause searching for a new literal ℓ′ that can be paired with the remaining one, making sure ℓ′ is not mapped to ⊥. If we reach the end of the clause and both watches for that clause are set to false, then we know the current assignment falsifies the formula. The 2-watched literal scheme is implemented through watch lists.

Definition 4. The watch list function ω(ℓ) returns the set of clauses {c1, ..., cn} currently watched by literal ℓ.
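As a concrete illustration, the following toy sketch (ours, not TABULARALLSAT code; literals are signed integers and the assignment is a set of true literals) shows the watch-update step that fires when a watched literal becomes false:

```python
# Toy sketch of the 2-watched literal scheme: each clause keeps its two
# watches in positions 0 and 1; when a watch is falsified we look for a
# replacement, otherwise the clause is satisfied, unit, or conflicting.
from collections import defaultdict

class WatchedClauses:
    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        self.watch = defaultdict(list)            # literal -> clause indices
        for i, c in enumerate(self.clauses):
            for lit in c[:2]:
                self.watch[lit].append(i)

    def assign_false(self, lit, true_lits):
        """`lit` just became false; return (unit literals, conflict index)."""
        units, conflict = [], None
        for i in list(self.watch[lit]):
            c = self.clauses[i]
            other = c[0] if c[1] == lit else c[1]
            # Search the rest of the clause for an unfalsified replacement.
            repl = next((l for l in c[2:] if -l not in true_lits), None)
            if repl is not None:
                j, k = c.index(lit), c.index(repl)
                c[j], c[k] = repl, lit            # swap into watch position
                self.watch[lit].remove(i)
                self.watch[repl].append(i)
            elif other in true_lits:
                continue                          # clause already satisfied
            elif -other in true_lits:
                conflict = i                      # both watches false
            else:
                units.append(other)               # clause became unit
        return units, conflict

wc = WatchedClauses([[1, 2, 3]])                  # clause (v1 v v2 v v3)
print(wc.assign_false(1, {-1}))                   # ([], None): v3 now watched
print(wc.assign_false(3, {-1, -3}))               # ([2], None): clause is unit
```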
CDCL and Non-chronological Backtracking

Conflict-Driven Clause Learning (CDCL) is the most popular SAT-solving technique (Marques-Silva and Sakallah 1999). It is an extension of the older Davis-Putnam-Logemann-Loveland (DPLL) algorithm (Davis, Logemann, and Loveland 1962), improving the latter by dynamically learning new clauses during the search process and using them to drive backtracking. Every time the current trail falsifies a formula F, the SAT solver generates a conflict clause c starting from the falsified clause, by repeatedly resolving against the clauses which caused unit propagation of falsified literals. This clause is then learned by the solver and added to F. Depending on c, we backtrack to flip the value of one literal, potentially jumping more than one decision level (thus we talk about non-chronological backtracking, or NCB). CDCL and non-chronological backtracking allow for escaping regions of the search space where no satisfying assignments are admitted, which benefits both SAT and AllSAT solving. The idea behind conflict clauses has been extended in AllSAT to learn clauses from partial satisfying assignments (known in the literature as good learning or blocking clauses (Bayardo Jr and Pehoushek 2000; Morgado and Marques-Silva 2005b)) to ensure no total assignment is covered twice.

Chronological Backtracking

Chronological backtracking (CB) is the core of the original DPLL algorithm. Considered inefficient for SAT solving once NCB was presented in (Moskewicz et al. 2001), it was recently revamped for both SAT and AllSAT in (Nadel and Ryvchin 2018; Möhle and Biere 2019a). The intuition is that non-chronological backtracking after conflict analysis can lead to redundant work, due to some assignments being repeated later on during the search. Instead, independent of the generated conflict clause c, we chronologically backtrack and flip the last decision literal in the trail. Consequently, we explore the search space systematically and efficiently, ensuring no assignment is covered twice during the execution. Chronological backtracking combined with CDCL is effective in SAT solving when dealing with satisfiable instances. In AllSAT solving, it guarantees that blocking clauses are no longer needed to ensure termination.

Enumerating Disjoint Partial Models without Blocking Clauses

We propose a novel approach that allows for enumerating disjoint partial models with no need for blocking clauses, by integrating: Conflict-Driven Clause-Learning (CDCL), to escape search branches where no satisfiable assignments can be found; Chronological Backtracking (CB), to ensure no blocking clauses are introduced; and methods for shrinking models (Implicant Shrinking), to reduce the size of partial assignments by exploiting the 2-watched literal scheme. To this end, (Möhle and Biere 2019b) discusses a formal calculus combining CDCL and CB for propositional model counting, strongly related to the task we want to achieve. We take the calculus presented in that paper as the theoretical foundation on top of which we build our algorithms, and refer to that paper for more details.

Disjoint AllSAT by Integrating CDCL and CB

The work in (Möhle and Biere 2019b) exclusively describes the calculus and a formal proof of correctness for a model counting framework on top of CDCL and CB, with neither any algorithm nor any reference to modern state-of-the-art solvers. To this end, we start by presenting an AllSAT procedure for the search algorithm combining the two techniques, which is reported in this section. In particular, we highlight the major differences with respect to a classical AllSAT solver implemented on top of CDCL and NCB.

Algorithm 1 presents the main search loop of the AllSAT algorithm. The goal is to find a total trail T that satisfies F. At each decision level, it iteratively decides one of the unassigned variables in F and assigns it a truth value (lines 10-11); it then performs unit propagation (line 4) until either a conflict is reached (lines 5-6), no other variable can be unit propagated — leading to a satisfying total assignment (lines 7-8) — or DECIDE has to be called again (lines 10-11).
Notice that the main loop is identical to that of an AllSAT solver based on non-chronological CDCL; the only differences are embedded in the procedures handling the conflict and the partial assignments. (From now on, we underline the lines that differ from the baseline CDCL AllSAT solver.)

Algorithm 1: CHRONO-CDCL(F, V)
1: T ← ε
2: dl ← 0
3: while true do
4:    T, c ← UNITPROPAGATION()
5:    if c ≠ ε then
6:       ANALYZECONFLICT(T, c, dl)
7:    else if |T| = |V| then
8:       ANALYZEASSIGNMENT(T, dl)
9:    else
10:      DECIDE(T)
11:      dl ← dl + 1
12:   end if
13: end while

Algorithm 2: ANALYZECONFLICT(T, c, dl)
1: if δ(c) < dl then
2:    T ← BACKTRACK(δ(c))
3: end if
4: if dl = 0 then
5:    terminate with all models found
6: end if
7: ⟨uip, c′⟩ ← LASTUIP-ANALYSIS()
8: T ← BACKTRACK(dl − 1)
9: T.push(¬uip)
10: ρ(¬uip) ← PROPAGATED(c′)

Suppose UNITPROPAGATION finds a conflict, returning a clause c in F which is falsified by the current trail T, so that we invoke ANALYZECONFLICT. Algorithm 2 shows the procedure to either generate the conflict clause or stop the search for new assignments once all models have been found. We first compute the maximum assignment level of all literals in the conflicting clause c and, if it is strictly smaller than dl, backtrack to that decision level (lines 1-2). This additional step, not contemplated by AllSAT solvers that use NCB, is necessary to support out-of-order assignments, the core insight of chronological backtracking when integrated into CDCL as described in (Nadel and Ryvchin 2018). Apart from this first step, Algorithm 2 behaves similarly to a standard conflict analysis algorithm. If the solver reaches decision level 0 at this point, it means there are no more variables to flip and the whole search space has been visited, so we can terminate the algorithm (lines 4-5). Otherwise, we perform conflict analysis up to the last Unique Implication Point (last UIP, i.e. the decision variable at the current decision level), retrieving the conflict clause c′ (line 7). Finally, we perform backtracking (notice how we force chronological backtracking independently of the decision level of the conflict clause), push the flipped UIP onto the trail, and set c′ as its assignment reason for the flipping (lines 8-10).

Suppose instead that every variable is assigned a truth value without generating conflicts (Algorithm 1, line 7); then the current total trail T satisfies F, and we invoke ANALYZEASSIGNMENT. Algorithm 3 shows the steps to possibly shrink the assignment, store it, and continue the search.

Algorithm 3: ANALYZEASSIGNMENT(T, dl)
1: dl′ ← IMPLICANT-SHRINKING(T)
2: if dl′ < dl then
3:    T ← BACKTRACK(dl′)
4: end if
5: store model T
6: if dl′ = 0 then
7:    terminate with all models found
8: else
9:    ℓflip ← ¬(σ(dl′))
10:   T ← BACKTRACK(dl′ − 1)
11:   T.push(ℓflip)
12:   ρ(ℓflip) ← BACKTRUE
13: end if

First, IMPLICANT-SHRINKING checks whether, for some decision level dl′ < dl, we can backtrack up to dl′ and obtain a partial trail still satisfying the formula (Algorithm 3, lines 1-3). (We discuss the details of chronological implicant shrinking in the next subsection.) We can then produce the current assignment from the trail T and store it (line 5). Then we check if all variables in T are assigned at decision level 0. If this is the case, we have found the last assignment needed to cover F, and we can end the search (lines 6-7).
Otherwise, we perform chronological backtracking, flipping the truth value of the currently highest decision variable and searching for a new total trail T satisfying F (lines 9-12).

We remark that in (Möhle and Biere 2019b) it is implicitly assumed that one can determine whether a partial trail satisfies the formula right after it is generated, whereas modern SAT solvers cannot check this fact efficiently and detect satisfaction only when trails are total. To cope with this issue, in our approach the partial trail satisfying the formula is computed a posteriori from the total one by implicant shrinking. Moreover, the mutual exclusivity among different assignments is guaranteed, since the shrinking of the assignments is performed so that the generated partial assignments fall under the conditions of Section 3 in (Möhle and Biere 2019b).

Notice that the calculus discussed in (Möhle and Biere 2019b) assumes the last UIP as the termination criterion for conflict analysis. We provide the following counterexample to show that the first UIP does not guarantee mutual exclusivity between the returned assignments.

Example 1. Let F be the propositional formula

F = c1 ∧ c2 ∧ c3,  with c1 = (x1 ∨ ¬x2), c2 = (x1 ∨ ¬x3), c3 = (¬x1 ∨ ¬x2).

For the sake of simplicity, we assume CHRONO-CDCL returns total truth assignments. If the initial variable ordering is x3, x2, x1 (all set to false), then the first two total trails and the third partial trail generated by Algorithm 1 are:

T1 = ¬x3^d ¬x2^d ¬x1^d;  T2 = ¬x3^d ¬x2^d x1^*;  T3 = ¬x3^d x2^*.

Notice how T3 leads to a falsifying assignment: x2 forces x1 due to c1 and ¬x1 due to c3 at the same time. A conflict arises, and suppose we adopt the first-UIP algorithm to stop conflict analysis. We identify x2 as the first unique implication point (UIP) and construct the conflict clause (¬x2). Since this is a unit clause, it forces ¬x2 as an initial literal. We can now set x3 and x1 to ⊥ and obtain a satisfying assignment. The resulting total trail T = ¬x3 ¬x2 ¬x1 is thus covered twice during the search process. ⋄

We also emphasize that the incorporation of restarts in the search algorithm (or of any method that implicitly exploits restarts, such as rephasing) is not feasible, as reported in (Möhle and Biere 2019b).

Chronological Implicant Shrinking

Effectively shrinking a total trail T when chronological backtracking is enabled is not trivial. In principle, we could add a flag to each clause c stating whether c is currently satisfied by the partial assignment, and check the status of all flags while iteratively adding literals to the trail. Despite being easy to integrate into an AllSAT solver and avoiding assigning all variables a truth value, this approach is infeasible in practice: every time a literal ℓ is added to or removed from the trail, we would have to check and possibly update the flags of all clauses containing it. In the long term, this would negatively affect performance, particularly when the formula has a large number of models. Also, relying on implicant shrinking algorithms from the literature for NCB-based AllSAT solvers does not work for chronological backtracking.
Prime-implicant shrinking algorithms do not guarantee mutual exclusivity between different assignments, so they are not useful in the context of disjoint AllSAT. Other assignment-minimization algorithms, as in (Toda and Soh 2016), work under the assumption that a blocking clause is introduced. For instance, suppose we perform disjoint AllSAT on the formula F = x1 ∨ x2 and the ordered trail is T1 = x1^d x2^d. A general assignment-minimization algorithm could retrieve the partial assignment µ = x2 satisfying F, but obtaining it by using chronological backtracking is not possible (it would require us to remove x1 from the trail despite it being assigned at a lower decision level than x2) unless blocking clauses are introduced.

In this context, we need an implicant shrinking algorithm such that: (i) it is compatible with chronological backtracking, i.e. we remove variables assigned at level dl or higher as if they had never been assigned; (ii) it tries to cut the highest number of literals while still ensuring mutual exclusivity. Considering all the aforementioned issues, we propose a chronological implicant shrinking algorithm that uses state-of-the-art SAT solver data structures (thus without requiring dual encoding), described in Algorithm 4.

Algorithm 4: IMPLICANT-SHRINKING(T)
1: b ← 0
2: T′ ← T
3: while T′ ≠ ε do
4:    ℓ ← T′.pop()
5:    if ρ(ℓ) ≠ DECISION then
6:       b ← max(b, δ(ℓ))
7:    else if δ(ℓ) > b then
8:       b ← CHECK-LITERAL(ℓ, b, T′)
9:    else if δ(ℓ) = 0 or (δ(ℓ) = b and ρ(ℓ) = DECISION) then
10:      break
11:   end if
12: end while
13: return b

The idea is to pick literals from the current trail, starting from the most recently assigned ones (lines 3-4), and determine the lowest decision level b to backtrack to in order to shrink the implicant. First, we check whether ℓ was not assigned by DECIDE (line 5). If this is the case, we set b to be at least as high as the decision level of ℓ, δ(ℓ), ensuring that ℓ will not be dropped by implicant shrinking (line 6), since it has a role in performing disjoint AllSAT. If this is not the case, we compare its decision level δ(ℓ) to b (line 7). If δ(ℓ) > b, then we actively check whether ℓ is necessary for T to satisfy F (line 8) and set b accordingly. Two versions of CHECK-LITERAL will be presented. If ℓ is either an initial literal (i.e. assigned at decision level 0) or both ρ(ℓ) = DECISION and δ(ℓ) = b hold, then all literals in the trail assigned before ℓ have a decision level lower than or equal to b. This means that we can exit the loop early (lines 9-10), since scanning the trail any further would be unnecessary. Finally, if none of the above conditions holds, b is already greater than δ(ℓ), and we can move on to the next literal in the trail.

Checking Literals Using 2-Watched Lists. In (Déharbe et al. 2013) the authors propose an algorithm to shorten total assignments and obtain a prime implicant by using watch lists. We adopted the ideas from this work and adapted them to CB-based AllSAT solving, obtaining the procedure presented in Algorithm 5.

Algorithm 5: CHECK-LITERAL(ℓ, b, T′)
1: for c ∈ ω(ℓ) do
2:    if ∃ℓ′ ∈ c s.t. ℓ′ ≠ ℓ and ℓ′ ∈ T′ then
3:       watch c by ℓ′ instead of ℓ
4:    else
5:       b ← max(b, δ(ℓ))
6:    end if
7: end for
8: return b

For each literal ℓ, we check its watch list ω(ℓ) (line 1). For each clause c in ω(ℓ), we are interested in finding a literal ℓ′ such that: (i) ℓ′ is not ℓ itself, and (ii) ℓ′ satisfies c and is in the current trail T′, so that it has not already been checked by IMPLICANT-SHRINKING (line 2). If such a literal exists, we update the watch lists so that ℓ′ now watches c instead of ℓ, and we move on to the next clause (line 3). If no replacement for ℓ is available, then ℓ is the only remaining literal guaranteeing that c is satisfied, and we cannot drop it.
In that case, we update b accordingly, ensuring ℓ will not be minimized, by setting b to a value higher than or equal to δ(ℓ) (line 5).

Example 2. Let F be the propositional formula

F = c1 = (x1 ∨ x2 ∨ x3).

F is satisfied by 7 different total assignments:

{x1, x2, x3}, {¬x1, x2, x3}, {x1, ¬x2, x3}, {¬x1, ¬x2, x3}, {x1, x2, ¬x3}, {¬x1, x2, ¬x3}, {x1, ¬x2, ¬x3}.

When initialized, our solver has the following watch lists:

ω(x1) = {c1};  ω(x2) = {c1};  ω(x3) = ∅.

Algorithm 1 can produce the total trail T1 = x3^d x2^d x1^d. CHECK-LITERAL starts by trying to minimize away x1. The watch list associated with x1 contains c1, hence we need to substitute x1 with a new literal in clause c1. A suitable substitute exists, namely x3. We update the watch lists according to Algorithm 5 and obtain:

ω(x1) = ∅;  ω(x2) = {c1};  ω(x3) = {c1}.

Next, CHECK-LITERAL tries to eliminate x2 from the current trail: x1 was already cut off, x2 and x3 are the current watches for c1, and x3 is assigned ⊤. Since no other variables are available in c1, we must force x3 to be part of the partial assignment, and we set b to 1 to prevent its shrinking. This yields the partial trail T1 = x3. Chronological backtracking now restores the watched literal indexing to its value before implicant shrinking (in this case, the initial state of the watch lists) and flips x3 into ¬x3. DECIDE then assigns ⊤ to both x2 and x1. The new trail T2 = ¬x3^* x2^d x1^d satisfies F. Algorithm 5 drops x1, since c1 is watched by x2 and thus we would still satisfy F without it. x2, on the other hand, is required in T2: x3 is now assigned ⊥ and thus cannot substitute x2. We obtain the second partial trail T2 = ¬x3 x2^d. Last, we chronologically backtrack and set x2 to ⊥. With x3 and x2 both ⊥, UNITPROPAGATION forces x1 to be ⊤ at level 0. We obtain the last trail satisfying F, T3 = ¬x3 ¬x2 x1. The final solution is then:

{x3}, {x2, ¬x3}, {x1, ¬x2, ¬x3}.  ⋄

A Faster but Conservative Literal Check. In Algorithm 5, the cost of scanning clauses using the 2-watched literal scheme during implicant shrinking could become a bottleneck if many models cover the formula. Bearing this in mind, we propose a lighter variant of Algorithm 5 that does not require the watch lists to be updated. Suppose that the current trail T satisfies F, which implies that for each clause c in F at least one of the two watched literals of c, namely ℓ1 and ℓ2, is in T. If CHECK-LITERAL tries to remove ℓ1 from the trail, instead of checking whether there exists another literal in c that satisfies the clause in its place, as in line 2 of Algorithm 5, we simply check the truth value of ℓ2, as if the clause c were projected onto the binary clause ℓ1 ∨ ℓ2. If ℓ2 is not in T, then we force the AllSAT solver to maintain ℓ1, setting the backtracking level to at least δ(ℓ1); otherwise, we move on to the next clause watched by ℓ1.

It is worth noting that this variant of implicant shrinking is conservative when it comes to dropping literals from the trail: we do not consider the possibility that another literal ℓ′ of c is in the current trail T and has a lower decision level than the two literals watching c. In such a case, we could set b to δ(ℓ′), resulting in a more compact partial assignment. Nonetheless, not scanning the clause can significantly improve performance, making this approach a viable alternative when covering many solutions.
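The lighter check is essentially a constant-time lookup per watched clause; a minimal sketch (ours, with hypothetical container names — the actual TABULARALLSAT data structures differ) could look as follows:

```python
# Sketch of the lighter CHECK-LITERAL variant: each clause c watched by
# `lit` is treated as the binary projection (lit v other_watch[c]); if the
# other watch is not on the remaining trail, `lit` must be kept.
def light_check_literal(lit, b, trail_set, watch, other_watch, level):
    for c in watch[lit]:
        if other_watch[c] not in trail_set:
            b = max(b, level[lit])   # keep lit: raise the backtrack level
            break                    # no need to inspect further clauses
    return b

# Toy usage: clause 0 = (x1 v x2), watched by literals 1 and 2; the trail
# currently retains x2, so x1 can be dropped and b stays 0.
b = light_check_literal(1, 0, trail_set={2},
                        watch={1: [0]}, other_watch={0: 2},
                        level={1: 1, 2: 1})
print(b)   # 0
```

Note that, unlike Algorithm 5, no watch list is modified, so nothing has to be restored after shrinking.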
Implicit Solution Reasons

Incorporating chronological backtracking into the AllSAT algorithm makes blocking clauses unnecessary. Upon discovering a model, we backtrack chronologically to the most recently assigned decision variable ℓ and flip its truth value, as if there were a reason clause c — containing the negated decision literals of T — that forces the flip. These reason clauses are typically irrelevant to SAT solving and are not stored. When CDCL is combined with chronological backtracking, however, these clauses are required for conflict analysis.

Example 3. Let F be the same formula as in Example 1. We assume the first trail generated by Algorithm 1 is T1 = ¬x3^d ¬x2^d ¬x1^d. Algorithm 4 can drop x1, since ¬x2 suffices to satisfy both c1 and c3. Consequently, we obtain the assignment µ1 = ¬x3 ∧ ¬x2, and then flip ¬x2 to x2. The new trail T2 = ¬x3^d x2^* forces x1 to be true due to c1; but then c3 is no longer satisfiable, causing the generation of a conflict. The last UIP is x3, so the reason clause c′ that forced x2 to be flipped must be available to the solver in order to compute the conflict clause. ⋄

To cope with this fact, a straightforward approach would be to store these clauses in memory without updating the literal watching indexing; this would allow c to be used exclusively by the CDCL procedure without affecting variable propagation. If F admits a large number of models, however, storing these clauses would negatively affect performance: either we would have to frequently invoke flushing procedures to remove inactive backtrack reason clauses, or we would risk running out of memory while storing them. To overcome the issue, we introduce the notion of virtual backtrack reason clauses. When a literal ℓ is flipped after a satisfying assignment is found, its reason clause contains the negation of the decision literals assigned at a level lower than δ(ℓ), together with ℓ itself. Consequently, we introduce an additional value, BACKTRUE, among the possible answers of the reason function ρ. This value is used to tag literals flipped after a (possibly partial) assignment is found. When the conflict analysis algorithm encounters a literal ℓ having ρ(ℓ) = BACKTRUE, the resolvent can easily be reconstructed by collecting all the decision literals with a level lower than ℓ and negating them. This way we do not need to explicitly store these clauses for conflict analysis, saving the time and memory otherwise spent on clause flushing.
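The on-demand reconstruction takes only a few lines; the sketch below (ours, with hypothetical function and parameter names) rebuilds the virtual reason clause of a BACKTRUE literal from the decision-literal function σ:

```python
# Sketch of a virtual backtrack reason clause: for a literal flipped after
# a model (rho(lit) = BACKTRUE), the reason is the negation of the decision
# literals of all lower levels, plus the literal itself; nothing is stored.
def virtual_reason(lit, lit_level, decision_literal):
    """decision_literal[dl] is sigma(dl), encoded as a signed integer."""
    clause = [-decision_literal[dl] for dl in range(1, lit_level)]
    return clause + [lit]

# Example 3: x2 (literal 2) is flipped at level 2; the level-1 decision
# was -x3 (literal -3). The virtual reason is (x3 v x2).
print(virtual_reason(2, 2, {1: -3}))   # [3, 2]
```

Under the trail of Example 3, this clause is unit under ¬x3 and indeed forces x2, exactly as a stored backtrack reason clause would.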
Decision Variable Ordering

As shown in (Möhle and Biere 2019b), different orderings during DECIDE can lead to a different number of partial trails being retrieved when chronological backtracking is enabled. After an empirical evaluation, we set DECIDE to compute the priority score of a variable according to the following ordered set of rules. First, we rely on the Variable State Aware Decaying Sum (VSADS) heuristic (Huang and Darwiche 2005) and set the priority of a variable according to two weighted factors: (i) the count of variable occurrences in the formula, as in the Dynamic Largest Combined Sum (DLCS) heuristic; and (ii) an "activity score," which increases when the variable appears in conflict clauses and decreases otherwise, as in the Variable State Independent Decaying Sum (VSIDS) heuristic. If two variables have the same score, we give higher priority to variables whose watch list is not empty (this is particularly helpful when the lighter variant of implicant shrinking is used). If there is still a tie, we rely on the lexicographic order of the variable names.

Experimental Evaluation

We implemented all the ideas discussed in the paper in a tool we refer to as TABULARALLSAT. The code of the algorithm and all benchmarks are available at https://zenodo.org/records/10397723. It is built on top of a minimal SAT solver: besides chronological backtracking, it has no preprocessing, restarts and rephasing are disabled, and the watching data structures are similar to MiniSAT's. Experiments were performed on an Intel Xeon Gold 6238R @ 2.20GHz 28-core machine with 128 GB of RAM, running Ubuntu Linux 20.04. The timeout was set to 1200 seconds.

Benchmarks

The benchmarks used in related work on enumeration (Toda and Soh 2016) are typically from SATLIB (Hoos and Stützle 2000) and were designed for SAT solving. However, most of these benchmarks are not suited for AllSAT solving: some are UNSAT or admit only a couple of solutions, whereas others are encoded in a way that no total assignment can be shrunk into a partial one. For the sake of significance for AllSAT, we considered benchmarks with two characteristics: (i) each problem admits a high number of total assignments; (ii) the problem structure allows for some minimization of assignments, to test the efficiency of the chronological implicant shrinking algorithms.

Binary clauses is a crafted dataset containing problems with n variables defined by binary clauses of the form

(x1 ∨ xn) ∧ (x2 ∨ xn−1) ∧ ... ∧ (x_{n/2−1} ∨ x_{n/2}).

Finding all solutions poses a significant challenge: retrieving all possible assignments requires returning 3^(n/2) assignments within a feasible timeframe (a generator sketch is shown after this list of benchmarks).

Rnd3sat contains 410 random 3-SAT problems with n variables, n ∈ [10, 50]. In SAT instances, the ratio of clauses to variables needed to achieve maximum hardness is about 4.26, but in AllSAT it should be set to approximately 1.5 (Bayardo Jr and Schrag 1997). For this reason, we chose not to use the instances uploaded to SATLIB and created new random 3-SAT problems accordingly. We also tested our algorithms on SATLIB benchmarks, specifically CBS and BMS (Singer, Gent, and Smaill 2000).
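For reproducibility, a DIMACS generator for the binary clauses family takes only a few lines; the sketch below (ours, not from the TABULARALLSAT distribution) pairs xi with xn+1−i, which is our reading of the clause pattern above:

```python
# Sketch of a generator for the crafted "binary clauses" benchmarks:
# n variables, n/2 binary clauses, hence 3^(n/2) disjoint total models
# (each binary clause admits 3 satisfying pairs).
def binary_clauses_dimacs(n):
    assert n % 2 == 0, "n must be even"
    lines = [f"p cnf {n} {n // 2}"]
    for i in range(1, n // 2 + 1):
        lines.append(f"{i} {n + 1 - i} 0")   # clause (x_i v x_{n+1-i})
    return "\n".join(lines)

print(binary_clauses_dimacs(6))
# p cnf 6 3
# 1 6 0
# 2 5 0
# 3 4 0
```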
                      TABULARALLSAT   BDD    NBC   MathSAT    BC   BC PARTIAL
binary clauses (50)         30          28     21      16      13       18
rnd3sat (410)              410         409    396     229     194      210
CBS (1000)                1000        1000   1000     997     865      636
BMS (500)                  499         498    498     473     368      353
Total (1960)              1939        1935   1915    1715    1440     1217

Table 1: Number of instances solved by each solver within the timeout (1200 seconds).

[Figure 1 (scatter plots omitted): panel (a) CPU time in seconds, panel (b) number of partial models, each comparing the WatchList and Light variants on the BINARY, RND3SAT, BMS, and CBS instances on log-scaled axes.]
Figure 1: Scatter plots comparing CPU time and the total number of partial models for the two implicant shrinking algorithms.

Comparing Implicant Shrinking Techniques

In Figure 1 we compare the two implicant shrinking algorithms with respect to CPU time and the number of disjoint partial assignments. We checked the correctness of the enumeration by testing whether the number of total assignments covered by the set of partial solutions equals the model count reported by the #SAT solver Ganak (Sharma et al. 2019); both algorithms were always correct. Results suggest that, unsurprisingly, dynamically updating the watches is more effective at shrinking total assignments. When considering time efficiency, however, the faster but conservative simplification algorithm outperforms the other variant: the computational cost of updating each watch list ω(ℓ) slows down the computation significantly as the number of total models satisfying F grows. All the experiments in the following subsections therefore assume TABULARALLSAT relies on the lighter variant.

Baseline Solvers

We considered BC, NBC, and BDD (Toda and Soh 2016), respectively a blocking, a non-blocking, and a BDD-based disjoint AllSAT solver. BC also provides the option to obtain partial assignments (from now on, BC PARTIAL). Lastly, we considered MATHSAT5 (Cimatti et al. 2013), since it provides an interface to compute partial enumerations of propositional problems by exploiting blocking clauses. Some other AllSAT solvers, such as BASOLVER (Zhang, Pu, and Sun 2020) and ALLSATCC (Liang et al. 2022), are currently not publicly available, as also reported in another paper (Fried, Nadel, and Shalmon 2023).

Results

Table 1 reports the number of instances of each benchmark set solved by each solver before reaching the timeout, where "solved" means that the solver completely enumerated a set of disjoint partial models covering all total models. We see that TABULARALLSAT solves the highest number of instances for each benchmark, even though BDD and NBC are close. We also present scatter plots comparing TABULARALLSAT's time performance against each of the other available AllSAT solvers, using different marks and colors to distinguish instances from different benchmarks. The CPU times reported in Figure 2 consider only the time taken to reach each assignment, without storing them. TABULARALLSAT outperforms all the other solvers on every benchmark except RND3SAT, where BDD outperforms our approach. The latter instances are not structurally complex, due to their low clause-to-variable ratio, and can be compiled into BDDs with minimal inefficiency, which justifies this behavior: the higher the number of clauses, the more challenging the compilation of the propositional formula into a BDD, as we can see with BMS and CBS.

[Figure 2 (scatter plots omitted): panels (a) BC, (b) BC PARTIAL, (c) NBC, (d) BDD, (e) MathSAT, each plotting our approach against the competitor on the BINARY, RND3SAT, BMS, and CBS instances.]
Figure 2: Scatter plots comparing TABULARALLSAT CPU times against the other AllSAT solvers. The x and y axes are both log-scaled.

Conclusion

We presented an AllSAT procedure that combines CDCL, CB, and chronological implicant shrinking to perform partial disjoint enumeration. The experiments confirm the benefits of combining them, avoiding both the performance degradation due to blocking clauses and the bottlenecks generated by the solver getting stuck in unsatisfiable search sub-trees. This work could be extended in several directions. First, we plan to compare our algorithm against other enumeration
algorithms based on knowledge compilation (for instance, D4 (Lagniez and Marquis 2017)), even though this might involve a potentially costly compilation process before enumeration, so that such an approach is not any-time. Then, to further improve the performance of TABULARALLSAT, we plan to explore novel decision heuristics suitable for chronological backtracking. Finally, we plan to extend our techniques to also handle projected enumeration and to investigate the integration of chronological backtracking with component caching.

Acknowledgements

We acknowledge the support of the MUR PNRR project FAIR – Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU. The work was partially supported by the project "AI@TN" funded by the Autonomous Province of Trento. This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation program under GA No 952215.

References

Bayardo Jr, R. J.; and Pehoushek, J. D. 2000. Counting models using connected components. In AAAI/IAAI, 157–162.
Bayardo Jr, R. J.; and Schrag, R. 1997. Using CSP look-back techniques to solve real-world SAT instances. In AAAI/IAAI, 203–208. Citeseer.
Chistikov, D.; Dimitrova, R.; and Majumdar, R. 2015. Approximate counting in SMT and value estimation for probabilistic programs. In Tools and Algorithms for the Construction and Analysis of Systems: 21st International Conference, TACAS 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015, Proceedings 21, 320–334. Springer.
Cimatti, A.; Griggio, A.; Schaafsma, B. J.; and Sebastiani, R. 2013. The MathSAT5 SMT solver. In Tools and Algorithms for the Construction and Analysis of Systems: 19th International Conference, TACAS 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013. Proceedings 19, 93–107. Springer.
Davis, M.; Logemann, G.; and Loveland, D. 1962. A machine program for theorem-proving. Communications of the ACM, 5(7): 394–397.
Déharbe, D.; Fontaine, P.; Le Berre, D.; and Mazure, B. 2013. Computing prime implicants. In 2013 Formal Methods in Computer-Aided Design, 46–52. IEEE.
Dlala, I. O.; Jabbour, S.; Sais, L.; and Yaghlane, B. B. 2016. A comparative study of SAT-based itemsets mining. In Research and Development in Intelligent Systems XXXIII: Incorporating Applications and Innovations in Intelligent Systems XXIV 33, 37–52. Springer.
Fried, D.; Nadel, A.; and Shalmon, Y. 2023. AllSAT for Combinational Circuits. In 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023). Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Grumberg, O.; Schuster, A.; and Yadgar, A. 2004. Memory efficient all-solutions SAT solver and its application for reachability analysis. In Formal Methods in Computer-Aided Design: 5th International Conference, FMCAD 2004, Austin, Texas, USA, November 15-17, 2004. Proceedings 5, 275–289. Springer.
Hoos, H. H.; and Stützle, T. 2000. SATLIB: An online resource for research on SAT. Sat, 2000: 283–292.
Huang, J.; and Darwiche, A. 2005. Using DPLL for efficient OBDD construction. In Theory and Applications of Satisfiability Testing: 7th International Conference, SAT 2004, Vancouver, BC, Canada, May 10-13, 2004, Revised Selected Papers 7, 157–172. Springer.
Jin, H.; Han, H.; and Somenzi, F. 2005.
Efficient conflict analysis for finding all satisfying assignments of a Boolean circuit. In Tools and Algorithms for the Construction and Analysis of Systems: 11th International Conference, TACAS 2005, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2005, Edinburgh, UK, April 4-8, 2005. Proceedings 11, 287–300. Springer. Khurshid, S.; Marinov, D.; Shlyakhter, I.; and Jackson, D. 2004. A case for efficient solution enumeration. In Theory and Applications of Satisfiability Testing: 6th International Conference, SAT 2003, Santa Margherita Ligure, Italy, May 5-8, 2003, Selected Revised Papers 6, 272–286. Springer. Lagniez, J.-M.; and Marquis, P. 2017. An Improved Decision-DNNF Compiler. In IJCAI, volume 17, 667–673. Lahiri, S. K.; Bryant, R. E.; and Cook, B. 2003. A symbolic approach to predicate abstraction. In Computer Aided Verification: 15th International Conference, CAV 2003, Boulder, CO, USA, July 8-12, 2003. Proceedings 15, 141–153. Springer. Li, B.; Hsiao, M. S.; and Sheng, S. 2004. A novel SAT allsolutions solver for efficient preimage computation. In Proceedings Design, Automation and Test in Europe Conference and Exhibition, volume 1, 272–277. IEEE. Liang, J.; Ma, F.; Zhou, J.; and Yin, M. 2022. AllSATCC: Boosting AllSAT Solving with Efficient Component Analysis. In IJCAI, 1866–1872. Marques-Silva, J. P.; and Sakallah, K. A. 1999. GRASP: A search algorithm for propositional satisfiability. IEEE Transactions on Computers, 48(5): 506–521. McMillan, K. L. 2002. Applying SAT methods in unbounded symbolic model checking. In Computer Aided Verification: 14th International Conference, CAV 2002 Copenhagen, Denmark, July 27–31, 2002 Proceedings 14, 250– 264. Springer. M¨ohle, S.; and Biere, A. 2019a. Backing backtracking. In Theory and Applications of Satisfiability Testing–SAT 2019: 22nd International Conference, SAT 2019, Lisbon, Portugal, July 9–12, 2019, Proceedings 22, 250–266. Springer. M¨ohle, S.; and Biere, A. 2019b. Combining Conflict-Driven Clause Learning and Chronological Backtracking for Propositional Model Counting. In GCAI, 113–126. M¨ohle, S.; Sebastiani, R.; and Biere, A. 2020. Four flavors of entailment. In International Conference on Theory and Applications of Satisfiability Testing, 62–71. Springer. M¨ohle, S.; Sebastiani, R.; and Biere, A. 2021. On Enumerating Short Projected Models. arXiv preprint arXiv:2110.12924. Morettin, P.; Passerini, A.; and Sebastiani, R. 2019. Advanced SMT techniques for weighted model integration. Artificial Intelligence, 275: 1–27. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8134 Morgado, A.; and Marques-Silva, J. 2005a. Good learning and implicit model enumeration. In 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’05), 6 pp.–136. Morgado, A.; and Marques-Silva, J. 2005b. Good learning and implicit model enumeration. In 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’05), 6–pp. IEEE. Moskewicz, M. W.; Madigan, C. F.; Zhao, Y.; Zhang, L.; and Malik, S. 2001. Chaff: Engineering an efficient SAT solver. In Proceedings of the 38th annual Design Automation Conference, 530–535. Nadel, A.; and Ryvchin, V. 2018. Chronological backtracking. In Theory and Applications of Satisfiability Testing–SAT 2018: 21st International Conference, SAT 2018, Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July 9–12, 2018, Proceedings 21, 111–121. Springer. Sebastiani, R. 2020. 
Are You Satisfied by This Partial Assignment? arXiv preprint arXiv:2003.04225. Sharma, S.; Roy, S.; Soos, M.; and Meel, K. S. 2019. GANAK: A Scalable Probabilistic Exact Model Counter. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI). Singer, J.; Gent, I. P.; and Smaill, A. 2000. Backbone fragility and the local search cost peak. Journal of Artificial Intelligence Research, 12: 235–270. Spallitta, G.; Masina, G.; Morettin, P.; Passerini, A.; and Sebastiani, R. 2022. SMT-based weighted model integration with structure awareness. In Uncertainty in Artificial Intelligence, 1876–1885. PMLR. Toda, T.; and Soh, T. 2016. Implementing efficient all solutions SAT solvers. Journal of Experimental Algorithmics (JEA), 21: 1–44. Yu, Y.; Subramanyan, P.; Tsiskaridze, N.; and Malik, S. 2014. All-SAT using minimal blocking clauses. In 2014 27th International Conference on VLSI Design and 2014 13th International Conference on Embedded Systems, 86– 91. IEEE. Zhang, Y.; Pu, G.; and Sun, J. 2020. Accelerating All-SAT computation with short blocking clauses. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, 6–17. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8135
SAT-Based Algorithms for Regular Graph Pattern Matching Miguel Terra-Neves1, José Amaral1, Alexandre Lemos1, Rui Quintino1, Pedro Resende2, Antonio Alegria1 1OutSystems 2Zharta [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Graph matching is a fundamental problem in pattern recognition, with many applications such as software analysis and computational biology. One well-known type of graph matching problem is graph isomorphism, which consists of deciding if two graphs are identical. Despite its usefulness, the properties that one may check using graph isomorphism are rather limited, since it only allows strict equality checks between two graphs. For example, it does not allow one to check complex structural properties such as if the target graph is an arbitrary length sequence followed by an arbitrary size loop. We propose a generalization of graph isomorphism that allows one to check such properties through a declarative specification. This specification is given in the form of a Regular Graph Pattern (ReGaP), a special type of graph, inspired by regular expressions, that may contain wildcard nodes that represent arbitrary structures such as variable-sized sequences or subgraphs. We propose a SAT-based algorithm for checking if a target graph matches a given ReGaP. We also propose a preprocessing technique for improving the performance of the algorithm and evaluate it through an extensive experimental evaluation on benchmarks from the CodeSearchNet dataset. 1 Introduction Pattern recognition is an important research area (Foggia, Percannella, and Vento 2014) due to its numerous applications ranging from detecting bad code patterns (Piotrowski and Madeyski 2020) and software analysis (Park et al. 2010; Singh et al. 2021; Zou et al. 2020) in general, to computational biology (Carletti, Foggia, and Vento 2013; Zaslavskiy, Bach, and Vert 2009). One fundamental problem in pattern recognition is graph matching (Livi and Rizzi 2013). Two common approaches are: (i) graph isomorphism (Cordella et al. 1999; Dahm et al. 2012; Ullmann 1976, 2010; Larrosa and Valiente 2002; Zampelli, Deville, and Solnon 2010) and (ii) approximated graph matching (Bunke 1997; Raymond and Willett 2002; Sanfeliu and Fu 1983). The first consists of deciding if two graphs are identical, which can be too strict for some applications (Auwatanamongkol 2007; Conte et al. 2004). Approximated graph matching algorithms are less strict and normally employ some sort of distance metric to evaluate the graphs. Despite their usefulness, these approaches suffer from limitations. Neither of these allows one to check if a graph satisfies some specific complex structural properties, such as, for example, if some target graph contains two nested cycles since, given another similar reference graph that satisfies that property, one can increase their distance arbitrarily by adding extra nodes to, for example, the inner cycle. Alternatively, regular-path queries (Cruz, Mendelzon, and Wood 1987) allow one to specify paths between nodes through a regular expression. Different formalisms for this type of query exist in the literature (Angles et al. 2018; Fan et al. 2012; Reutter, Romero, and Vardi 2017; Wang et al. 2020; Zhang et al. 2016; Libkin, Martens, and Vrgoc 2013).
These are very expressive and useful, but do not allow one to check if a graph contains some complex subgraph structure. For example, it is not possible to specify a sequence of arbitrary nested loops with no external connections. Therefore, we propose Regular Graph Patterns (ReGaPs) as a generalization of graph isomorphism. The goal is to be able to define complex structural properties through a declarative specification in the form of a special graph. The proposed specification also borrows inspiration from regular expressions. This graph may contain special nodes, referred to as wildcards, representing arbitrary structures such as variable-sized sequences or subgraphs. These wildcards enable one to define compact representations of infinite sets of graphs. The main contributions of this paper are three-fold: (i) a generalization of graph isomorphism matching in the form of ReGaP matching; (ii) a novel Boolean Satisfiability (SAT) encoding for the ReGaP matching problem; and (iii) a graph simplification technique for improving the performance of the SAT solver. The proposed solution is evaluated using control-flow graphs extracted from the Python code snippets in the CodeSearchNet dataset (Husain et al. 2020). The ReGaPs replicate the kind of bad code patterns that are integrated in the AI Mentor Studio (OutSystems 2023) code analysis engine for the OutSystems visual programming language. Note that, although the evaluation focuses on a specific use case, the concept and algorithm are generic, and thus may be applied in other contexts. 2 Background In this section, we introduce the necessary background. We start with a brief introduction to graph isomorphism in Section 2.1, followed by an explanation of SAT in Section 2.2. 2.1 Graph Isomorphism Consider two graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$. $G_1$ and $G_2$ are isomorphic if and only if there exists a bijective mapping $f : V_1 \leftrightarrow V_2$ between the nodes of $V_1$ and $V_2$ such that for all $(u, v) \in E_1$, $(f(u), f(v)) \in E_2$ and vice-versa. In an attributed graph $G = (V, E)$, each node has m attributes, denoted as $A_V = \{a_V^1, \dots, a_V^m\}$. A node $v \in V$ is associated with an attribute vector $A_V(v) = \big(a_V^1(v), \dots, a_V^m(v)\big)$, where $a_V^i(v)$ is the value of attribute $a_V^i$ for node v. Similarly, each edge has n attributes, denoted as $A_E = \{a_E^1, \dots, a_E^n\}$, and $A_E((u, v)) = \big(a_E^1((u, v)), \dots, a_E^n((u, v))\big)$ is the attribute vector for edge (u, v). The definition of graph isomorphism must ensure consistency between all the attributes of a node/edge of $G_1$ and the respective equivalent in $G_2$, i.e. for all $u \in V_1$, $A_V(u) = A_V(f(u))$, and for all $(u, v) \in E_1$, $A_E((u, v)) = A_E((f(u), f(v)))$. 2.2 Boolean Satisfiability Let X be a set of Boolean variables. A literal l is either a variable $x \in X$ or its negation $\neg x$. A clause c is a disjunction of literals $(l_1 \vee \dots \vee l_k)$. A propositional logic formula F in Conjunctive Normal Form (CNF) is a conjunction of clauses $c_1 \wedge \dots \wedge c_n$. A complete assignment $\alpha : X \to \{0, 1\}$ is a function that assigns a Boolean value to each variable in X. A literal $x$ ($\neg x$) is satisfied by α if and only if $\alpha(x) = 1$ ($\alpha(x) = 0$). A clause c is satisfied by α if and only if at least one of its literals is satisfied. A CNF formula F is satisfied by α if and only if all of its clauses are satisfied. Given a CNF formula F, the SAT problem consists of deciding if there exists α which satisfies F. If so, then F is satisfiable and α is a model of F. Otherwise, F is unsatisfiable.
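To make the preceding definitions concrete, here is a minimal sketch — ours, not part of the paper — that checks satisfiability of a small CNF formula with the PySAT toolkit, the same toolkit the authors use in their evaluation (Section 6). The formula itself is an arbitrary toy example.

```python
# A toy CNF formula and a satisfiability check with PySAT's Glucose backend.
# F = (x1 v x2) ^ (~x1 v x3) ^ (~x2 v ~x3); variables are positive integers
# and a negated literal is the corresponding negative integer.
from pysat.solvers import Glucose4

clauses = [[1, 2], [-1, 3], [-2, -3]]

with Glucose4(bootstrap_with=clauses) as solver:
    if solver.solve():
        # get_model() returns a complete assignment alpha as signed integers,
        # e.g. [1, -2, 3] meaning x1 = 1, x2 = 0, x3 = 1.
        print("satisfiable, model:", solver.get_model())
    else:
        print("unsatisfiable")
```

If solve() returns True, the returned list is one model of F in the sense defined above; enumerating or blocking models is left to the solver's other facilities.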
Nowadays, most SAT solvers implement the conflict-driven clause learning algorithm (Audemard, Lagniez, and Simon 2013; Audemard and Simon 2009; Biere, Fleury, and Heisinger 2021; Liang et al. 2018; Marques-Silva and Sakallah 1996; Riveros 2021). Further details can be found in the literature (Biere et al. 2009). 3 Problem Definition The ReGaP matching problem consists of determining if a given ReGaP $P = (V_P, E_P)$ matches some graph $G = (V, E)$. We assume that G is non-attributed for now. A ReGaP is a graph such that some of the nodes in $V_P$ may be of a special type referred to as wildcard. We consider four wildcard types, inspired by regular expressions: • any-1+-sequence (any-0+-sequence). Represents a directed path $v_1, \dots, v_k$ of 1 (0) or more nodes such that, for each $i \in \{2..k\}$, the only edge in E towards $v_i$ is $(v_{i-1}, v_i)$. We use $W_P^{S+} \subseteq V_P$ ($W_P^{S*} \subseteq V_P$) to denote the set of all any-1+-sequence (any-0+-sequence) wildcards in $V_P$. • any-1+-subgraph (any-0+-subgraph). Represents a subgraph of 1 (0) or more nodes. We use $W_P^{G+} \subseteq V_P$ ($W_P^{G*} \subseteq V_P$) to denote the set of all any-1+-subgraph (any-0+-subgraph) wildcards in $V_P$. [Figure 1: An example of a graph (top, nodes v1 through v7) and a ReGaP (bottom, nodes A, S+, B, C, G+) that matches that graph. S+ represents an any-1+-sequence wildcard and G+ an any-1+-subgraph.] Example 1 Figure 1 shows an example of a graph G and ReGaP P that matches G. P contains an any-1+-sequence wildcard S+ and an any-1+-subgraph wildcard G+. We use $W_P^+$ ($W_P^*$) to denote the set of all any-1+ (any-0+) wildcards, i.e. $W_P^+ = W_P^{S+} \cup W_P^{G+}$ ($W_P^* = W_P^{S*} \cup W_P^{G*}$), $W_P^S$ ($W_P^G$) to denote the set of all sequence (subgraph) wildcards, i.e. $W_P^S = W_P^{S+} \cup W_P^{S*}$ ($W_P^G = W_P^{G+} \cup W_P^{G*}$), and $W_P$ to denote the set of all wildcards, i.e. $W_P = W_P^S \cup W_P^G$. The definition of matching between a ReGaP and a graph $G = (V, E)$ relies on a set of generalization rules depicted in Figure 2, which transform G into a generalized version G′, i.e. G′ is a ReGaP that matches G. For example, rule 1 replaces a non-wildcard $u \in V$ by an any-1+ wildcard, represented by the + node. W represents any wildcard type, while A represents any node. The rules must be applied in order [1] (e.g. rule 1 cannot be applied after an instance of rule 2). By default, no constraint is imposed on the subgraph in the left-hand side of a rule and the respective connections to other nodes in V. The special anti-node × is used to specify such constraints. For example, the second anti-node in rule 2 dictates that no edge $(u', S) \in E$ may exist such that $u' \neq u$. Such anti-nodes prevent non-directed paths from being generalized into an any-1+-sequence. For example, consider a generalized graph of the form u → S+ ← v. One cannot apply rule 2 on the edge (u, S+) due to the aforementioned anti-node and the existence of the edge (v, S+). Definition 1 Let $P = (V_P, E_P)$ be a ReGaP and G a graph. P is said to match G if and only if there exists a sequence of generalization rules transforming G into $G' = (V', E')$ such that there exists a bijective mapping $f : V_P \leftrightarrow V'$ that satisfies the following conditions: 1. for all $(u, v) \in E_P$, $(f(u), f(v)) \in E'$ and vice-versa. 2. for all $w \in W_P$, $f(w)$ is a wildcard of the same type. [1] Note that some rules are actually commutative, such as 7 and 8. However, others do need to follow the defined order, otherwise the definition would allow matches that do not make sense. For simplicity, a strict order is considered instead of a partial one.
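As a rough illustration of the formalism just defined, the snippet below sketches one possible in-memory representation of a ReGaP: a plain directed graph in which each node either is a regular node or carries one of the four wildcard types. The node names and the chain structure are hypothetical and invented for illustration; they are not the ReGaP of Figure 1.

```python
# A hypothetical representation (ours) of a small ReGaP: each node is tagged
# with None (regular node) or one of the four wildcard types from Section 3.
regap_nodes = {
    "A": None,                  # regular node
    "S": "any-0+-sequence",     # a wildcard in W_P^{S*}
    "B": None,
    "G": "any-1+-subgraph",     # a wildcard in W_P^{G+}
}
# Directed edges of the pattern: A -> S -> B -> G.
regap_edges = [("A", "S"), ("S", "B"), ("B", "G")]

# Derived wildcard sets, mirroring the notation W_P^{S*}, W_P^{G+}, W_P.
seq_wildcards = {n for n, t in regap_nodes.items() if t and "sequence" in t}
sub_wildcards = {n for n, t in regap_nodes.items() if t and "subgraph" in t}
wildcards = seq_wildcards | sub_wildcards
```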
[Figure 2 depicts the 14 generalization rules; each rule rewrites a small subgraph pattern on its left-hand side (possibly constrained by anti-nodes ×) into a wildcard node on its right-hand side. Figure 2: Rules for generalizing a graph G.] Example 2 Consider the example from Figure 1. By applying rules 1 and 2 to replace v2 and v3 with S+ and rules 1 and 6 to replace v6 and v7 with G+, one obtains a generalized graph that satisfies the conditions of Definition 1. Regarding time complexity, consider a non-deterministic algorithm that guesses the sequence of rule applications and the certificate of isomorphism proving that the resulting generalized graph does match P. In the worst case, rule 1 is applied to each node in V and rule 2 to each edge in E. Similarly, rule 3 is applied at most $2 \cdot |V_P| \cdot |E|$ times: 2 for both any-0+ wildcard types, and $|V_P|$ because $V_P$ may contain only any-0+ wildcards. The worst case number of rule applications is polynomial [2], and thus ReGaP matching is in NP. Graph isomorphism is a special case of ReGaP matching. The introduction of wildcards enables the compact specification of infinite sets of graphs. ReGaPs can be further extended with new wildcard types, such as optional nodes/edges and sequences/subgraphs with size limitations. [2] Due to space limitations, the worst case for each rule is included in the extended version of the paper (Terra-Neves et al. 2023). 4 ReGaP Matching Encoding Our approach reduces ReGaP matching to an instance of SAT. First we detail the base encoding for the special case where $V_P$ does not contain wildcards. The adaptations needed to support each wildcard type are explained in Sections 4.1 and 4.2. The full encoding relies on the implicit mapping that exists between the nodes on the left of a generalization rule and the respective generalized node on the right. For example, in rule 1, the node u is mapped to the any-1+ wildcard that is introduced in its place. In rule 2, the node u and the nodes mapped to S become mapped to the new S+. The base encoding is an adaptation of an encoding for maximum common subgraph available in the literature (Feng et al. 2017; Terra-Neves et al. 2021). It solves the original problem by mapping the nodes and edges of G into those of P. For simplicity, some constraints are shown as at-most-1 constraints, i.e. of the form $\sum_i l_i \le 1$, instead of clauses. Note that these can be converted to CNF by introducing the clause $(\neg l_i \vee \neg l_j)$ for each pair i, j such that $i \neq j$, or by using one of many CNF encodings available in the literature (Ansótegui and Manyà 2004; Chen 2010; Frisch et al. 2005; Klieber and Kwon 2007; Prestwich 2007). The following sets of Boolean variables are considered (a variable-allocation sketch follows below): • Inclusion variables. For each node $v_P \in V_P$, a variable $o_{v_P}$ is introduced to encode if some node of V is mapped to $v_P$, i.e. if there exists a node $v \in V$ such that $f(v) = v_P$ (i.e. $o_{v_P} = 1$) or not (i.e. $o_{v_P} = 0$). • Mapping variables. For each node pair $(v_P, v) \in V_P \times V$, a variable $m_{v_P,v}$ is used to encode if the node v is mapped to $v_P$. If $f(v) = v_P$, then $m_{v_P,v} = 1$, otherwise $m_{v_P,v} = 0$. • Control-flow variables. These variables are the analogue of the inclusion variables for edges. For each edge $(u_P, v_P) \in E_P$, a variable $c_{u_P,v_P}$ is used to encode if there exists an edge $(u, v) \in E$ mapped to $(u_P, v_P)$. If so, then $c_{u_P,v_P} = 1$, otherwise $c_{u_P,v_P} = 0$. (u, v) is said to be mapped to $(u_P, v_P)$ if both u and v are mapped to $u_P$ and $v_P$ respectively.
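The following sketch — our illustration, not the authors' released code — shows how these three variable families could be allocated with PySAT's IDPool, together with the pairwise at-most-1 encoding mentioned above. The sets pattern_nodes, pattern_edges and graph_nodes stand for $V_P$, $E_P$ and $V$, and their contents are toy placeholders.

```python
# A sketch of allocating the inclusion (o), mapping (m) and control-flow (c)
# variables of the base encoding with PySAT, plus the pairwise at-most-1
# encoding used for the one-to-one clauses.
from itertools import combinations
from pysat.formula import IDPool

# Toy placeholders for V_P, E_P and V (hypothetical, for illustration only).
pattern_nodes = ["A", "B"]
pattern_edges = [("A", "B")]
graph_nodes = [1, 2, 3]

pool = IDPool()  # hands out fresh integer variable ids keyed by arbitrary objects
o = {vp: pool.id(("o", vp)) for vp in pattern_nodes}            # inclusion vars
m = {(vp, v): pool.id(("m", vp, v))
     for vp in pattern_nodes for v in graph_nodes}              # mapping vars
c = {e: pool.id(("c", e)) for e in pattern_edges}               # control-flow vars

def at_most_one(lits):
    # Pairwise at-most-1 encoding: one clause (~li v ~lj) per pair i != j.
    return [[-a, -b] for a, b in combinations(lits, 2)]

# One-to-one clauses (equation 2): each graph node is mapped to at most one
# pattern node, and vice-versa.
clauses = []
for v in graph_nodes:
    clauses += at_most_one([m[(vp, v)] for vp in pattern_nodes])
for vp in pattern_nodes:
    clauses += at_most_one([m[(vp, v)] for v in graph_nodes])
```

Keying variables by tuples such as ("m", vp, v) keeps the mapping between SAT variables and encoding semantics explicit, which simplifies decoding a model back into a node mapping.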
The SAT formula contains the following clauses: • Inclusion clauses. Ensure consistency between the inclusion and the mapping variables, i.e. for each node $v_P \in V_P$, if $o_{v_P} = 1$, then at least one of the $m_{v_P,v}$ must also be set to 1 for some $v \in V$, and vice-versa. $\bigwedge_{v_P \in V_P} \big( o_{v_P} \leftrightarrow \bigvee_{v \in V} m_{v_P,v} \big)$ (1) • One-to-one clauses. Each node in V must be mapped to at most one node in $V_P$ and vice-versa. $\bigwedge_{v_P \in V_P} \big( \sum_{v \in V} m_{v_P,v} \le 1 \big) \wedge \bigwedge_{v \in V} \big( \sum_{v_P \in V_P} m_{v_P,v} \le 1 \big)$ (2) • Control-flow consistency clauses. Each edge in $E_P$ can only be mapped to edges that exist in E. More specifically, for each edge $(u_P, v_P) \in E_P$, if $(u, v) \notin E$, then either u is not mapped to $u_P$ (i.e. $m_{u_P,u} = 0$), v is not mapped to $v_P$ (i.e. $m_{v_P,v} = 0$), or no edge of E is mapped to $(u_P, v_P)$ (i.e. $c_{u_P,v_P} = 0$). $\bigwedge_{(u_P,v_P) \in E_P} \bigwedge_{(u,v) \in (V \times V) \setminus E} \big( \neg m_{u_P,u} \vee \neg m_{v_P,v} \vee \neg c_{u_P,v_P} \big)$ (3) • No spurious edge clauses. If an edge $(u_P, v_P) \in E_P$ is mapped to some edge in E, then $u_P$ and $v_P$ must also be mapped to nodes of V. $\bigwedge_{(u_P,v_P) \in E_P} \big( c_{u_P,v_P} \rightarrow o_{u_P} \wedge o_{v_P} \big)$ (4) • Node isomorphism clauses. All nodes in V must be mapped to a node in $V_P$ and vice-versa. $\bigwedge_{v_P \in V_P} (o_{v_P}) \wedge \bigwedge_{v \in V} \big( \bigvee_{v_P \in V_P} m_{v_P,v} \big)$ (5) • Edge isomorphism clauses. All edges in E must be mapped to an edge in $E_P$ and vice-versa. For each edge $(u, v) \in E$, we must ensure that, given a pair of nodes $u_P, v_P$ of $V_P$ such that $(u_P, v_P) \notin E_P$, then either u or v is not mapped to $u_P$ or $v_P$ respectively. $\bigwedge_{(u_P,v_P) \in E_P} (c_{u_P,v_P}) \wedge \bigwedge_{(u,v) \in E} \bigwedge_{(u_P,v_P) \in (V_P \times V_P) \setminus E_P} \big( \neg m_{u_P,u} \vee \neg m_{v_P,v} \big)$ (6) 4.1 Sequence Wildcards In order to support any-0+-sequence wildcards, each $w \in W_P^{S*}$ must be expanded by replacing it with k non-wildcard nodes $w_1, \dots, w_k$, as well as $k - 1$ edges, one for each $(w_i, w_{i+1})$ such that $i \in \{1..k-1\}$. The choice of value for k is discussed at the end of this section. We use $V_P^{exp}(w) = \{w_1, \dots, w_k\}$ to denote the set of non-wildcard nodes added to replace w, and $E_P^{exp/mid}(w) = \{(w_1, w_2), \dots, (w_{k-1}, w_k)\}$ to denote the set of edges added between the nodes of $V_P^{exp}(w)$. Each edge $(u_P, w) \in E_P$ is replaced by the edge $(u_P, w_1)$. We use $E_P^{exp/in}(w)$ to denote the set of edges added to replace each such $(u_P, w)$. Similarly, each $(w, v_P) \in E_P$ is replaced by k edges $(w_i, v_P)$, one for each $i \in \{1..k\}$. We use $E_P^{exp/out}(w, v_P)$ to denote the set of all new edges added to replace $(w, v_P)$. Additionally, we use $S_P(w) \subset V_P$ to denote the set of successors of w, i.e. $S_P(w) = \{v_P \in V_P : (w, v_P) \in E_P\}$, and $E_P^{exp/out}(w)$ to denote the union of all sets $E_P^{exp/out}(w, v_P)$, i.e. $E_P^{exp/out}(w) = \bigcup_{v_P \in S_P(w)} E_P^{exp/out}(w, v_P)$. Extra edges $(u_P, v_P)$ are added from each predecessor $u_P$ of w to each successor $v_P \in S_P(w)$. We use $E_P^{exp/skip}(u_P, w)$ to denote the set of edges added from $u_P$ to the nodes in $S_P(w)$. Additionally, we use $B_P(w) \subset V_P$ to denote the set of predecessor nodes of w, i.e. $B_P(w) = \{u_P \in V_P : (u_P, w) \in E_P\}$, and $E_P^{exp/skip}(w, v_P)$ to denote the set of edges added from the nodes in $B_P(w)$ to $v_P$. We use $E_P^{exp/skip}(w)$ to denote the union of all such sets, i.e. $E_P^{exp/skip}(w) = \bigcup_{u_P \in B_P(w)} E_P^{exp/skip}(u_P, w)$. Lastly, we use $E_P^{exp}(w)$ to denote all edges added when replacing w, i.e. $E_P^{exp}(w) = E_P^{exp/in}(w) \cup E_P^{exp/mid}(w) \cup E_P^{exp/skip}(w) \cup E_P^{exp/out}(w)$. Let $P^{exp} = (V_P^{exp}, E_P^{exp})$ denote the graph that results from the expansion. The encoding is built using $P^{exp}$ instead of P, with the following changes: • Node isomorphism clauses.
For each wildcard $w \in W_P^{S*}$, the nodes in $V_P^{exp}(w)$ are optional and thus must be excluded from equation (5) as follows: $\bigwedge_{v_P \in V_P^{exp} \setminus \bigcup_{w \in W_P^{S*}} V_P^{exp}(w)} (o_{v_P}) \wedge \bigwedge_{v \in V} \big( \bigvee_{v_P \in V_P^{exp}} m_{v_P,v} \big)$ (7) • Edge isomorphism clauses. Analogously, the edges in $E_P^{exp}(w)$ must be excluded from equation (6) as follows: $\bigwedge_{(u_P,v_P) \in E_P^{exp} \setminus \bigcup_{w \in W_P^{S*}} E_P^{exp}(w)} (c_{u_P,v_P}) \wedge \bigwedge_{(u,v) \in E} \bigwedge_{(u_P,v_P) \in (V_P^{exp} \times V_P^{exp}) \setminus E_P^{exp}} \big( \neg m_{u_P,u} \vee \neg m_{v_P,v} \big)$ (8) However, extra clauses are necessary to ensure that an optional edge $(u_P, v_P)$ is mapped to the edge $(u, v) \in E$ when u and v are mapped to $u_P$ and $v_P$ respectively. $\bigwedge_{(u_P,v_P) \in E_P^{exp}} \bigwedge_{(u,v) \in E} \big( m_{u_P,u} \wedge m_{v_P,v} \rightarrow c_{u_P,v_P} \big)$ (9) • Sequence clauses. A sequence node $w_i$ can be mapped to some node in V only if each of the sequence nodes that precede $w_i$ has some node of V mapped to it. $\bigwedge_{w \in W_P^{S*}} \bigwedge_{w_i \in V_P^{exp}(w), i \ge 2} \big( o_{w_i} \rightarrow o_{w_{i-1}} \big)$ (10) • Incoming any-0+-sequence control-flow clauses. If a node in V is mapped to a wildcard $w \in W_P^{S*}$, then each incoming edge of w must be mapped to an edge in E. $\bigwedge_{w \in W_P^{S*}} \bigwedge_{(u_P,v_P) \in E_P^{exp/in}(w)} \big( o_{w_1} \rightarrow c_{u_P,v_P} \big)$ (11) • Outgoing any-0+ control-flow clauses. The analogue of equation (11) for the outgoing edges of w. $\bigwedge_{w \in W_P^{S*}} \bigwedge_{v_P \in S_P(w)} \big( o_{w_1} \rightarrow \bigvee_{(u_P,v_P) \in E_P^{exp/out}(w,v_P)} c_{u_P,v_P} \big)$ (12) Note that, while $w_1$ must always be the first sequence node mapped to some node of V, the last such sequence node $w_l$ can vary depending on the number l of nodes of V mapped to w. Only the outgoing edges of $w_l$ can be mapped to some edge in E. $\bigwedge_{w \in W_P^{S*}} \bigwedge_{(w_i,v_P) \in E_P^{exp/out}(w), i \le k-1} \big( o_{w_{i+1}} \rightarrow \neg c_{w_i,v_P} \big)$ (13) • Skip any-0+-sequence control-flow clauses. For each wildcard $w \in W_P^{S*}$, if no node in V is mapped to w and w has at least one successor, then each predecessor $u_P$ of w must have at least one of the edges that connect $u_P$ to one of the successors of w mapped to some edge in E. $\bigwedge_{w \in W_P^{S*}, |S_P(w)| > 0} \bigwedge_{u_P \in B_P(w)} \big( \neg o_{w_1} \rightarrow \bigvee_{(u_P,v_P) \in E_P^{exp/skip}(u_P,w)} c_{u_P,v_P} \big)$ (14) The analogue of equation (14) for the successors: $\bigwedge_{w \in W_P^{S*}, |B_P(w)| > 0} \bigwedge_{v_P \in S_P(w)} \big( \neg o_{w_1} \rightarrow \bigvee_{(u_P,v_P) \in E_P^{exp/skip}(w,v_P)} c_{u_P,v_P} \big)$ (15) Lastly, if a node in V is mapped to w, then the edges in E cannot be mapped to an edge in $E_P^{exp/skip}(w)$. $\bigwedge_{w \in W_P^{S*}} \bigwedge_{(u_P,v_P) \in E_P^{exp/skip}(w)} \big( o_{w_1} \rightarrow \neg c_{u_P,v_P} \big)$ (16) Note that the choice of k must ensure that $P^{exp}$, together with the aforementioned changes to the encoding, retains the same semantics as P. One (naive) solution is to set $k = |V|$. Moreover, for the sake of simplicity, the encoding, as described, assumes that P does not contain edges between wildcards. The expansion of such wildcards is actually an iterative process. Therefore, an edge $(w, w') \in E_P$ between a pair of wildcards $(w, w') \in W_P^{S*} \times W_P^{S*}$ ends up being replaced by k edges $(w_i, w'_1)$ from each node $w_i \in V_P^{exp}(w)$ to the first non-wildcard node $w'_1$ introduced to replace $w'$. Additional edges must also be added from $w_1, \dots, w_k$ and the predecessors of w to the successors of $w'$. Given the above encoding, any-1+-sequence wildcards are supported by replacing each such $w \in W_P^{S+}$ with a non-wildcard node $w_1$, an any-0+-sequence $w'$ and the edge $(w_1, w')$, and by setting the destination (source) of all incoming (outgoing) edges of w to $w_1$ ($w'$). 4.2 Subgraph Wildcards In order to support any-1+-subgraph wildcards, each $w \in W_P^{G+}$ is replaced by a copy of G, i.e. w is replaced by a $w_v$ node for each $v \in V$ and a $(w_u, w_v)$ edge for each $(u, v) \in E$.
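To illustrate the expansion just described, here is a small sketch — ours, with toy placeholder inputs — that replaces an any-1+-subgraph wildcard w by a copy of G and fans the incoming and outgoing edges of w out to every copied node.

```python
# A sketch of the any-1+-subgraph expansion: wildcard w is replaced by a
# full copy of the target graph G, and every edge entering or leaving w is
# fanned out to all |V| copies. Inputs are hypothetical toy examples.
graph_nodes = [1, 2, 3]            # V of the target graph G
graph_edges = [(1, 2), (2, 3)]     # E of the target graph G

def expand_subgraph_wildcard(w, p_nodes, p_edges):
    copies = [(w, v) for v in graph_nodes]              # V_P^exp(w): one node per v in V
    mid = [((w, u), (w, v)) for (u, v) in graph_edges]  # E_P^exp/mid(w): copied edges
    new_edges = []
    for (a, b) in p_edges:
        if b == w:                                      # (u_P, w) -> |V| incoming edges
            new_edges += [(a, cp) for cp in copies]
        elif a == w:                                    # (w, v_P) -> |V| outgoing edges
            new_edges += [(cp, b) for cp in copies]
        else:
            new_edges.append((a, b))
    new_nodes = [n for n in p_nodes if n != w] + copies
    return new_nodes, new_edges + mid

# Usage: expand the wildcard "G+" in the pattern A -> G+.
nodes, edges = expand_subgraph_wildcard("G+", ["A", "G+"], [("A", "G+")])
```

The quadratic fan-out visible here is also the source of the blow-up discussed in Section 6.3 for patterns with edges between wildcards.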
As in Section 4.1, $V_P^{exp}(w)$ ($E_P^{exp/mid}(w)$) denotes the set of node (edge) copies created to replace w. Each edge $(u_P, w) \in E_P$ is replaced by $|V|$ edges $(u_P, w_v)$ from $u_P$ to each $w_v \in V_P^{exp}(w)$. We use $E_P^{exp/in}(u_P, w)$ to denote the set of new edges that replace $(u_P, w)$. Similarly, each edge $(w, v_P) \in E_P$ is replaced by $|V|$ edges $(w_v, v_P)$ from each $w_v \in V_P^{exp}(w)$ to $v_P$. $E_P^{exp/out}(w, v_P)$ denotes the set of new edges that replace $(w, v_P)$. The encoding in Section 4.1 can be adapted to support any-1+-subgraph wildcards by replacing $W_P^{S*}$ with $W_P^{S*} \cup W_P^{G+}$ in equation (7), equation (8) and equation (9), plus the following new clauses: • Any-1+-subgraph inclusion clauses. At least one node in V must be mapped to each wildcard $w \in W_P^{G+}$. $\bigwedge_{w \in W_P^{G+}} \big( \bigvee_{v_P \in V_P^{exp}(w)} o_{v_P} \big)$ (17) • Incoming any-1+-subgraph control-flow clauses. Each incoming edge of $w \in W_P^{G+}$ must be mapped to some edge in E. $\bigwedge_{w \in W_P^{G+}} \bigwedge_{u_P \in B_P(w)} \big( \bigvee_{(u_P,v_P) \in E_P^{exp/in}(u_P,w)} c_{u_P,v_P} \big)$ (18) • Outgoing any-1+ control-flow clauses. The analogue of equation (18) for the outgoing edges. $\bigwedge_{w \in W_P^{G+}} \bigwedge_{v_P \in S_P(w)} \big( \bigvee_{(u_P,v_P) \in E_P^{exp/out}(w,v_P)} c_{u_P,v_P} \big)$ (19) If $w \in W_P^{G*}$ is an any-0+-subgraph wildcard, an extra set of edges from each predecessor $u_P \in B_P(w)$ to each successor $v_P \in S_P(w)$ must also be added as described in Section 4.1. $W_P^{S*} \cup W_P^{G+}$ must be replaced by $W_P$ in equation (7), equation (8) and equation (9), and new clauses must be added encoding the incoming, outgoing and skip control-flow of w. Given that w is an any-0+ wildcard, the nodes in $V_P^{exp}(w)$ are optional, and thus the clauses in equation (17) do not apply. The full encoding is available in the extended version (Terra-Neves et al. 2023). 5 Attributed ReGaP Matching In the attributed ReGaP matching problem, G is an attributed graph and P defines constraints over the attributes of the nodes/edges of G. There are three types of constraints: • Node constraints. Each node $v_P \in V_P$ is assigned a node constraint $\phi_{v_P}$ over the attributes $A_V$ of the nodes in V. Given a node $v \in V$, we use $\phi_{v_P}(v) = 1$ ($\phi_{v_P}(v) = 0$) to denote that v satisfies (does not satisfy) $\phi_{v_P}$. • Edge constraints. Analogously, an edge constraint $\phi_{(u_P,v_P)}$ is associated with each edge $(u_P, v_P) \in E_P$, and $\phi_{(u_P,v_P)}((u, v))$ denotes if the edge $(u, v) \in E$ satisfies $\phi_{(u_P,v_P)}$. • Node pair relation constraints. A node pair relation constraint $\psi_{u_P,v_P}$ is associated with each node pair $u_P, v_P \in V_P$ such that $u_P \neq v_P$. Note that the existence of a node pair relation constraint between $u_P$ and $v_P$ does not imply that the edge $(u_P, v_P)$ exists. The definition for the attributed problem must ensure that these constraints are satisfied. We assume that node and node pair relation constraints cannot be associated with wildcards. Definition 2 Let $P = (V_P, E_P)$ be a ReGaP and G an attributed graph. P is said to match G if and only if there exists a sequence of generalization rules transforming G into $G' = (V', E')$ such that there exists a bijective mapping $f : V_P \leftrightarrow V'$ that satisfies the following conditions: 1. for all $(u, v) \in E_P$, $(f(u), f(v)) \in E'$ and vice-versa. 2. for all $w \in W_P$, $f(w)$ is a wildcard of the same type. 3. for all $v \in V_P \setminus W_P$, $\phi_v(f(v)) = 1$. 4. for all $(u, v) \in E_P$, $\phi_{(u,v)}((f(u), f(v))) = 1$. 5. for all $u, v \in (V_P \setminus W_P) \times (V_P \setminus W_P)$ such that $u \neq v$, $\psi_{u,v}(f(u), f(v)) = 1$. 5.1 Encoding This section describes how to adapt the encoding in Section 4 for attributed matching. First, one must set the constraints for the extra nodes/edges added by wildcard expansion.
Given a wildcard w of any type, the node constraint for each $w_i \in V_P^{exp}(w)$ is set to $\phi_{w_i}(v) \equiv 1$, i.e. any node of V can be mapped to $w_i$. The same applies to the edge constraints for each edge in $E_P^{exp/mid}(w)$. Given a predecessor $u_P \in B_P(w)$, the edge constraint for each $(u_P, w_i) \in E_P^{exp/in}(u_P, w)$ is set to $\phi_{(u_P,w_i)}((u, v)) \equiv \phi_{(u_P,w)}((u, v))$. If w is an any-0+ wildcard, the constraints for the edges in $E_P^{exp/skip}(u_P, w)$ are set in the same way. Lastly, given a successor $v_P \in S_P(w)$, the edge constraint for each $(w_i, v_P) \in E_P^{exp/out}(w, v_P)$ is $\phi_{(w_i,v_P)}((u, v)) \equiv \phi_{(w,v_P)}((u, v))$. The following additional clauses are necessary: • Node constraint consistency clauses. The node constraints in P must be satisfied. More specifically, if a node $v \in V$ does not satisfy the node constraint of a node $v_P \in V_P$, then v cannot be mapped to $v_P$. $\bigwedge_{v_P \in V_P} \bigwedge_{v \in V, \phi_{v_P}(v) = 0} \big( \neg m_{v_P,v} \big)$ (20) • Node pair relation constraint consistency clauses. The node pair relation constraints in P must be satisfied. More specifically, for each pair of nodes $u_P, v_P$ of $V_P$ and $u, v$ of V, if u is mapped to $u_P$ and the pair u, v does not satisfy the node pair relation constraint of $u_P, v_P$, then v cannot be mapped to $v_P$. $\bigwedge_{(u_P,v_P) \in V_P \times V_P, u_P \neq v_P} \bigwedge_{u \in V} \big( m_{u_P,u} \rightarrow \bigvee_{v \in V, u \neq v \wedge \psi_{u_P,v_P}(u,v) = 1} m_{v_P,v} \big)$ (21) Edge constraint satisfaction is ensured by changing the subscript of the inner conjunctions in equation (3) and equation (8) to exclude edges such that $\phi_{(u_P,v_P)}((u, v)) = 0$. 5.2 Node Merging Wildcard expansion (see Sections 4.1 and 4.2) can have a severe impact on the size of the encoding, and thus the performance of the SAT solver. To mitigate this, we propose a sound and complete procedure that merges sequences of nodes in G that do not satisfy any node constraints in P, since such nodes can only be mapped to wildcards, regardless of the wildcard type, as long as P does not contain edges between wildcards. Proposition 1 Consider an attributed graph $G = (V, E)$ and a ReGaP $P = (V_P, E_P)$ with no edges between wildcards, i.e. $|W_P \cap \{u_P, v_P\}| \le 1$ for all $(u_P, v_P) \in E_P$, and an edge $(u, v) \in E$ such that: for all $v_P \in V_P \setminus W_P$, $\phi_{v_P}(u) = \phi_{v_P}(v) = 0$; for all $(u', v) \in E$, $u' = u$; and, for all $(u, v') \in E$, $v' = v$. Let $G' = (V', E')$ be an attributed graph such that: • $V' = V \setminus \{u\}$; $A'_V(v') = A_V(v')$ for all $v' \in V'$; • $E' = [E \setminus (\{(u, v)\} \cup \{(u', u) : (u', u) \in E\})] \cup \{(u', v) : (u', u) \in E\}$; • $A'_E((u', v')) = A_E((u', v'))$ for all $(u', v') \in E \cap E'$; • $A'_E((u', v)) = A_E((u', u))$ for all $(u', u) \in E$. P matches G if and only if P matches G′. [Figure 3: An example of an attributed ReGaP (top: node A with constraint x < 0, followed by S+, followed by node B with constraint x = 0), an attributed graph G (nodes v1, v2, v3, v4 with attribute x = −1, 1, 2, 0 respectively) that matches the ReGaP, and an attributed graph G′ (nodes v′1, v′2, v′3, v′4 with the same attributes, plus an edge from v′2 to v′4) that does not match.] Example 3 Consider the example ReGaP and attributed graphs in Figure 3. Node v1 satisfies the node constraint of A, while v4 satisfies the constraint of B. However, v2 and v3 do not satisfy either constraint. Therefore, we can safely merge v2 and v3 into a single node. On the other hand, v′1 and v′4 also satisfy the node constraints for A and B respectively, while v′2 and v′3 do not. However, v′2 and v′3 cannot be merged because there exists an edge from v′2 to v′4, thus v′2 and v′3 cannot be mapped to the same sequence wildcard. Due to space limitations, the proof is included in the extended version (Terra-Neves et al. 2023).
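The preprocessing step that Proposition 1 licenses (described next) can be sketched as follows — our simplification, not the authors' implementation, and ignoring the attribute bookkeeping of Proposition 1 for brevity. The callback satisfies_some_constraint is a placeholder for the test "$\phi_{v_P}(\text{node}) = 1$ for some non-wildcard $v_P \in V_P$".

```python
# A sketch of node merging: contract an edge (u, v) whenever neither endpoint
# satisfies any node constraint of P, the only edge entering v is (u, v), and
# the only edge leaving u is (u, v), per Proposition 1.
def merge_chain_nodes(nodes, edges, satisfies_some_constraint):
    nodes, edges = list(nodes), set(edges)
    changed = True
    while changed:
        changed = False
        for (u, v) in list(edges):
            if u == v or satisfies_some_constraint(u) or satisfies_some_constraint(v):
                continue
            only_in = all(a == u for (a, b) in edges if b == v)   # v's in-edges come from u
            only_out = all(b == v for (a, b) in edges if a == u)  # u's out-edges go to v
            if only_in and only_out:
                # Drop u and (u, v); redirect u's incoming edges to v.
                edges = {(a, v) if b == u else (a, b)
                         for (a, b) in edges if (a, b) != (u, v)}
                nodes.remove(u)
                changed = True
                break
    return nodes, edges

# Toy usage mirroring Example 3: in the chain 1 -> 2 -> 3 -> 4 only nodes 1
# and 4 satisfy some node constraint, so nodes 2 and 3 get merged.
demo = merge_chain_nodes([1, 2, 3, 4], {(1, 2), (2, 3), (3, 4)},
                         lambda n: n in (1, 4))
```

On the toy input this yields nodes [1, 3, 4] and edges {(1, 3), (3, 4)}, i.e. the two unconstrained chain nodes collapse into one, exactly the reduction Example 3 describes.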
Based on Proposition 1, if P does not contain edges between wildcard nodes, we apply a preprocessing step that repeatedly transforms G into G′ until no more edges (u, v) exist in E satisfying the respective criteria. 6 Experimental Evaluation This section evaluates the performance of the SAT-based approach for ReGaP matching. For the attributed graphs, we used a collection of control-flow graphs extracted from the Python code snippets in the CodeSearchNet dataset (Husain et al. 2020). The ReGaPs used in the evaluation replicate the kind of bad code patterns that are integrated in the AI Mentor Studio (OutSystems 2023) code analysis engine for the OutSystems visual programming language, which uses ReGaPs as a formalism for specifying such patterns. One concrete example of a bad performance pattern that occurs frequently in OutSystems is a database query Q1, followed by a loop that iterates the output of Q1 and performs another query Q2 with a filter by the current record of Q1. Typically, Q2 can be merged with Q1 through a join condition, resulting in just 1 query instead of N + 1, where N is the number of records returned by Q1. The graph dataset and ReGaPs are publicly available [3], plus an executable that can be used to replicate the results presented in this evaluation. 13 ReGaPs are considered in this evaluation, containing up to 5 wildcards. We consider only the 72 812 graphs with at least 15 nodes, because the ReGaP matcher was always able to solve the instances with smaller graphs in under 7 seconds, resulting in a total of 946 556 instances. The maximum number of nodes is 1070, with a median of 21. Out of the 946 556 instances, 130 136 are known to be satisfiable, 791 426 are unsatisfiable, and 24 994 are unknown. Further statistics regarding the dataset and ReGaPs are available in the extended version (Terra-Neves et al. 2023). SAT formulas were solved using PySAT (version 0.1.7.dev21) (Ignatiev, Morgado, and Marques-Silva 2018), configured to use the Glucose solver (version 4.1) (Audemard, Lagniez, and Simon 2013) with default settings. All experiments were run once with a timeout of 60 seconds [4] on an AWS m5a.24xlarge instance with 384 GB of RAM. Different experiments were split across 84 workers running in parallel, each running Glucose sequentially. We aim to answer the following research questions regarding the performance of the proposed approach: R1: What is the impact of the node merging step? R2: What is the impact of the graph size? R3: What is the impact of the number of wildcards? Recall from Section 1 that, although ReGaPs and regular-path queries are related, there exist structures expressible with ReGaPs that are not expressible using regular-path queries. Therefore, ReGaPs solve a fundamentally different problem, hence the lack of comparison with the state of the art in regular-path queries. [3] https://github.com/MiguelTerraNeves/regap [4] The 60-second timeout is what is considered acceptable in the context of AI Mentor Studio. 6.1 Impact of Node Merging Table 1 shows the impact of node merging on the size of the graph and the SAT encoding. We only consider instances for which the ReGaP matcher did not time out before the encoding was complete, both with and without node merging. Moreover, one of the ReGaPs used in this evaluation contains an edge between wildcards. The respective instances are also not considered, since node merging is not applicable in this scenario (see Proposition 1).
The reduction in the number of nodes is 15.4% on average, which translates to 25.7% fewer clauses in the SAT encoding, on average. We observed that the overhead of node merging is at most 1 second for these instances.

Table 1: Comparison of the graph and encoding size with and without node merging.
         | Base                                   | With Node Merging
         | min    max         median   average    | min   max         median   average
#Nodes   | 15     1 070       21       26         | 1     1 070       18       22
#Edges   | 14     1 335       24       30.5       | 0     1 335       21       26
#Var     | 285    31 601      1 745    2 661      | 5     25 259      1 423    2 097
#Const   | 9 675  21 079 098  228 175  659 547    | 15    17 343 652  164 140  490 259

Figure 4 compares the execution time, in seconds, with and without node merging. The ReGaP with the edge between wildcards is also not considered in this figure. Node merging has a significant impact on performance, being able to solve many more instances faster than the base encoding. For example, 42 498 more instances are solved in less than 10 seconds with node merging. 971 of these instances result in a timeout without node merging. The base encoding resulted in timeouts for 34 110 out of the 946 556 instances. With node merging, this value is reduced to 26 487. However, some instances are actually solved more slowly with node merging. In fact, node merging times out for 940 instances that are solved with the base encoding. We observed that node merging resulted in very little reduction for these specific instances: 0% median reduction and only 1.8% at the 90th percentile. When it is 0%, the time required by the base algorithm is very close to the timeout: around 58 seconds on average. Therefore, these timeouts are likely due to noise introduced by worker contention in the parallel experimental environment. When the reduction is very small, the formula's variables and constraints change slightly, which can trigger unpredictable behavior in the solver. Because the size of the formula is very similar, the slightly smaller formula can be harder to solve. [Figure 4: Execution time with node merging versus without.] 6.2 Impact of Graph Size Figure 5 shows the timeouts and solved instances as a function of graph size, considering only the ReGaP with an edge between wildcards. The maximum number of nodes (edges) among solved instances is 50 (64), while the minimum for the instances that time out is 34 (38). This hints at a strong potential for further performance improvements by investing in further simplification of the graph. We observed the same behavior for the remaining ReGaPs, with some variation regarding the maximum graph size among solved instances and the minimum for the ones that resulted in timeouts. [Figure 5: Timeouts as a function of graph size.] 6.3 Impact of the Number of Wildcards Table 2 compares the number of timeouts and execution times obtained without node merging for each ReGaP.

Table 2: The number of wildcards (W column) and timeouts, and the baseline execution times for each ReGaP.
ReGaP        | W | Timeouts       | Exec Time (s): avg   med   std
call loops   | 1 | 337 (0.5%)     | 1.3   0.2   4.0
call         | 2 | 930 (1.3%)     | 2.7   0.6   5.8
ma loops     | 2 | 1 071 (1.5%)   | 3.1   0.8   6.2
afa loops s  | 3 | 2 003 (2.7%)   | 4.8   2.1   7.9
bterm loose  | 3 | 8 026 (11.0%)  | 9.6   5.0   11.5
bterm        | 3 | 2 010 (2.8%)   | 4.5   1.8   7.8
fcall loops  | 3 | 1 733 (2.4%)   | 4.5   1.8   7.7
ma           | 3 | 1 785 (2.4%)   | 4.5   1.8   7.8
afa loops    | 4 | 2 915 (4.0%)   | 6.4   3.2   9.1
fcall        | 4 | 2 737 (3.8%)   | 6.2   3.1   9.0
mfy loops    | 4 | 2 757 (3.8%)   | 6.4   3.2   9.1
afa          | 5 | 3 983 (5.5%)   | 8.0   4.2   10.1
mfy          | 5 | 3 823 (5.2%)   | 7.9   4.2   10.1
The rows are sorted by the respective number of wildcards. Note that the execution time statistics do not consider instances that resulted in a timeout. In most cases, the number of timeouts and the mean/median execution time seem to grow with the number of wildcards. The algorithm also seems to become more unstable as the number of wildcards grows, as evidenced by the increase in the standard deviation. The obvious exception is the bterm loose ReGaP, the one with edges between wildcard nodes, which was expected for the following reason. Let us consider an edge (w, w′) between, for example, two any-1+-subgraph wildcards w and w′. The expansion of w replaces the edge (w, w′) by |V| new edges from each of the non-wildcard nodes inserted in place of w to w′. In turn, each of those edges will be replaced by another |V| edges when w′ is expanded as well. Note that this is a preliminary evaluation and further experiments should be performed with a more diverse set of ReGaPs. Also, this pattern of performance degradation as the number of wildcards increases is not so clear with node merging. Depending on the narrowness of the constraints, node merging can have a significant impact on performance. 7 Conclusions & Future Work Solving graph matching problems has many applications. In particular, it is an essential tool for code analysis in visual programming languages. However, the state of the art focuses either on solving graph isomorphism, approximated graph matching or regular-path queries. We propose ReGaP matching, an extension of graph isomorphism that allows one to check complex structural properties through declarative specifications. We propose a SAT encoding for solving ReGaP matching and a simplification technique for reducing encoding size, thus improving the performance of the SAT solver. An extensive experimental evaluation carried on benchmarks from the CodeSearchNet dataset (Husain et al. 2020) shows the effectiveness of the proposed approach. In the future, we plan to extend ReGaPs with new types of wildcards (e.g. optional nodes/edges and sequence/subgraph wildcards with size limitations). We also wish to explore more compact encodings for ReGaP matching that do not rely on wildcard expansion, as well as further evaluate the impact of the number of wildcards and overall structure of the ReGaPs on the performance of the algorithm. Other SAT solvers and alternative automated reasoning frameworks, such as constraint programming (Rossi, van Beek, and Walsh 2006) and satisfiability modulo theories (de Moura and Bjørner 2011), should also be evaluated. We can also explore tighter bounds for the value of k used for expansion, and an algorithm that starts with a small value for k and iteratively increments k until the formula becomes satisfiable or an upper bound is reached. Lastly, we also plan to develop tools for synthesizing ReGaPs from positive and negative examples. Note that ReGaP matching is a general problem that we believe has the potential to be useful in other applications that deal with graph data, such as computational biology, chemistry and network analysis. We hope to see future work explore such applications. References Angles, R.; Arenas, M.; Barceló, P.; Boncz, P. A.; Fletcher, G. H. L.; Gutiérrez, C.; Lindaaker, T.; Paradies, M.; Plantikow, S.; Sequeda, J. F.; van Rest, O.; and Voigt, H. 2018. G-CORE: A Core for Future Graph Query Languages. In Proceedings of the International Conference on Management of Data (SIGMOD), 1421–1432. ACM. Ansótegui, C.; and Manyà, F.
2004. Mapping Problems with Finite-Domain Variables into Problems with Boolean Variables. In Proceedings of the 7th International Conference on Theory and Applications of Satisfiability Testing (SAT), 1–15. Springer.
Audemard, G.; Lagniez, J.; and Simon, L. 2013. Improving Glucose for Incremental SAT Solving with Assumptions: Application to MUS Extraction. In Proceedings of the 16th International Conference on Theory and Applications of Satisfiability Testing (SAT), 309–317. Springer.
Audemard, G.; and Simon, L. 2009. Predicting Learnt Clauses Quality in Modern SAT Solvers. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 399–404.
Auwatanamongkol, S. 2007. Inexact graph matching using a genetic algorithm for image recognition. Pattern Recognition Letters, 28(12): 1428–1437.
Biere, A.; Fleury, M.; and Heisinger, M. 2021. CaDiCaL, Kissat, Paracooba Entering the SAT Competition 2021. In Proceedings of the SAT Competition 2021 – Solver and Benchmark Descriptions, Department of Computer Science Report Series B, 10–13. University of Helsinki.
Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds. 2009. Handbook of Satisfiability, volume 185 of Frontiers in Artificial Intelligence and Applications. IOS Press.
Bunke, H. 1997. On a relation between graph edit distance and maximum common subgraph. Pattern Recognition Letters, 18(8): 689–694.
Carletti, V.; Foggia, P.; and Vento, M. 2013. Performance Comparison of Five Exact Graph Matching Algorithms on Biological Databases. In Proceedings of the ICIAP International Workshop on New Trends in Image Analysis and Processing, 409–417. Springer.
Chen, J. 2010. A New SAT Encoding of the At-Most-One Constraint. In Proceedings of the 9th International Workshop on Constraint Modelling and Reformulation.
Conte, D.; Foggia, P.; Sansone, C.; and Vento, M. 2004. Thirty Years Of Graph Matching In Pattern Recognition. International Journal of Pattern Recognition and Artificial Intelligence, 18(3): 265–298.
Cordella, L. P.; Foggia, P.; Sansone, C.; and Vento, M. 1999. Performance Evaluation of the VF Graph Matching Algorithm. In Proceedings of the 10th International Conference on Image Analysis and Processing (ICIAP), 1172–1177. IEEE Computer Society.
Cruz, I. F.; Mendelzon, A. O.; and Wood, P. T. 1987. A Graphical Query Language Supporting Recursion. In Dayal, U.; and Traiger, I. L., eds., Proceedings of the International Conference on Management of Data (SIGMOD), 323–330. ACM.
Dahm, N.; Bunke, H.; Caelli, T.; and Gao, Y. 2012. Topological features and iterative node elimination for speeding up subgraph isomorphism detection. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR), 1164–1167. IEEE Computer Society.
de Moura, L. M.; and Bjørner, N. S. 2011. Satisfiability modulo theories: introduction and applications. Communications of the ACM, 54(9): 69–77.
Fan, W.; Li, J.; Ma, S.; Tang, N.; and Wu, Y. 2012. Adding regular expressions to graph reachability and pattern queries. Frontiers of Computer Science, 6(3): 313–338.
Feng, Y.; Bastani, O.; Martins, R.; Dillig, I.; and Anand, S. 2017. Automated Synthesis of Semantic Malware Signatures using Maximum Satisfiability. In Proceedings of the 24th Annual Network and Distributed System Security Symposium. The Internet Society.
Foggia, P.; Percannella, G.; and Vento, M. 2014. Graph Matching and Learning in Pattern Recognition in the Last 10 Years.
International Journal of Pattern Recognition and Artificial Intelligence, 28(1).
Frisch, A. M.; Peugniez, T. J.; Doggett, A. J.; and Nightingale, P. 2005. Solving Non-Boolean Satisfiability Problems with Stochastic Local Search: A Comparison of Encodings. Journal of Automated Reasoning, 35(1-3): 143–179.
Husain, H.; Wu, H.-H.; Gazit, T.; Allamanis, M.; and Brockschmidt, M. 2020. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search. arXiv:1909.09436.
Ignatiev, A.; Morgado, A.; and Marques-Silva, J. 2018. PySAT: A Python Toolkit for Prototyping with SAT Oracles. In Proceedings of the 21st International Conference on Theory and Applications of Satisfiability Testing (SAT), 428–437. Springer.
Klieber, W.; and Kwon, G. 2007. Efficient CNF encoding for selecting 1 from n objects. In Proceedings of the International Workshop on Constraints in Formal Verification, 39.
Larrosa, J.; and Valiente, G. 2002. Constraint Satisfaction Algorithms for Graph Pattern Matching. Mathematical Structures in Computer Science, 12(4): 403–422.
Liang, J. H.; Oh, C.; Mathew, M.; Thomas, C.; Li, C.; and Ganesh, V. 2018. Machine Learning-Based Restart Policy for CDCL SAT Solvers. In Proceedings of the 21st International Conference on Theory and Applications of Satisfiability Testing (SAT), 94–110. Springer.
Libkin, L.; Martens, W.; and Vrgoc, D. 2013. Querying graph databases with XPath. In Proceedings of the 16th Joint International Conference on Extending Database Technology and International Conference on Database Theory (EDBT/ICDT), 129–140. ACM.
Livi, L.; and Rizzi, A. 2013. The graph matching problem. Pattern Analysis and Applications, 16(3): 253–283.
Marques-Silva, J. P.; and Sakallah, K. A. 1996. GRASP - a new search algorithm for satisfiability. In Proceedings of the International Conference on Computer-Aided Design (ICCAD), 220–227. IEEE Computer Society / ACM.
OutSystems. 2023. Manage technical debt. https://success.outsystems.com/documentation/11/managing the applications lifecycle/manage technical debt/. Accessed: 2023-12-11.
Park, Y. H.; Reeves, D. S.; Mulukutla, V.; and Sundaravel, B. 2010. Fast malware classification by automated behavioral graph matching. In Proceedings of the 6th Cyber Security and Information Intelligence Research Workshop (CSIIRW), 45. ACM.
Piotrowski, P.; and Madeyski, L. 2020. Software Defect Prediction Using Bad Code Smells: A Systematic Literature Review. Data-Centric Business and Applications, 77–99.
Prestwich, S. D. 2007. Variable Dependency in Local Search: Prevention Is Better Than Cure. In Proceedings of the 10th International Conference on Theory and Applications of Satisfiability Testing (SAT), 107–120. Springer.
Raymond, J. W.; and Willett, P. 2002. Maximum common subgraph isomorphism algorithms for the matching of chemical structures. Journal of Computer-Aided Molecular Design, 16(7): 521–533.
Reutter, J. L.; Romero, M.; and Vardi, M. Y. 2017. Regular Queries on Graph Databases. Theory of Computing Systems, 61(1): 31–83.
Riveros, O. 2021. SLIME SAT Solver. In Proceedings of the SAT Competition 2021 – Solver and Benchmark Descriptions, Department of Computer Science Report Series B, 37. University of Helsinki.
Rossi, F.; van Beek, P.; and Walsh, T., eds. 2006. Handbook of Constraint Programming, volume 2 of Foundations of Artificial Intelligence. Elsevier.
Sanfeliu, A.; and Fu, K. 1983. A distance measure between attributed relational graphs for pattern recognition.
IEEE Transactions on Systems, Man, and Cybernetics, 13(3): 353–362.
Singh, J.; Chowdhuri, S. R.; Gosala, B.; and Gupta, M. 2021. Detecting design patterns: a hybrid approach based on graph matching and static analysis. Information Technology and Management, 23(3): 139–150.
Terra-Neves, M.; Amaral, J.; Lemos, A.; Quintino, R.; Resende, P.; and Alegria, A. 2023. SAT-Based Algorithms for Regular Graph Pattern Matching. arXiv:2312.09995.
Terra-Neves, M.; Nadkarni, J.; Ventura, M.; Resende, P.; Veiga, H.; and Alegria, A. 2021. Duplicated code pattern mining in visual programming languages. In Proceedings of the 29th Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 1348–1359. ACM.
Ullmann, J. R. 1976. An Algorithm for Subgraph Isomorphism. Journal of the ACM, 23(1): 31–42.
Ullmann, J. R. 2010. Bit-vector algorithms for binary constraint satisfaction and subgraph isomorphism. ACM Journal of Experimental Algorithmics, 15: 1.1–1.64.
Wang, X.; Wang, Y.; Xu, Y.; Zhang, J.; and Zhong, X. 2020. Extending Graph Pattern Matching with Regular Expressions. In Proceedings of the 31st International Conference on Database and Expert Systems Applications (DEXA), 111–129. Springer.
Zampelli, S.; Deville, Y.; and Solnon, C. 2010. Solving subgraph isomorphism problems with constraint programming. Constraints, 15(3): 327–353.
Zaslavskiy, M.; Bach, F. R.; and Vert, J. 2009. Global alignment of protein-protein interaction networks by graph matching methods. Bioinformatics, 25(12): 1259–1267.
Zhang, X.; Feng, Z.; Wang, X.; Rao, G.; and Wu, W. 2016. Context-Free Path Queries on RDF Graphs. In Proceedings of the 15th International Semantic Web Conference (ISWC), 632–648. Springer.
Zou, Y.; Ban, B.; Xue, Y.; and Xu, Y. 2020. CCGraph: a PDG-based code clone detector with approximate graph matching. In Proceedings of the 35th International Conference on Automated Software Engineering (ASE), 931–942. IEEE/ACM.
CEGAR-Based Approach for Solving Combinatorial Optimization Modulo Quantified Linear Arithmetics Problems Kerian Thuillier1, Anne Siegel1, Loïc Paulevé2 1 Univ. Rennes, Inria, CNRS, IRISA, UMR6074, F-35000 Rennes, France 2 Univ. Bordeaux, Bordeaux INP, CNRS, LaBRI, UMR5800, F-33400 Talence, France [email protected], [email protected], [email protected] Abstract Bioinformatics has always been a prolific domain for generating complex satisfiability and optimization problems. For instance, the synthesis of multi-scale models of biological networks has recently been associated with the resolution of optimization problems mixing Boolean logic and universally quantified linear constraints (OPT+qLP), which can be benchmarked on real-world models. In this paper, we introduce a Counter-Example-Guided Abstraction Refinement (CEGAR) approach to solve such problems efficiently. Our CEGAR exploits monotone properties inherent to linear optimization in order to generalize counter-examples of Boolean relaxations. We implemented our approach by extending the Answer Set Programming (ASP) solver CLINGO with a quantified linear constraints propagator. Our prototype enables exploiting independence of sub-formulas to further exploit the generalization of counter-examples. We evaluate the impact of refinement and partitioning on two sets of OPT+qLP problems inspired by systems biology. Additionally, we conducted a comparison with the state-of-the-art ASP solver Clingo[lpx], which handles non-quantified linear constraints, showing the advantage of our CEGAR approach for solving large problems. Introduction Satisfiability (SAT) solving has proven to be highly successful in addressing a wide range of real-world combinatorial satisfiability problems across various fields. In the last decades, many applications in bioinformatics have been formulated as complex combinatorial satisfiability and optimization problems according to biological knowledge and data. For decision-aided tasks, life-scientists then take advantage of sampling the full space of solutions in order to prioritize future experiments. Therefore, challenges reside both in solving such complex combinatorial problems on large-scale and real-world instances but also in enumerating part, if not all, of the set of solutions. Traditionally, the problems addressed in life-sciences were either linear programming and optimization (LP) problems (Orth, Thiele, and Palsson 2010; von Kamp and Klamt 2014) or Boolean optimization problems (Videla et al. 2017; Chevalier et al. 2019). In this case, efficient approaches based on Answer Set Programming (ASP), a logic programming framework for symbolic satisfiability problems (Baral 2003), have been developed. They take advantage of the ability of modern ASP solvers, like Clingo (Gebser et al. 2017), to support various reasoning modes, Boolean optimization, and model enumeration. A recent evolution in life-sciences is the emergence of hybrid optimization problems combining Boolean logic and linear constraints (Frioux et al. 2019; Mahout, Carlson, and Peres 2020). ASP solvers handling quantifier-free linear constraints, like Clingo[lpx] (Janhunen et al. 2017), have been developed to solve such hybrid optimization problems, by extending the ASP solver with a DPLL-adapted simplex algorithm (Dutertre and De Moura 2006) used by modern Satisfiability Modulo Theory (SMT) solvers.
A new class of complexity appeared recently with the problem of inferring metabolic regulatory rules, which is formulated as a hybrid optimization problem with one level of quantified linear constraints (Thuillier et al. 2022) and associated with real-world benchmarks. The goal of this paper is to investigate efficient solutions to solve this new class of hybrid optimization problems, which we denote as OPT+qLP. The state-of-the-art strategy to solve OPT+qLP problems is to rely on quantifier elimination to get back to quantifier-free hybrid optimization problems. There is an equivalence between universally quantified linear constraints and constraints on the optimum of LP problems. Hence, based on the strong duality theorem, universally quantified linear constraints can be converted into equi-satisfiable quantifier-free linear constraints through a dual transformation. This allows tackling OPT+qLP problems with standard hybrid approaches, as offered by Clingo[lpx] and SMT solvers. An alternative lies in the Counter-Example-Guided Abstraction Refinement (CEGAR) method (Clarke et al. 2003). While sharing similarities with the DPLL algorithm (Nieuwenhuis, Oliveras, and Tinelli 2006) used in modern SMT solvers, the CEGAR approach makes it easy to compose solvers for different tasks, including Boolean optimization and enumeration problems. The strength of the CEGAR approach therefore lies in its generic and solver-independent nature, which allows for taking advantage of the structure of linear problems. It has been widely applied to the solving of quantified Boolean formulas (Janota et al. 2016) and SMT problems (Brummayer and Biere 2008; Barrett and Tinelli 2018). However, CEGAR approaches have not been applied so far to OPT+qLP problems. In this paper, we introduce a CEGAR-based algorithm to solve and enumerate models of OPT+qLP problems. Our approach refines a Boolean abstraction of the OPT+qLP problem using monotone properties of LP problem structures and linear constraints partitioning. We rely on the resolution of a formula, a Boolean abstraction, that subsumes the models of the OPT+qLP problem. If this abstraction is unsatisfiable, then so is the OPT+qLP problem. Otherwise, a model of the Boolean abstraction is found. This model is a solution to the OPT+qLP problem if it satisfies the quantified linear constraints. Otherwise, it is a counter-example, and the abstraction is refined with additional constraints derived from the counter-example. This iterative process continues until either the OPT+qLP problem is proven to be unsatisfiable or all its models have been enumerated. To implement it, we developed a prototype based on ASP and evaluated its performance on real-world benchmarks based on biological models. Additionally, we conducted a comparison with Clingo[lpx] and compared the performance regarding both quantifier elimination and linear constraints partitioning. Combinatorial Optimization Problems Modulo Quantified Linear Constraints We focus on combinatorial optimization problems whose constraints merge propositional logic and quantified linear arithmetics (OPT+qLP). The quantified linear constraints are restricted to one level of quantifier. Solving OPT+qLP problems aims at finding variable assignments, or models, satisfying SAT+qLP constraints while minimizing a given objective function. Let $x \in \mathbb{B}^n$ denote Boolean variables and $y \in \mathbb{R}^m$ real-valued variables.
We consider SAT+qLP formulas of the following form:

    ⋀_{c ∈ C} c(x)                                                      (1a)
    ∧ ⋀_{d ∈ D} d(x, y)                                                 (1b)
    ∧ ∀z ∈ R^p, ( ⋀_{e ∈ E} e(x, z) ) =⇒ ( ⋀_{h ∈ H} h(x, z) )          (1c)

where C denotes Boolean clauses of the form ⋁_i x_i ∨ ⋁_j ¬x_j, and D (resp. E, H) denotes hybrid clauses of the form "⋁_i x_i ∨ ⋁_j ¬x_j ∨ f(y) ≤ 0" (resp. "⋁_i x_i ∨ ⋁_j ¬x_j ∨ f(z) ≤ 0"), with f denoting linear functions over the reals. Given a hybrid clause c ∈ D ∪ E ∪ H, we will denote by f_c its linear constraint f_c(y) ≤ 0 (resp. f_c(z) ≤ 0).

Universally quantified linear constraints are modeled by Eq. 1c. The first part of the implication (⋀_{e ∈ E} e(x, z)) defines the domain D(x) of the universal real-valued variables z according to x. The domain D(x) is a subset of R^p and contains all z ∈ R^p such that (x, z) satisfies ⋀_{e ∈ E} e(x, z). Eq. 1c is therefore equivalent to ∀z ∈ D(x), ⋀_{h ∈ H} h(x, z).

Let φ be a SAT+qLP formula of the form of Eq. 1. A variable assignment (x, y) ∈ B^n × R^m is a model of φ if and only if it satisfies φ, i.e., (x, y) |= φ. The formula φ is unsatisfiable, denoted by ⊭ φ, if there is no model ν satisfying φ. Otherwise, φ is satisfiable.

The SAT+qLP satisfiability problem can be extended into an OPT+qLP optimization problem by considering only the models (x, y) of φ that minimize an objective function over Boolean variables g : B^n → R:

    minimize g(x)                                                        (2a)
    such that: (x, y) |= φ                                               (2b)
    with x ∈ B^n, y ∈ R^m

For the rest, let (g, φ) be an instance of an OPT+qLP problem. A pair (x, y) ∈ B^n × R^m is a model of (g, φ), denoted by (x, y) |= (g, φ), if and only if Eqs. 2a and 2b are verified.

Many applications can benefit from a comprehensive characterization of the solution space of satisfiability and optimization problems. Thus, in addition to searching for a model of an OPT+qLP problem, we will also consider the enumeration of up to k different models of it.

Example. Let ψ be the SAT+qLP formula of Fig. 1a over Boolean variables x1, x2, x3. It has no existentially quantified real-valued variables and 2 universally quantified real-valued variables z1, z2. Using the notations of Eq. 1, ψ has 1 Boolean clause (C = {(x1 ∨ x2 ∨ x3)}) and 4 hybrid clauses (D = ∅, E = {(z2 ≥ 1 ∨ ¬x1), (z1 + z2 ≤ 1 ∨ ¬x2), (−z1 + z2 ≤ 0 ∨ ¬x3)}, H = {(z2 ≤ 0.6)}). Fig. 1b gives a graphical representation of the linear constraints.

For the rest, we will write a model ν as a set such that a Boolean variable xi belongs to ν if and only if xi = ⊤. Among the 8 possible assignments of (x1, x2, x3), only 2 satisfy ψ: ν1 = {x2, x3} and ν2 = {x1, x2, x3}. For the former, the set of hybrid clauses E is true if and only if at least z1 + z2 ≤ 1 and −z1 + z2 ≤ 0 hold. As shown in Fig. 1b, all assignments of (z1, z2) matching these two constraints satisfy z2 ≤ 0.6. For the latter, there is no assignment of (z1, z2) that satisfies all hybrid clauses in E. Let g : B^3 → R be an objective function such that g(x1, x2, x3) = |x1| + |x2| + |x3|, with |xi| = 1 if xi = ⊤ and 0 otherwise. Let (g, ψ) be an OPT+qLP problem. Its only model is {x2, x3} (g({x2, x3}) = 2 and g({x1, x2, x3}) = 3).

Contribution: A CEGAR for Solving OPT+qLP

We present a CEGAR-based approach for addressing OPT+qLP problems. Algorithm 1 summarizes the overall procedure. First, we define a Boolean abstraction (g, φapprox) of the OPT+qLP problem (g, φ), such that (g, φ) =⇒ (g, φapprox) (line 2, see details below). Next, we introduce two necessary conditions (lines 3 and 4, see details below) to ensure that there exists a model of (g, φ) given a model of (g, φapprox).
If at least one of the two conditions fails, then φapprox is refined by generalizing the counter-examples that fail them (line 8, see details below). Finally, we propose a quantified linear constraints partitioning method to increase the efficiency of refinement functions. Proofs of the properties, lemmas, and theorems of this section are provided in the technical appendix (Thuillier, Siegel, and Paulev´e 2023). Boolean Abstractions of OPT+qLP Problems Let c be a hybrid clause over Boolean variables x ∈Bn and real-valued variables y ∈Rm of the form “W i xi W j ¬xj ∨ The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8147 ψ = (x1 ∨x2 ∨x3) ∧∀z ∈R2, (z2 ≥1 ∨¬x1) ∧ (z1 + z2 ≤1 ∨¬x2) ∧ (−z1 + z2 ≤0 ∨¬x3) ! =⇒z2 ≤0.6 (a) Example SAT+qLP problem ψ. ψapprox = (x1 ∨x2 ∨x3) ∧(α ∨¬x1) ∧(β ∨¬x2) ∧(γ ∨¬x3) ∧δ (c) Boolean abstraction ψapprox of ψ described in (a). (b) Visual representation of the quantified linear constraints. No assignments of z1 and z2 can satisfy the three linear constraints z2 ≥1, z1 + z2 ≤1 and −z1 + z2 ≤0. (d) Hasse diagram of all the quantified linear constraints subsets of the example OPT+qLP problem (Fig. 1a) with their optimums. Red block is unsatisfiable. Blocks with dashed borders are the optimal cores of the green block. Blue blocks are the subsets of {α; β}. Figure 1: Example of SAT+qLP formula ψ (a) over three Boolean variables (x1, x2, x3) and two universally quantified realvalued variables (z1, z2). Visual representations of the four linear constraints involved in ψ are shown in (b). In (c) and (d), α, β, γ, δ are Boolean variables associated with the linear constraints z2 ≥1, z1 + z2 ≤1, −z1 + z2 ≤0 and z2 ≤0.6, respectively. The Boolean abstraction ψapprox is defined in (c) following Eqs. 4. (d) shows the maximum value of z2 for each subset of linear constraints. Algorithm 1: CEGAR for solving OPT+qLP problem Input: an OPT+qLP problem (g, φ) of the form Eq. 2 Output: a model (x, y) ∈Bn × Rm s.t. (x, y) |= (g, φ) 1: φapprox ←a Boolean abstraction of φ of the form Eq. 4 2: while ∃(x, ¯f) |= (g, φapprox) do 3: if ∃y |= CD x then 4: if ̸|= CE x or ∀h ∈CH x , f ∗ h(CE x ) ≤0 then 5: return x, y 6: end if 7: end if 8: φapprox ←φ∃ r(x) ∧φ∀ r(x) ∧φapprox 9: end while 10: return UNSAT fc(y) ≤0”. A Boolean abstraction ¯c of c is a Boolean clause over the Boolean variables x ∈Bn and ¯fc ∈B. The clause ¯c is defined by Eq. 3. _ i xi _ j ¬xj ∨¯fc denoted by ¯c(x, ¯fc) (3) Let φ be a SAT+qLP formula with C its set of Boolean clauses and D, E, H its sets of hybrid clauses. Let ¯d, ¯e and ¯h denote Boolean abstractions of the hybrid clauses d ∈D, e ∈E and h ∈H, respectively. We define the Boolean abstraction of φ as the following SAT formula: ^ c∈C c(x) (4a) ∧ ^ d∈D ¯d(x, ¯fd) (4b) ∧ ^ e∈E ¯e(x, ¯fe) ∧ ^ h∈H ¯h(x, ¯fh) (4c) Theorem 1 (φ ⇒φapprox). Let φ a SAT+qLP problem and φapprox its Boolean abstraction. For any model (x, y) ∈Bn× Rm of φ, there exists ¯f ∈B|D|+|E|+|H| such that (x, ¯f) is a model of φapprox. From the above theorem, one can remark that the value g(x) of the objective function on any model (x, y) of an OPT+qLP problem (g, φ) is the same on the corresponding model of φapprox. In Algorithm 1, the abstraction (g, φapprox) of the OPT+qLP problem (g, φ) is computed line 1. In line 2, the search for (g, φapprox) models can be performed using a pure Boolean optimization solver. By Theorem 1, if (g, φapprox) is unsatisfiable, then so is (g, φ). Example. Consider the OPT+qLP problem (g, ψ) from the previous example. 
Let α, β, γ, δ be four Boolean variables associated with the linear constraints z2 ≥1, z1 + z2 ≤1, −z1 + z2 ≤0 and z2 ≤0.6, respectively. The set of Boolean variables associated with linear constraints is ¯f = The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8148 {α, β, γ, δ}. The Boolean abstraction of ψ is the SAT formula ψapprox defined by Fig. 1c. Formula ψ has two models ν1 = {x2, x3} and ν2 = {x1, x2, x3}. Using the conversion procedure used to prove Theorem 1, ¯ν1 = {x2, x3, β, γ, δ} and ¯ν2 = {x1, x2, x3, α, β, γ, δ} are two models of ψapprox. The model ν1 is the only model of (g, ψ). It has the optimal score g∗= 2. The model ¯ν1 associated with ν1 has the same score. Ensuring Quantified Linear Constraints Let C be a set of linear constraints of the form f(y) ≤0. A variable assignment y ∈Rm is a model of C, denoted by y |= C, if and only if y |= V f∈C f(y) ≤0. Given f : Rm →R a linear function, y ∈Rm is a model of the linear optimization problem (f, C) if and only if y |= C and it maximizes the objective function f, i.e. ∀y′ ∈Rm, y′ |= C =⇒f(y′) ≤f(y). The optimum value of (f, C) will be denoted by f ∗(C) = maxy|=C f(y). Let Ch be a set of hybrid clauses and x ∈Bn a Boolean variable assignment. For x to be a model of Ch, it must exist y ∈Rm such that each hybrid clause h ∈Ch is satisfied by either x or y. Let CCh x be the set of linear constraints of clauses for which x is not a model: CCh x = {fc(y) ≤0|c ∈Ch, x ̸|= c} (5) Hence, given c ∈Ch and (x, ¯fc) |= ¯c(x, ¯fc), if fc ∈CCh x then ¯fc = ⊤. Otherwise, x would be a model of ¯c(x, ¯fc). Theorem 2. Let φ be a SAT+qLP formula and φapprox its Boolean abstraction. Given x ∈Bn and y ∈Rm, (x, y) |= φ if and only if the following three conditions hold: (C1) ∃¯f, (x, ¯f) |= φapprox; (C2) y |= CD x ; (C3) (̸|= CE x ) ∨(V h∈CH x f ∗ h(CE x ) ≤0). Theorem 2 can be further extended for OPT+qLP problems. Let (g, φ) be an OPT+qLP problem and (g, φapprox) its Boolean abstraction. Any variable assignment (x, y) ∈ Bn × Rm minimizing g and satisfying C1, C2 and C3 is a model of (g, φ). Corollary 2.1. Given x ∈Bn and y ∈Rm a real-valued variables assignment, if (C1’) ∃¯f, (x, ¯f) |= (g, φapprox), C2 and C3 hold, then (x, y) |= (g, φ). In Algorithm 1, the condition C1’ is ensured if a model (x, ¯f) of (g, φapprox) is found (line 2). Condition C2 is ensured in line 3 by finding a model y of the set of linear constraints CD x using a linear programming (LP) solver. C2 holds only if y exists. Finally, condition C3 is ensured in line 4. If CE x is satisfiable, a linear optimization problem (fh, CE x ) is solved for each fh ∈CH x . The linear optimization problems are solved using LP solvers. Each optimum f ∗ h(CE x ) is then compared to 0. If at least one optimum is strictly greater than 0, then C3 does not hold. If the three conditions C1’, C2 and C3 hold, (x, y) |= (g, φ) is returned. Otherwise, (x, ¯f) is a counter-example. Example. Consider the OPT+qLP problem (g, ψ) and its Boolean abstraction ψapprox (Fig. 1c) from the previous example. The variable assignment {x1, α, δ} is a model of ψapprox that minimize g, with g({x1, α}) = 1. By Corollary 2.1, {x1} is also a model of (g, ψ) if either ̸|= {z2 ≥ 1} or if the linear optimization problem (fδ(z1, z2) = z2, {z2 ≥1}) has an optimum less or equals to 0.6. From Fig. 1b, we can see that {z2 ≥1} is satisfiable and that f ∗ δ ({z2 ≥1}) is +∞. Therefore, C3 does not hold and {x1, δ} is not a model of (g, ψ). The variable assignment {x1, α, δ} is a counter-example. 
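In practice, the checks behind conditions C2 and C3 reduce to standard LP feasibility and maximization calls. The following Python sketch illustrates them with the PuLP library (on which the prototype described later builds); the encoding of each linear constraint as a (coefficient dictionary, bound) pair and the helper names are assumptions made for illustration, not the prototype's actual code.

import pulp

def satisfies_C2(constraints, var_names):
    # C2: does some real-valued y satisfy every constraint sum(a_v * y_v) <= b?
    prob = pulp.LpProblem("C2_check", pulp.LpMinimize)
    y = {v: pulp.LpVariable(v) for v in var_names}  # free real variables
    for coeffs, bound in constraints:
        prob += pulp.lpSum(coeffs[v] * y[v] for v in coeffs) <= bound
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.LpStatus[prob.status] == "Optimal"

def optimum(objective, constraints, var_names):
    # C3: maximize f_h over C_E^x (assumed satisfiable beforehand); the caller
    # then compares the optimum f*_h(C_E^x) to the required bound (0 in Eq. 1).
    prob = pulp.LpProblem("C3_check", pulp.LpMaximize)
    z = {v: pulp.LpVariable(v) for v in var_names}
    prob += pulp.lpSum(objective[v] * z[v] for v in objective)
    for coeffs, bound in constraints:
        prob += pulp.lpSum(coeffs[v] * z[v] for v in coeffs) <= bound
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[prob.status] != "Optimal":  # unbounded: C3 cannot hold
        return float("inf")
    return pulp.value(prob.objective)

# Running example with x = {x1}: C_E^x = {z2 >= 1}, written as -z2 <= -1.
# The maximum of z2 is unbounded, so C3 fails, matching Fig. 1b.
print(optimum({"z2": 1.0}, [({"z2": -1.0}, -1.0)], ["z1", "z2"]))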
Counter-Examples Generalization Let φ be a SAT+qLP formula and φapprox its Boolean abstraction. Theorem 2 states that for any model ¯ν = (x, ¯f) of φapprox there is a corresponding model ν of φ if conditions C2 and C3 hold. If either C2 or C3 is not satisfied, then ¯ν is a counter-example. From ¯ν, new Boolean logic constraints φr(¯ν) can be deduced and used to refine φapprox. The new Boolean abstraction of φ becomes φapprox ∧φr(¯ν), such that φ =⇒φapprox ∧φr(¯ν). Existential counter-example. Suppose that (x, ¯f) does not satisfy C2. The set of linear constraints CD x is unsatisfiable, i.e. ̸|= CD x . Therefore, any supersets of linear constraints of CD x will be unsatisfiable too. An unsatisfiable core (Cunsat) of a given set of linear constraints C is the smallest subset of C for which ̸|= Cunsat. In other words, for all C′ ⊂Cunsat, there exists a vector y ∈Rm that satisfies C′. When C is satisfiable, Cunsat is an empty set. Unsatisfiable cores have been widely used in SMT solvers and CEGAR-based approaches for generalizing sets of unsatisfiable constraints (Cimatti, Griggio, and Sebastiani 2011; Khasidashvili, Korovin, and Tsarkov 2015). Let Cunsat be an unsatisfiable core of CD x . The refinement function φ∃ r(x) is defined by Eq. 6. φ∃ r(x) = _ f∈Cunsat ¬ ¯f (6) Note that refinement function φ∃ r(x) does not generate any constraints if C2 holds (Cunsat = ∅). Lemma 3. φ =⇒φapprox ∧φ∃ r(x). Universal counter-example. Suppose that (x, ¯f) does not satisfy C3. This implies that there is at least one hybrid clause h ∈H such that CE x is satisfiable and f ∗ h(CE x ) > 0. Then, any model (x′, y′) such that CE x′ ⊆CE x will be such f ∗ h(CE x′) > 0, as stated by the following property: Property 4. Given a linear objective function f and two linear optimization problems (f, C1) and (f, C2), C1 ⊆C2 =⇒f ∗(C1) ≥f ∗(C2). Similarly to unsatisfiable cores, we can introduce the notion of optimal cores. Given a linear objective function f and a set of linear constraints C, an optimal core is a biggest superset Cf opt of C such that Cf opt is satisfiable and f ∗(C) = f ∗(Cf opt). Let Cf opt be an optimal core of (f, CE x ). The refinement function φ∀ r(x) is defined by Eq. 7. φ∀ r(x) = ^ h∈CH x f ∗ h(CE x )>0 ¬ ¯fh ∨ _ e∈E fe̸∈C fh opt ¯fe (7) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8149 Lemma 5. φ =⇒φapprox ∧φ∀ r(x) Constraints generated by the refinement functions φ∃ r(x) and φ∀ r(x) do not involve the same sets of variables. Therefore, φ∃ r(x) ∧φ∀ r(x) ∧φapprox still subsumes φ. Theorem 6. Given (x, ¯f) |= φapprox, φ =⇒ φ∃ r(x) ∧ φ∀ r(x) ∧φapprox. Corollary 6.1. (g, φ) =⇒φ∃ r(x) ∧φ∀ r(x) ∧φapprox. Corollary 6.2. ∀ν∗|= (g, φ) =⇒∃ν′ |= φ∃ r(x) ∧φ∀ r(x) ∧ φapprox, g(ν′) = g(ν∗). Algorithm 1 refines the Boolean abstraction φapprox in line 8. Corollaries 6.1 and 6.2 ensure that the refined Boolean abstraction is still an overapproximation of (g, φ). Therefore, Corollary 2.1 still holds for the next iteration. Example. Consider ψapprox as defined in Fig. 1c and the counter-example {x1, α, δ} find previously. This counterexample satisfies C2 since there are no existentially quantified linear constraints in ψ. Hence, φ∃ r({x1}) does not generate any constraints. However, it fails to satisfy C3. A Hasse diagram of all the subsets of the set of linear constraints of ψ is shown in Fig. 1d. It can be seen that {α} has two optimal cores: {α, β} and {α, γ}. The set {α, β, γ} is not an optimal core since it is not satisfiable. 
All linear optimization problems whose linear constraints are either a subset of {α, β} or of {α, γ} will also fail C3. Suppose that the optimal core {α, β} has been selected by the refinement function φ∀_r({x1}). It will generate the constraint ¬δ ∨ γ, and it will prohibit selecting any model containing a subset of {α, β} (blue and green boxes in Fig. 1d).

Partitioning Quantified Linear Constraints

Let (g, φ) be an OPT+qLP problem with (g, φapprox) its Boolean abstraction. The linear constraints of φ can be partitioned to exploit the sparsity of the underlying linear optimization problems. Let P = {P_1, ..., P_k} be a partition of the linear constraints of φ such that (i) no two linear constraints share variables among different subsets; and (ii) each subset contains either existentially quantified linear constraints or universally quantified linear constraints.

Let (x, f̄) |= (g, φapprox). The set of linear constraints C_D^x can be partitioned into P_D^x according to the partition P. Deciding the satisfiability of C_D^x comes down to deciding the satisfiability of each subset P_i ∈ P_D^x. If at least one subset is unsatisfiable, so is C_D^x. Otherwise, there exists a model y_i for each subset P_i ∈ P_D^x, and {y_i}_i |= C_D^x.

Lemma 7. (∃y ∈ R^m, y |= C_D^x) ⟺ ⋀_{P_i ∈ P_D^x} (∃y_i, y_i |= P_i).

If (x, f̄) fails C2, one can exhibit a subset of the sets of P_D^x that are unsatisfiable. Unsatisfiable cores can be computed independently for each unsatisfiable set, which reduces the computational cost of finding unsatisfiable cores. Let 𝒞_unsat be the set of unsatisfiable cores associated with the unsatisfiable sets. The existential refinement function φ∃_r(x) can be reformulated as:

    φ∃_r(x) = ⋀_{C_unsat ∈ 𝒞_unsat} ⋁_{f ∈ C_unsat} ¬f̄        (8)

Benchmark                      | Small-scale | Large-scale
Instances SAT                  | 29          | 32
Instances UNSAT                | 31          | 28
Boolean variables              | 6.5 × 10^4  | 4 × 10^9
Existential real variables     | 2 × 10^3    | 8 × 10^3
Universal real variables       | 2 × 10^3    | 8 × 10^3
Boolean constraints            | 2.7 × 10^5  | 1.8 × 10^6
Existential linear constraints | 6 × 10^3    | 25 × 10^3
Universal linear constraints   | 6 × 10^3    | 25 × 10^3

Table 1: Benchmark descriptions. Only the order of magnitude of the number of constraints and variables is given.

Similarly, each linear constraint f_h ∈ C_H^x is partitioned with the linear constraints of C_E^x that can impact its value. Let P′ ∈ P be the part containing f_h, and P′_E^x the set of all linear constraints of C_E^x in P′.

Lemma 8. If C_E^x is satisfiable, then f*_h(C_E^x) = f*_h(P′_E^x).

If (x, f̄) fails C3, it is necessarily because there are not enough constraints in P′_E^x. Since only the linear constraints in P′ have an impact on f*_h, the computation of an optimal core P′_opt can be restricted to the set of linear constraints in P′. The universal refinement function φ∀_r(x) can be reformulated as:

    φ∀_r(x) = ⋀_{h ∈ C_H^x : f*_h(P′_E^x) > 0} ( ¬f̄_h ∨ ⋁_{e ∈ E : f_e ∉ P′_opt} f̄_e )        (9)

It is important to note that Theorem 6 still holds with these new definitions of φ∃_r and φ∀_r. They generate smaller refinement constraints and allow reducing the computational cost of finding unsatisfiable and optimal cores.

Experiments

We propose MERRINASP (https://github.com/kthuillier/merrinasp), an ASP-based implementation of Algorithm 1. It extends the Clingo solver, using its Python API, with a linear constraint propagator implemented with the Python PuLP library and the LP solver COIN (Lougee-Heimer 2003). Model enumeration is made through the Clingo solver, which keeps track of all refinements during the enumeration process. The partitioning is explicitly specified in the input problem.
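Putting the pieces together, the following solver-agnostic Python sketch shows the control flow of Algorithm 1. The callables passed as arguments (a Boolean optimization solver for the abstraction, and the LP-based checks and refinement functions described above) are hypothetical placeholders, so this illustrates the loop structure rather than the MERRINASP code itself.

def cegar_opt_qlp(g, phi_approx, solve_abstraction, check_C2, check_C3,
                  refine_existential, refine_universal):
    # phi_approx: the Boolean abstraction of phi (Eq. 4)
    while True:
        model = solve_abstraction(g, phi_approx)  # minimize g over the abstraction
        if model is None:                         # abstraction UNSAT, hence
            return None                           # (g, phi) is UNSAT (Theorem 1)
        x, f_bar = model
        y = check_C2(x)                           # a model of C_D^x, or None
        if y is not None and check_C3(x):         # all optima f*_h(C_E^x) <= 0
            return x, y                           # model of (g, phi) (Cor. 2.1)
        # (x, f_bar) is a counter-example: generalize it and refine (Theorem 6)
        phi_approx = phi_approx & refine_existential(x) & refine_universal(x)

For enumeration, each returned model can be blocked by a further constraint before resuming the loop, which mirrors how the prototype keeps all refinements during the enumeration of up to 100 models.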
[Figure 2 (plot panels omitted; only the caption and legend are recoverable): Runtime distribution of 4 configurations of our MERRINASP implementation of the CEGAR-based Algorithm 1 and Clingo[lpx] on OPT+qLP problem instances. Panels: (a) Benchmark Small-SAT, (b) Benchmark Small-UNSAT, (c) Benchmark Large-SAT, (d) Benchmark Large-UNSAT; axes: time (log10 seconds) vs. number of solved instances. Legend: clingo[lpx], merrinASP[P,Q], merrinASP[P,¬Q], merrinASP[¬P,Q], merrinASP[¬P,¬Q]; enumeration of 1 model vs. 100 models or unsatisfiable. All variants were applied to a benchmark built from a small-scale real biological model (panels (a) and (b), 60 instances) and a large-scale real biological model (panels (c) and (d), 60 instances). Small-scale and large-scale benchmarks contain both satisfiable instances (panels (a) and (c)) and unsatisfiable instances (panels (b) and (d)). The four configurations of MERRINASP include a partitioning option (P) and the use of universally quantified linear constraints (Q). Time is given in seconds in log10 scale. Dashed black horizontal lines represent the total number of instances.]

Benchmark   | P | Q | Deciding SAT time (s) | Enumeration time (s) | LP solver time (s) | Number of LP solver calls | Number of refinements
Small-SAT   | × | × | 18 761 ± 4 759        | 49 952 ± 18 515      | 3 812 ± 2 727      | 16 795 ± 2 364            | 2 ± 0
Small-SAT   | × | ✓ | 5 528 ± 1 498         | 2 116 ± 1 044        | 1 433 ± 223        | 9 944 ± 1 470             | 1 ± 0
Small-SAT   | ✓ | × | 28 ± 6                | 40 ± 11              | 34 ± 7             | 937 ± 111                 | 5 ± 1
Small-SAT   | ✓ | ✓ | 9 ± 1                 | 15 ± 3               | 15 ± 2             | 501 ± 41                  | 6 ± 1
Small-UNSAT | × | × | 5 143 ± 4 395         | NA                   | 1 112 ± 766        | 6 596 ± 3 723             | 1 ± 0
Small-UNSAT | × | ✓ | 247 ± 38              | NA                   | 137 ± 17           | 2 039 ± 115               | 1 ± 0
Small-UNSAT | ✓ | × | 30 ± 10               | NA                   | 24 ± 10            | 669 ± 221                 | 9 ± 4
Small-UNSAT | ✓ | ✓ | 10 ± 2                | NA                   | 7 ± 1              | 252 ± 54                  | 9 ± 4
Large-SAT   | ✓ | × | 3 163 ± 1 538         | 13 922 ± 1 946       | 801 ± 236          | 17 957 ± 5 032            | 41 ± 16
Large-SAT   | ✓ | ✓ | 183 ± 75              | 865 ± 112            | 121 ± 74           | 3 548 ± 2 184             | 21 ± 11
Large-UNSAT | ✓ | × | 739 ± 454             | NA                   | 374 ± 248          | 7 480 ± 4 673             | 17 ± 8
Large-UNSAT | ✓ | ✓ | 135 ± 19              | NA                   | 41 ± 11            | 1 155 ± 307               | 13 ± 3

Table 2: Comparative analysis of MERRINASP performance under different configurations (Partitioned (P), Quantified (Q)). Results are presented as average value ± standard deviation. Deciding SAT times denote the time needed to find a first model or to decide unsatisfiable. NA indicates information not available. Bold values indicate the best value among all configurations for the current benchmark.

Benchmark

Problem description. Regulatory flux balance analysis (rFBA) is a common model of dynamics of bacteria (Covert, Schilling, and Palsson 2001). The rFBA framework consists in sequentially solving maximum flow problems on weighted hypergraphs. The hyperedge capacities are updated at each step according to Boolean rules. Capacities are either set to 0 or to their initial value. The metabolic regulatory rules inference problem (Thuillier et al. 2022) is an inverse problem. Given a weighted hypergraph and sequences of observed maximum flows, it consists in inferring a set of Boolean rules controlling the hyperedge capacities matching the sequences of observations. For each observation, it must find which capacities were set to 0 for the maximum flow to match the observation. In this problem, Boolean clauses delimit admissible Boolean rules according to biological knowledge.
For each observation, existential constraints ensure the existence of a corresponding flow, while universal constraints ensure that no flow is strictly higher than the observed one. We refer the reader to the above-mentioned paper for a formal definition of the problem. Benchmark description. We conducted experiments using MERRINASP on real-world benchmarks of metabolic regulatory rules inference problems (Thuillier, Siegel, and Paulev´e 2023). Our benchmarks are composed of 120 instances divided into 60 small-scale instances and 60 largescale instances. The small-scale benchmark is directly sourced from (Thuillier et al. 2022), while the large-scale benchmark is generated based on a large-scale regulated metabolic network (Covert and Palsson 2002), following the methodology outlined in the aforementioned paper. Benchmarks are described in table 1. Instances of the large-scale benchmarks have approximately 10 times more variables and constraints than instances of the small-scale benchmarks. Linear constraints can be partitioned into about 200 The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8151 sets for small-scale instances and 140 sets for large-scale instances. Configuration. Each instance was executed on Haswell Intel Xeon E5-2680 v3 CPU at 2.5GHz and 128GB of RAM and 100 models were enumerated. Results We compared MERRINASP with Clingo[lpx], a state-of-theart ASP solver that handles quantifier-free linear constraints (Janhunen et al. 2017) by extending Clingo with a DPLLadapted simplex algorithm (Dutertre and De Moura 2006). Clingo[lpx] supports neither linear constraints partitioning nor universal linear constraints. We further conducted a comparative analysis of MERRINASP under four configurations: with and without partioning of linear constraints (denoted by P and ¬P), using the CEGAR approach over quantified linear constraints (denoted by Q) or using quantifier elimination (¬Q). Note that Clingo[lpx] is equivalent to the configuration [¬P, ¬Q], and that MERRINASP[P, Q] exploits all the properties described in previous sections. Comparison with Clingo[lpx]. As shown in Fig. 2a and 2b, on small-scale instances, MERRINASP and Clingo[lpx] solve the instances in a similar order of magnitude (10s in average for Clingo[lpx] and 30s in average for MERRINASP). On large-scale instances, MERRINASP outperforms Clingo[lpx] by a factor of 10 (see Figs. 2c and 2d). As shown in Fig. 2c, MERRINASP excels at finding the first model in large-scale satisfiable instances, outperforming Clingo[lpx] by a factor of 30. The difference in performance between the two solvers heavily depends on the enumeration phase. The CEGAR method requires many checks to ensure that a model of the Boolean abstraction is a model of the original OPT+qLP problem, even after reaching equisatisfiability. Consequently, while MERRINASP is significantly faster than Clingo[lpx] in finding the first model for satisfiable problems, both solvers exhibit similar performance in enumerating the other 99 models. Impact of partitioning (P). Figs. 2a and 2b suggest that linear constraints partitioning (P) increase the performance of MERRINASP by a factor of 1000 on satisfiable instances and a factor of 20 on unsatisfiable instances. No instance of the large-scale benchmark has finished in 48 hours for the not-partitioned configurations. 
Table 2 shows that while partitioning entails solving a larger number of linear optimization problems, the total number of linear optimization problems solved is reduced by a factor of 10 compared to without partitioning. On the small-scale satisfiable (resp. unsatisfiable) instances, MERRINASP[P, Q] solved in average 501 (resp. 252) linear optimization problems, against 9 944 (resp. 2 039) for MERRINASP[¬P, Q]. Impact of quantified linear constraints (Q). Our counter-example generation for universally quantified linear constraints consistently outperforms quantifier elimination reformulations by a factor of 3 on the small-scale and 20 large-scale benchmarks. From Table 2, we can see that twice fewer refinements are made when using quantified linear constraints (Q) compared to using quantifier elimination (¬Q). For large-scale (resp. small-scale) instances, these refinements were generated using 7 (resp. 2) times fewer calls to the linear solvers when using (Q) compared to (¬Q). Discussion. These results highlight that both linear constraint partitioning (P) and counter-example generation for universally quantified linear constraints (Q) have significant impacts on performance. Using both of them allows dividing computation time by 2 000 compared to not using any of them. They allow for generating more efficient refinements (gain of 2) while reducing the number of linear solver calls (gain of 7). This reduction is attributed to the partitioning approach, which enables solving independent linear optimization problems with a reduced number of constraints and variables. Their small size leads to faster computation of unsatisfiable and optimal cores for each counter-example, and their independence allows for reducing the number of verifications: a set that has passed the linear checks does not have to be checked again. MERRINASP is a prototype and does not use efficient approaches to instantiate and solve linear optimization problems. In contrast, Clingo[lpx] and SMT solvers, such as z3 (De Moura and Bjørner 2008), use an incremental implementation of the simplex algorithm to check linear constraints (Dutertre and De Moura 2006). Our approach is not dependent on the method used to solve linear constraints. This suggests that MERRINASP has the potential to further enhance its performance by integrating these algorithms. Conclusion and Future Work In this paper, we presented a novel approach for solving combinatorial optimization problems with Boolean logic and quantified linear constraints (OPT+qLP), based on Counter-Example-Guided Abstraction Refinement (CEGAR). Our implementation, MERRINASP, was developed using Answer Set Programming. To evaluate the effectiveness of our approach, we introduced a new benchmark of small-scale and large-scale OPT+qLP problems inspired by systems biology. We compared MERRINASP against a state-of-the-art ASP modulo quantifier-free linear constraints solver, Clingo[lpx]. The results highlight that MERRINASP scales significantly better than Clingo[lpx] on large-scale satisfiable instances, especially for the search of one model on satisfiable instances. The enumeration of models and unsatisfiable instances remain competitive with Clingo[lpx] but suggest room of improvement to improve the CEGAR approach and reduce the number of counter-example checks (Brummayer and Biere 2009; Lagniez et al. 2017). 
Looking ahead, we plan to automate the linear constraint partitioning process and explore the integration of our approach with the DPLL-based simplex algorithm used in Clingo[lpx]. Moreover, the integration of quantified Linear Real Arithmetics theory (LRA) (Reynolds, King, and Kuncak 2017) could provide complementary refinements using linear constraints, while our approach refines by the means of combinatorial constraints. These future advancements hold the promise of further enhancing the efficiency and applicability of CEGAR-based OPT+qLP solvers. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8152 Acknowledgments Work of KT and LP is supported by the French Agence Nationale pour la Recherche (ANR) in the scope of the project “BNeDiction” (grant number ANR-20-CE45-0001). References Baral, C. 2003. Knowledge Representation, Reasoning and Declarative Problem Solving. New York, NY, USA: Cambridge University Press. ISBN 0521818028. Barrett, C.; and Tinelli, C. 2018. Satisfiability modulo theories. Springer. Brummayer, R.; and Biere, A. 2008. Lemmas on Demand for the Extensional Theory of Arrays. In Proceedings of the Joint Workshops of the 6th International Workshop on Satisfiability Modulo Theories and 1st International Workshop on Bit-Precise Reasoning, SMT ’08/BPR ’08, 6–11. New York, NY, USA: Association for Computing Machinery. ISBN 9781605584409. Brummayer, R.; and Biere, A. 2009. Effective bit-width and under-approximation. In International Conference on Computer Aided Systems Theory, 304–311. Springer. Chevalier, S.; Froidevaux, C.; Paulev´e, L.; and Zinovyev, A. 2019. Synthesis of Boolean Networks from Biological Dynamical Constraints using Answer-Set Programming. In 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), 34–41. Cimatti, A.; Griggio, A.; and Sebastiani, R. 2011. Computing small unsatisfiable cores in satisfiability modulo theories. Journal of Artificial Intelligence Research, 40: 701– 728. Clarke, E.; Grumberg, O.; Jha, S.; Lu, Y.; and Veith, H. 2003. Counterexample-guided abstraction refinement for symbolic model checking. Journal of the ACM (JACM), 50(5): 752– 794. Covert, M. W.; and Palsson, B. Ø. 2002. Transcriptional Regulation in Constraints-based Metabolic Models of Escherichia coli* 210. Journal of Biological Chemistry, 277(31): 28058–28064. Covert, M. W.; Schilling, C. H.; and Palsson, B. 2001. Regulation of gene expression in flux balance models of metabolism. Journal of theoretical biology, 213(1): 73–88. De Moura, L.; and Bjørner, N. 2008. Z3: An efficient SMT solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems, 337– 340. Springer. Dutertre, B.; and De Moura, L. 2006. Integrating simplex with DPLL (T). Computer Science Laboratory, SRI International, Tech. Rep. SRI-CSL-06-01. Frioux, C.; Schaub, T.; Schellhorn, S.; Siegel, A.; and Wanko, P. 2019. Hybrid metabolic network completion. Theory and Practice of Logic Programming, 19(1): 83–108. Gebser, M.; Kaminski, R.; Kaufmann, B.; and Schaub, T. 2017. Multi-shot ASP solving with clingo. CoRR, abs/1705.09811. Janhunen, T.; Kaminski, R.; Ostrowski, M.; Schellhorn, S.; Wanko, P.; and Schaub, T. 2017. Clingo goes linear constraints over reals and integers. Theory and Practice of Logic Programming, 17(5-6): 872–888. Janota, M.; Klieber, W.; Marques-Silva, J.; and Clarke, E. 2016. Solving QBF with counterexample guided refinement. Artificial Intelligence, 234: 1–25. 
Khasidashvili, Z.; Korovin, K.; and Tsarkov, D. 2015. EPRbased k-induction with Counterexample Guided Abstraction Refinement. In GCAI, 137–150. Lagniez, J.-M.; Berre, D. L.; de Lima, T.; and Montmirail, V. 2017. A Recursive Shortcut for CEGAR: Application To The Modal Logic K Satisfiability Problem. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 674–680. Lougee-Heimer, R. 2003. The Common Optimization INterface for Operations Research: Promoting open-source software in the operations research community. IBM Journal of Research and Development, 47(1): 57–66. Mahout, M.; Carlson, R. P.; and Peres, S. 2020. Answer Set Programming for Computing Constraints-Based Elementary Flux Modes: Application to Escherichia coli Core Metabolism. Processes, 8(12). Nieuwenhuis, R.; Oliveras, A.; and Tinelli, C. 2006. Solving SAT and SAT modulo theories: From an abstract Davis–Putnam–Logemann–Loveland procedure to DPLL (T). Journal of the ACM (JACM), 53(6): 937–977. Orth, J. D.; Thiele, I.; and Palsson, B. Ø. 2010. What is flux balance analysis? Nature biotechnology, 28(3): 245–248. Reynolds, A.; King, T.; and Kuncak, V. 2017. Solving quantified linear arithmetic by counterexample-guided instantiation. Formal Methods in System Design, 51: 500–532. Thuillier, K.; Baroukh, C.; Bockmayr, A.; Cottret, L.; Paulev´e, L.; and Siegel, A. 2022. MERRIN: MEtabolic regulation rule INference from time series data. Bioinformatics, 38(Supplement 2): ii127–ii133. Thuillier, K.; Siegel, A.; and Paulev´e, L. 2023. CEGARbased approach for solving combinatorial optimization modulo quantified linear arithmetics problems – Code and Appendix. https://doi.org/10.5281/zenodo.10361533. Accessed: 2023-12. Videla, S.; Saez-Rodriguez, J.; Guziolowski, C.; and Siegel, A. 2017. caspo: a toolbox for automated reasoning on the response of logical signaling networks families. Bioinformatics, 33(6): 947–950. von Kamp, A.; and Klamt, S. 2014. Enumeration of Smallest Intervention Strategies in Genome-Scale Metabolic Networks. PLOS Computational Biology, 10(1): 1–13. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8153
Learning to Learn in Interactive Constraint Acquisition Dimos Tsouros, Senne Berden, Tias Guns KU Leuven, Belgium [email protected], [email protected], [email protected] Abstract Constraint Programming (CP) has been successfully used to model and solve complex combinatorial problems. However, modeling is often not trivial and requires expertise, which is a bottleneck to wider adoption. In Constraint Acquisition (CA), the goal is to assist the user by automatically learning the model. In (inter)active CA, this is done by interactively posting queries to the user, e.g., asking whether a partial solution satisfies their (unspecified) constraints or not. While interactive CA methods learn the constraints, the learning is related to symbolic concept learning, as the goal is to learn an exact representation. However, a large number of queries is required to learn the model, which is a major limitation. In this paper, we aim to alleviate this limitation by tightening the connection of CA and Machine Learning (ML), by, for the first time in interactive CA, exploiting statistical ML methods. We propose to use probabilistic classification models to guide interactive CA to generate more promising queries. We discuss how to train classifiers to predict whether a candidate expression from the bias is a constraint of the problem or not, using both relation-based and scope-based features. We then show how the predictions can be used in all layers of interactive CA: the query generation, the scope finding, and the lowest-level constraint finding. We experimentally evaluate our proposed methods using different classifiers and show that our methods greatly outperform the state of the art, decreasing the number of queries needed to converge by up to 72%. Introduction Constraint Programming (CP) is considered one of the foremost paradigms for solving combinatorial problems in artificial intelligence. In CP, the user declaratively states the constraints over a set of decision variables, defining the feasible solutions to their problem, and then a solver is used to solve it. Although CP has many successful applications on combinatorial problems from various domains, the modeling process is not always trivial and is limiting non-experts from using CP on complex problems. This is considered a major bottleneck for the wider adoption of CP (Freuder and O’Sullivan 2014; Freuder 2018). Motivated by the need to overcome this obstacle, assisting the user in modeling is regarded as an important reCopyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. search topic (Kolb 2016; De Raedt, Passerini, and Teso 2018; Freuder 2018; Lombardi and Milano 2018). In Constraint Acquisition (CA), which is an area where CP meets Machine Learning (ML), the model of a constraint problem is learned from a set of examples (i.e., assignments to the variables) of solutions, and possibly non-solutions. In passive CA, a set of pre-existing examples is given to the system, and using these examples a set of constraints is returned (Bessiere et al. 2004, 2005; Lallouet et al. 2010; Beldiceanu and Simonis 2012; Bessiere et al. 2017; Kumar, Kolb, and Guns 2022; Berden et al. 2022). On the other hand, active or interactive acquisition systems interact with the user to learn a target set of constraints, which represent the problem the user has in mind (Freuder and Wallace 1998; Bessiere et al. 2007, 2017). 
In the early days, most methods only made use of membership queries (is this a solution or not?) (Angluin 1988; Bessiere et al. 2007), while a more recent family of algorithms also makes use of partial membership queries (Arcangioli, Bessiere, and Lazaar 2016; Bessiere et al. 2013; Lazaar 2021; Tsouros and Stergiou 2020, 2021; Tsouros, Stergiou, and Bessiere 2019, 2020; Tsouros, Stergiou, and Sarigiannidis 2018). Such (partial) queries ask the user to classify (partial) assignments to the variables as (non-)solution. Recently, a way to guide the top-level query generation was introduced (Tsouros, Berden, and Guns 2023), based on counting-based probabilistic estimates of whether candidate expressions are constraints of the problem or not. Using this method, the number of queries required to converge decreased significantly. Despite the recent advancements in active CA, there are still significant drawbacks to overcome. One of the most important drawbacks is the large number of queries still required in order to find all constraints. We believe this is due to the search-based learning being mostly uninformed. During learning it is not aware of patterns that may appear in the constraints acquired so far, which can guide the rest of the process. An exception is the ANALAYZE&LEARN (Tsouros, Stergiou, and Bessiere 2019) function, which tries to detect potential cliques in the constraint network learned. In this work, we focus on this major limitation and contribute the following elements to alleviate it: • We show how probabilistic classification can be used to predict whether a candidate expression is a constraint of The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8154 the problem or not, based on the constraints learned so far and the ones removed from the candidate set at any point during the acquisition process. We use both relationbased and scope-based features to train ML models that are then exploited to guide interactive CA systems. • Previously it was shown that top-level query generation can be guided with (counting-based) probabilistic estimates. We show how such guidance can be extended to all layers of interactive CA where queries are asked. • We make a comprehensive experimental evaluation of our proposed methods, showing the effect of different classifiers, focusing on the number of queries vs. runtime for the ML-guided systems. We also show the effect of guiding all layers where queries are posted to the user. Background Let us first give some basic notions regarding constraint satisfaction problems. A constraint satisfaction problem (CSP) is a triple P = (X, D, C), consisting of: • a set of n variables X = {x1, x2, ..., xn}, representing the entities of the problem, • a set of n domains D = {D1, D2, ..., Dn}, where Di ⊂ Z is the finite set of values for xi, • a constraint set (also called constraint network) C = {c1, c2, ..., ct}. A constraint c is a pair (rel(c), var(c)), where var(c) ⊆X is the scope of the constraint, and rel(c) is a relation over the domains of the variables in var(c), that (implicitly) specifies which of their value assignments are allowed. |var(c)| is called the arity of the constraint. The constraint set C[Y ], where Y ⊆X, denotes the set of constraints from C whose scope is a subset of Y . The set of solutions of a constraint set C is denoted by sol(C). An example eY is an assignment on a set of variables Y ⊆ X. 
eY is rejected by a constraint c iff var(c) ⊆Y and the projection evar(c) of eY on the variables in the scope var(c) of the constraint is not in rel(c). A complete assignment e that is accepted by all the constraints in C is a solution to C, i.e., e ∈sol(C). An assignment eY is called a partial solution iff eY ∈sol(C[Y ]). κC(eY ) represents the subset of constraints from a constraint set C[Y ] that reject eY . In CA, the pair (X, D) is called the vocabulary of the problem at hand and is common knowledge shared by the user and the system. Besides the vocabulary, the learner is also given a language Γ consisting of fixed arity constraints. Using the vocabulary (X, D) and the constraint language Γ, the system generates the constraint bias B, which is the set of all expressions that are candidate constraints for the problem. The (unknown) target constraint set CT is a constraint set such that for every example e it holds that e ∈sol(CT ) iff e is a solution to the problem the user has in mind. The goal of CA is to learn a constraint set CL that is equivalent to the target constraint set CT . Algorithm 1: Generic Constraint Acquisition Template Input: X, D, B, Cin (X: the set of variables, D: the set of domains, B: the bias, Cin: an optional set of known constraints) Output: CL : the learned constraint network 1: CL ←Cin 2: while True do 3: e ←QGEN(CL, B) 4: if e = nil then return CL ▷converged 5: if ASK(e) = True then 6: B ←B \ κB(e) 7: else 8: (B, S) ←FINDSCOPE(e, B) 9: (B, CL) ←FINDC(S, CL, B) Interactive Constraint Acquisition In interactive CA, the system interacts with the user while learning the constraints. The classification question ASK(eX), asking the user if a complete assignment eX is a solution to the problem that the user has in mind, is called a membership query (Angluin 1988). A partial query ASK(eY ), with Y ⊂X, asks the user to determine if eY , which is an assignment in DY , is a partial solution or not, i.e., if eY ∈sol(CT [Y ]). A (complete or partial) query ASK(eY ) is called irredundant iff the answer is not implied by information already available. That is, it is irredundant iff eY is rejected by at least one constraint from the bias B, and not rejected by the network CL learned thus far. Algorithm 1 presents the generic process followed in interactive CA through partial queries. The learned set CL is first initialized either to the empty set or to a set of constraints given by the user that is known to be true (line 1). Then the main loop of the acquisition process begins, where first the system generates an irredundant example (line 3) and posts it as a query to the user (line 5). If the query is classified as positive, then the candidate expressions from B that violate it are removed (line 6). If the example is classified as negative, then the system tries to find one (or more) constraint(s) from CT that violates it. This is done in two steps. First, the scope of one or more violated constraints is found, by asking queries and possibly shrinking the bias along the way (line 8). Then, the relations of the constraints in this scope(s) are found, again by asking queries and possibly shrinking the bias (line 9). This process continues until the system converges. The acquisition process has converged on the learned network CL ⊆B iff CL agrees with the set of all labeled examples E, and for every other network C ⊆B that agrees with E, it holds that sol(C) = sol(CL). 
This is proved if no example could be generated at line 3, as in this case, all constraints in B are redundant. Notice that, interactive CA systems consist of three components where (increasingly simpler) queries are asked to the user: (1) Top-level query generation (line 3), (2) Finding the scope(s) of violated constraints (line 8), (3) Finding the relations of constraints in the scopes found (line 9). State-of-the-art algorithms like QuAcq (Bessiere et al. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8155 2013, 2023), MQuAcq (Tsouros and Stergiou 2020) and MQuAcq-2 (Tsouros, Stergiou, and Bessiere 2019) follow this template. Recently, a meta-algorithm named GrowAcq (Tsouros, Berden, and Guns 2023) was introduced, in order to handle large biases and to reduce the number of queries. The key idea is to call a CA algorithm on an increasingly large subset of the variables Y ⊆X, initially using a small number of variables, each time using a (growing) subset of the potentially huge bias. Guiding Query Generation When using GROWACQ, only a subset of B needs to be considered at a time, and query generation is often fast, leaving sufficient room for using optimization to find a good query in top-level query generation (line 3 of Algorithm 1). Query generation is formulated as a CSP with variables Y and constraints CL[Y ] ∧W ci∈B[Y ] ¬ci, in order to find an example eY . Hence, when the set of candidates B is reduced, query generation is simplified. As a result of this speed-up, in (Tsouros, Berden, and Guns 2023) a method to guide the top-level query generation was proposed. This method introduces an objective function that uses the prediction of a model M(c): e = arg max e X c∈B Je ̸∈sol({c})K · (1 −|Γ| · JM(c)K) (1) where J·K is the Iverson bracket converting True/False to 1/0. The objective function’s aims are twofold. First, it wants queries that lead to a positive answer to violate many constraints in the bias, shrinking it faster. Second, it wants constraints that lead to a negative answer to violate a small number of constraints from the bias, so that the actual constraint leading to the negative answer can be found more easily. For more exposition on how this objective function achieves these aims, we refer the reader to (Tsouros, Berden, and Guns 2023). Model M tries to determine for every constraint c whether violating or satisfying c would lead to the least amount of queries later on in the algorithm, based on a probabilistic estimate P(c ∈CT ) of how likely a constraint is to belong to the target set of constraints of the problem M(c) = 1 P(c ∈CT ) ≤log(|Y |)  (2) Using Probabilistic Classification To Guide Interactive CA The model M leverages a probabilistic estimate of the likelihood of a given candidate constraint belonging to the problem at hand. In (Tsouros, Berden, and Guns 2023), a simple counting-based method was utilized that only uses information about the relation of the constraints. That is, the number of times a constraint with relation rel(c) has been added to CL is counted, and then divided by the total number of times that such a constraint has been removed from B. While this technique provides basic guidance, we propose to use more advanced prediction techniques. Specifically, we propose to use statistical ML techniques, exploiting probabilistic classification in order to calculate P(c ∈CT ). In order to use probabilistic classification in this context, we need to build a dataset to learn from. 
We formally define a dataset D as a collection of N instances, each instance corresponding to a constraint. Each instance is a tuple (xi, yi), i ∈1, 2, ..., N, with xi being a vector of features of constraint ci, and yi being a (Boolean) label that denotes whether ci ∈CT . ID Name Type Description 1 Relation String Constraint relation 2 Arity Int Constraint arity 3 Has constant Bool If a constant value is present 4 Constant Int The constant value 5 Var name same Bool If all variables share the same name 6 Var Ndims same Bool If the number of dimensions of all variables is the same 7 Var Ndims max Int The maximum number of dimensions among variables 8 Var Ndims min Int The minimum number of dimensions among variables 9 Var dimi has Bool If dimension i is present for all variables 10 Var dimi same Bool If dimension i of all variables is the same 11 Var dimi max Int Maximum dimension i value among variables 12 Var dimi min Int Minimum dimension i value among variables 13 Var dimi avg Float Average dimension i value among variables 14 Var dimi spread Float Spread of dimension i values among variables Table 1: Features for each constraint To be able to use constraints as instances in our dataset, we need to have a feature representation of constraints. In this paper, for the feature representation, we use both relation-based and scope-based information, exploiting the information we have for the constraint’s relation, the variables of its scope, their indices, name, etc. The features we use are shown in Table 1. Note that variables can be given to the CA system in the form of a matrix or tensor. For example, a natural way to structure the variables representing the cell assignments Sudoku is in a 9x9 2-dimensional matrix. When variables are given in such a form, we represent in the features information about the indices of the variables occurring in the constraints, in each dimension of the tensor they were given in. This allows the system to detect patterns like all variables occurring in the same row or column, not being spread out in some dimension, etc., which are common The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8156 patterns in CP problems. The dataset D is grown incrementally throughout the CA process, as gradually more information is obtained about constraints from the initial bias. More concretely, whenever a constraint is removed from B (because they were verified to not be part of CT ), it is added to D with a negative label. On the other hand, whenever a constraint gets added to CL, it also gets added to D with a positive label. Since dataset D grows throughout the CA process, the probabilistic classifier should be updated regularly. How often this update should be performed is an important question, as this may affect the waiting time for the user when interacting with the CA system. In this paper, we retrain the classifier on the current dataset D right before every toplevel query generation (line 3 of Algorithm 1), exploiting all the collected information each time. Preliminary experiments showed that this yields the best results. Guiding All Layers of Interactive CA As (Tsouros, Berden, and Guns 2023) showed, guiding toplevel query generation can reduce the number of queries significantly and improve CA systems. 
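For concreteness, the following sketch shows how the growing dataset D and the retraining step could be implemented with scikit-learn, the library used in our experiments. The featurize argument is a hypothetical user-supplied function mapping a constraint to a numeric vector of the Table 1 features (e.g., with the relation one-hot encoded); the class is an illustration rather than our exact implementation.

from sklearn.ensemble import RandomForestClassifier

class ConstraintPredictor:
    def __init__(self, featurize):
        self.featurize = featurize   # constraint -> numeric feature vector
        self.X, self.y = [], []      # the dataset D, grown during acquisition

    def record(self, constraint, in_target):
        # called when a constraint joins C_L (True) or is removed from B (False)
        self.X.append(self.featurize(constraint))
        self.y.append(1 if in_target else 0)

    def probabilities(self, bias):
        # retrained from scratch on the current D before each top-level
        # query generation
        clf = RandomForestClassifier().fit(self.X, self.y)
        pos = list(clf.classes_).index(1)   # assumes both labels occur in D
        feats = [self.featurize(c) for c in bias]
        return clf.predict_proba(feats)[:, pos]  # P(c in C_T), aligned with bias

The returned estimates are then thresholded as in Equation (2) to obtain the predictions M(c) that drive the guidance.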
However, CA systems ask queries to the user also in the FINDSCOPE (line 8 of Algorithm 1) and FINDC (line 9 of Algorithm 1) components, which respectively try to find the scope of one or more violated constraints, and then all constraints on that scope. While guiding the generation of top-level queries delivers significant advantages, neglecting guidance within these two layers is a missed opportunity. In the rest of this section, we show how to use the same logic for guiding query generation in the remaining two layers of CA systems. Guiding FindScope The functions used in the literature (Bessiere et al. 2013; Tsouros and Stergiou 2020; Bessiere et al. 2023) to find the scope of violated constraints after a negative answer from the user (line 8 of Algorithm 1) work in a similar way. We will use FINDSCOPE from (Bessiere et al. 2023) (shown in Algorithm 2) to demonstrate our method, but the same logic applies to all existing in the literature. FINDSCOPE methods recursively map the problem of finding a constraint to a simpler problem by removing blocks of variable assignments from the original query (the one asked in line 3 of Algorithm 1, to which the user answered “no”) and asking partial queries to the user. The removed block must contain at least one variable while not including all the present variables, in order to lead to an irredundant query. If after the removal of some variables, the answer of the user changes to “yes”, then the removed block contains at least one variable from the scope of a violated constraint. When this happens, FINDSCOPE focuses on refining this block, adding some variable assignments back to the query. When, after repeating this procedure, the size of the considered block becomes 1 (i.e., the block contains a single variable), this variable is found to be in the scope of a violated constraint we seek (line 5 of Algorithm 2). In practice, the problem that must be solved in each step is to find a set of variables Y1 ⊂Y , splitting the previously Algorithm 2: FINDSCOPE Input: e, R, Y , B (e: the example, R, Y : sets of variables, B: the bias) Output: B, Scope ( B: the updated bias, Scopes: a set of variables, the scope of a constraint in CT 1: function FINDSCOPE(e, R, Y , B) 2: if κB(eR) ̸= ∅then 3: if ASK(eR) = “yes” then B ←B \ κB(eR); 4: else return (B, ∅); 5: if |Y | = 1 then return (B, Y ); 6: split Y into < Y1, Y2 > such that |Y1| = ⌈|Y |/2⌉; 7: if κB(eR∪Y1) = κB(eR∪Y ) then S1 ←∅ 8: else (B, S1) ←FindScope(e, R ∪Y1, Y2, B); 9: if κB(eR∪S1) = κB(eR∪Y ) then S2 ←∅ 10: else (B, S2) ←FindScope(e, R ∪S1, Y1, B); 11: return (B, S1 ∪S2); considered set of variables Y into two parts (line 6). Then, Y1 is used in the next query posted to the user while Y2 is the removed set of variables that can be taken into account in the next queries. Depending on the answer of the user we can then update B and decide which part of the problem to focus on next. Existing FINDSCOPE functions naively choose Y1 ⊂Y , by splitting the set Y in half. The advantage of this approach is that a logarithmic number of steps is achieved. However, no information about the violated constraints from B is used, and no guidance is utilized. The set of variables to remove, or keep, in the assignment is usually chosen randomly (Bessiere et al. 2023). The problem of finding a Y1 ⊂Y , so that the FINDSCOPE procedure is correct and will lead to finding the scope of a violated constraint, can be formulated as: find Y1 s.t. 
∅⊊Y1 ⊊Y (3) That is because, in each query, we get information either for Y1, if the answer of the user remains negative, or for Y2, if the answer of the user changes to positive. Thus we want both Y1, Y2 ⊋∅. This problem can be formulated as a CSP with boolean variables BV , with |BV | = |Y |, deciding whether a variable xi ∈Y is included in Y1. The CSP contains the following constraint: 0 < X bvi∈BV JbviK < |Y | (4) However, just choosing any (arbitrarily sized) subset of Y can result in many unneeded recursive calls and a large number of queries. Now that we have formally formulated this problem, we can modify the constraints and/or add an objective function in order to improve the performance of FINDSCOPE. In order to achieve the logarithmic complexity from (Bessiere et al. 2023), we can impose the following constraint in our CSP formulation: X bvi∈BV JbviK = |Y | 2  (5) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8157 We propose to use this CSP formulation of the problem, and to integrate the objective function from Equation (1) to guide FINDSCOPE queries. Notice that, the queries asked in FINDSCOPE take into account the set R, which is the set R∪Y1 from the previous recursive call, where we split Y . In addition, we now are not generating assignments, but deciding which variable assignments from the existing example e to include in the next query ASK(eR∪Y1), Thus, we propose to maximize the following objective function: X c∈κB(e) Jvar(c) ∈R ∪Y1K · (1 −|Γ| · JM(c)K) (6) We also slightly modify the constraint from Equation (5), as, when deciding which constraints to violate in the next query, the number of variables these constraints participate in could be lower than half (but still needs to be at least one, as in Equation (4)). As a result, the constraint becomes 0 < X bvi∈BV JbviK ≤ |Y | 2  (7) Correctness We now prove that FINDSCOPE is still correct when our modification of line 6 is used, as long as the constraint from Equation (4) holds. Proposition 1. Given the assumption that CT is representable by B, FINDSCOPE (with our modification at line 6) is correct. Proof. Soundness We will now prove that given an example eY , FINDSCOPE will return a set of variables S, such that there exists at least one violated constraint c ∈CT s.t. ∀xi ∈ S | xi ∈var(c) . An invariant of FindScope is that the example e violates at least one constraint whose scope is a subset of R∪Y (i.e., ASK(R ∪Y ) = “no”). κCT (eR∪Y ) ⊋∅ (8) That is because it is called only when the example eY is classified as non-solution by the user and the recursive calls at lines 8 and 10 are reached only if the conditions at lines 7 and 9 respectively are false. In addition, FindScope reaches line 5 only in the case that eR does not violate any constraint from CT (i.e., ASK(eR) = “yes” at line 3). κCT (eR) = ∅ (9) In FindScope variables are returned (and added in S) only at line 5, in the case Y is a singleton. (8), (9) =⇒∃Y ′ ⊆Y s.t. Y ′ ⊆var(c) | c ∈κCT (e) |Y |=1 ====⇒ line 5 Y ⊆var(c) | c ∈κCT (e) (10) Thus, for any xi ∈S we know that xi ∈var(c) | c ∈CT . Completeness We will now prove that given an example eY , the set of variables S returned by FINDSCOPE will be the full scope of a constraint in CT , i.e. there exists at least one constraint c ∈CT for which S = var(c). FINDSCOPE in Algorithm 2 has been proven to be complete in (Bessiere et al. 2023). The key part in that is line 6, splitting Y into 2 parts. 
The requirement is that in no recursive call do we end up with Y = ∅, so that the procedure continues searching in different subsets of variables in each call. This means that in the recursive call of line 8, Y2 ≠ ∅, and in the recursive call of line 10, we must have Y1 ≠ ∅. Due to the constraint imposed in Equation (4), we know that Y1 ⊋ ∅ and also that

Y1 ⊊ Y ⟹ Y1 ⊊ Y1 ∪ Y2 ⟹ Y2 ≠ ∅    (11)

Thus, this constraint guarantees that Y1, Y2 ≠ ∅, meaning that FINDSCOPE is still complete.

Guiding FindC

After the system has located the scope of a violated constraint, it calls the function FINDC (Bessiere et al. 2013, 2023) to find the relation of the violated constraint. To locate this constraint, FINDC asks partial queries to the user on the scope returned by FINDSCOPE. Alternative assignments are used for the variables in the given scope, to discriminate which of the candidate constraints with that scope is part of the target problem. In order to do so, FINDC functions currently use the following query generation step:

find e′_S ∈ sol(C_L[S]) ∧ ∅ ⊊ κ_B(e′_S) ⊊ Δ    (12)

with S being the scope found in the previous step and Δ the set of candidates for this scope, initially equal to the set of violated constraints in the previous example, κ_B(e_S). The objective function typically used in this step, in order to again achieve logarithmic complexity in the number of queries posted, is to try to halve the number of violated candidates, minimizing a slack variable b such that

b = ⌈|Δ|/2⌉ − |κ_B(e′_S)|    (13)

We propose to replace this objective function with one that guides query generation in the same way as in Equations (1) and (6):

Σ_{c ∈ Δ} ⟦e_S ∉ sol({c})⟧ · (1 − |Γ| · ⟦M(c)⟧)    (14)

Experimental Evaluation

In this section, we perform an experimental evaluation of our proposed approaches, aiming to answer the following research questions: (Q1) How well can ML classifiers predict whether a candidate constraint is part of the target constraint network? (Q2) What is the effect of using probabilistic classification to guide query generation in CA? (Q3) What is the added benefit of also guiding the other layers of CA?

Benchmarks

We selected the benchmarks for our experiments to cover different cases, including some puzzle problems that are typically used as benchmarks to evaluate CA systems, some problems closer to real-world applications (a subset of which have a more regular structure), and one randomly generated problem. The latter was included to evaluate the performance of our system when it cannot learn anything. More specifically, we used the following benchmarks for the experimental evaluation:

Random. We used a problem with 100 variables and domains of size 5. We generated a random target network with 495 binary constraints from the language Γ = {≥, ≤, <, >, ≠, =}. The bias was initialized with 19,800 constraints, using the same language.

9x9 Sudoku. The Sudoku puzzle is an n²×n² grid (n = 3 for the case we used), which must be completed in such a way that all the rows, columns, and n² non-overlapping n×n squares contain distinct numbers. This gives a vocabulary with 81 variables and domains of size 9. The target constraint network consists of 810 ≠ constraints. The bias was initialized with 12,960 binary constraints, using the language Γ = {≥, ≤, <, >, ≠, =}.

Jigsaw Sudoku. The Jigsaw Sudoku is a variant of Sudoku in which the 3×3 boxes are replaced by irregular shapes. It consists of 81 variables with domains of size 9.
The target network contains 811 binary ≠ constraints on rows, columns, and shapes. The bias B was constructed using the language Γ = {≥, ≤, <, >, ≠, =} and contains 19,440 binary constraints.

Exam Timetabling. There are ns semesters, each containing cps courses, and we want to schedule the exams of the courses over a period of d days. On each day, we have t timeslots and r rooms available for exams. The variables are the courses (|X| = ns · cps), having as domains the timeslots they can be assigned to (D_i = {1, ..., r·t·d}). There are ≠ constraints between each pair of exams. Also, two courses in the same semester cannot be examined on the same day, which is expressed by the constraints ⌊x_i/spd⌋ ≠ ⌊x_j/spd⌋ for all i, j in the same semester, where spd denotes the number of slots per day. We used an instance with ns = 8, cps = 6, d = 10 and r = 3. This results in a model with 48 variables and domains of size 90. C_T consists of 1,128 constraints. The language given is Γ = {≥, ≤, <, >, ≠, =, ⌊x_i/spd⌋ ≠ ⌊x_j/spd⌋}, creating a bias of size 7,896.

Nurse Rostering. There are n nurses, s shifts per day, ns nurses per shift, and d days. The goal is to create a schedule assigning a nurse to every existing shift. The variables are the shifts, of which there are d · s · ns in total, modeled as a 3D matrix. The domains of the variables are the nurses (D_{x_i} = {1, ..., n}). Each shift in a day must be assigned to a different nurse, and the last shift of a day must be assigned to a different nurse than the first shift of the next day. In the instance used in the experiments, we have d = 7, s = 3 and ns = 5, with n = 18 available nurses. This results in |X| = 105 with domains {1, ..., 18}. C_T consists of 885 ≠ constraints. The bias was built using the language Γ = {≥, ≤, <, >, ≠, =}, resulting in |B| = 32,760.

Experimental Settings

All experiments were conducted on a system carrying an Intel(R) Core(TM) i7-2600 CPU, 3.40GHz clock speed, with 16 GB of RAM. The guiding techniques are integrated within GROWACQ, utilizing MQUACQ-2 as the underlying algorithm. We compare our approach with the counting method from (Tsouros, Berden, and Guns 2023) ("count"), as well as with GROWACQ without guiding ("base"). In the latter, the objective in query generation is simply to maximize the number of violated candidate constraints. We use the following classifiers: Random Forests (RF), Gaussian Naive Bayes (GNB), Multi-layer Perceptron (MLP), and Support Vector Machines (SVM). We used RF and GNB in their default settings, while we tuned the most important hyperparameters for MLP and SVM. For tuning, we used the final dataset for all benchmarks, having labeled all candidate constraints. A grid search, coupled with 10-fold cross-validation, was conducted, using balanced accuracy as the metric to address class imbalance. Hyperparameter combinations surpassing a 10-second training time were omitted to ensure relevance in interactive scenarios.

All methods and benchmarks were implemented¹ in Python using the CPMpy constraint programming and modeling library (Guns 2019). OR-Tools' CP-SAT solver (Perron, Didier, and Gay 2023) was used. For query generation, we used PQ-Gen from (Tsouros, Berden, and Guns 2023), with a cutoff of 1 second to return the best query found. The ML classifiers were implemented using the Scikit-Learn library (Pedregosa et al. 2011).

¹Our code is available online at: https://github.com/Dimosts/ActiveConLearn
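To make this setup concrete, the following is a minimal sketch (not our released code) of training a probabilistic classifier on the part of the bias that is already labeled at some point of the acquisition process, and scoring the remaining candidates; the feature matrix and labels are synthetic placeholders for the constraint features described earlier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 8))                    # one feature row per candidate constraint
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # synthetic "in target network" labels

n_labeled = 100                             # candidates labeled so far during CA
clf = RandomForestClassifier(random_state=0)
clf.fit(X[:n_labeled], y[:n_labeled])       # assumes both classes occur so far

# P(c in C_T) for the still-unlabeled candidates, used to guide query generation
pos_col = list(clf.classes_).index(1)
proba = clf.predict_proba(X[n_labeled:])[:, pos_col]
print(proba[:5])
```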
The comparison is based on the number of queries, which is crucial for the applicability of interactive CA systems in real-world scenarios, and on the maximum user waiting time, which is of paramount importance when human users are involved. The results presented for each benchmark and each algorithm are the means of 10 runs.

Results

Q1. In order to answer this question, we performed a 10-fold cross-validation with each classifier on all the datasets, and present the averages. As metrics for the comparison, we use Accuracy, Balanced Accuracy (the datasets are highly unbalanced, with typically < 10% of B having a positive label), and F1-score. The results are shown in Figure 1.

Figure 1: Classification results with different classifiers.

We notice that all classifiers considered achieve decent accuracy and balanced accuracy, with GNB performing slightly worse than the rest, and MLPs performing best. Focusing on F1-score, GNB presents quite poor results, but the rest of the classifiers still achieve a score higher than 70%. The results indicate that, based on the way the dataset of constraints is created and the features used, it is possible to successfully train and use ML models to predict whether a constraint is part of the target network or not.

However, in order to use the classifiers to assist during the acquisition process, guiding it to generate promising queries based on the predictions, it is important to evaluate how they perform not only when the labels for all candidate constraints are available, but also when only parts of the dataset are available (as is the case during the CA process). Thus, we conducted an experiment to evaluate how the classifiers perform when only a percentage of the dataset is available. We used an increasing portion of the dataset as the training set, to evaluate performance at different stages of the acquisition process, with the rest of the candidates forming the test set. The order of the constraints in the dataset was decided based on the order in which they were added in 5 different runs of CA systems. The averages are presented in Figure 2.

Figure 2: Classification results when only part of the dataset is available for training. (a) Accuracy; (b) Balanced Accuracy; (c) F1 Score.

We observe that RF achieves the best results on all metrics at the beginning, when only a small portion of the dataset is labeled, with MLP and SVMs reaching the same performance only when most of the dataset is available. GNB shows poor performance throughout the process, with very low accuracy and F1-score.

Q2. Let us now focus on the effect of using probabilistic classification to guide query generation in CA. Figure 3 presents the results when using the different classifiers, compared to guiding with the simple counting method from (Tsouros, Berden, and Guns 2023) (Count) and GROWACQ without guiding (Base). In all benchmarks except Random and Jigsaw Sudoku, the decrease in the number of queries is significant compared to both the baseline and the simple counting method, for most classifiers. When SVMs are used, the performance is similar to the baseline, because SVMs have lower accuracy in the earlier stages of the acquisition process and thus do not offer any meaningful guidance early enough.
GNB presents decent results on some benchmarks, but its overfitting shows in the Random benchmark, where guiding should not detect any patterns and should therefore perform similarly to the baseline, as is the case for the rest of the classifiers. Using RF and MLPs is the most promising, giving the best results in all benchmarks, with RF being superior in some cases. We attribute RF's superior performance to the fact that it already achieves good prediction performance when only a small portion of the constraints is labeled, i.e., at the beginning of the acquisition process (Figure 2).

Regarding the waiting time for the user, it includes 1 second for query generation (based on the imposed cutoff), with the rest of the waiting time consisting mainly of training and prediction time. As a result, we see higher waiting times when SVMs or MLPs are used, which need a larger training time, while GROWACQ with no guiding (Base) and the simple counting method (Count) have similar waiting times because they do not need any training. We also observe that the training time for GNB and RF is small and very reasonable for interactive settings, as the maximum observed waiting time is less than 2s.

Overall, using RF to predict probabilities for the candidate constraints, and then guiding query generation based on these predictions, appears to be the best choice, both in terms of the number of queries and the user waiting time. It can decrease the number of queries required by up to 70% compared to the baseline (and up to 56% compared to the counting method), with the average decrease on the benchmarks that have structure (i.e., all except Random) being 52% (and 32% compared to counting). At the same time, the increase in user waiting time is minor and acceptable for interactive scenarios.

Figure 3: Results when guiding query generation using probabilistic classification.

Q3. We now evaluate the effect of also guiding the other layers of interactive CA where queries are asked of the user. We only use RF, as it presented the best performance in the previous experiment, and we compare it against the baseline (without guiding) and against guiding query generation only. Figure 4 presents the results.

We see a comparatively small but consistent additional improvement when guiding all layers compared to guiding only top-level query generation. The improvement is relatively small because guiding query generation already leads to significantly fewer queries being needed in FINDSCOPE and FINDC. However, the additional decrease still reaches up to 22% (in Exam Timetabling), with the average decrease in the number of queries being 10%. In addition, we observe a slight reduction in the maximum user waiting time when we use the predictions to guide all layers of CA. We believe this is because, by prioritizing the removal of candidate constraints in all layers, B shrinks during FINDSCOPE and FINDC. Thus, fewer top-level queries are needed, leading to fewer retraining steps.

Figure 4: Evaluating the effect of guiding all layers of CA.

Conclusions

One bottleneck of major importance in interactive CA is the number of queries needed to converge.
The search-based learning used in CA is often unable to detect patterns in the problem while learning the constraints, and thus does not use such patterns to better guide its search. In this work, we tighten the connection between ML and CA by using, for the first time, statistical ML methods that can learn during the acquisition process and predict whether a candidate constraint is part of the problem or not. We propose to use probabilistic classification, using the predictions from the ML models to guide the search process of CA. In doing so, we extend recent work that guided query generation using probabilities derived via a simple counting-based method. We also extend guidance to the other components of CA that post queries to the user, further reducing the number of queries. Our experimental evaluation showed that the number of queries was decreased by up to 72% compared to the baseline, greatly outperforming the state of the art. These findings confirm that statistical ML methods can indeed detect patterns in constraint models while they are being learned. This can be a stepping stone to further reducing the number of queries in interactive CA. Future work should investigate the use of online learning in this setting, as data becomes available gradually. Other opportunities include learning a prior distribution over constraints and transfer learning across different problems. We also think that our closer integration with statistical ML techniques can be a stepping stone towards handling wrong answers from the user, which is an important part of future work in order to make interactive CA more realistic. Finally, extending interactive CA systems to also learn global constraints and linear inequalities with constants is important for expanding the reach of learnable problems.

Acknowledgments

This research received funding from the European Research Council (ERC) under the EU Horizon 2020 research and innovation programme (Grant No 101002802, CHAT-Opt) and from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101070149, project Tuples.

References

Angluin, D. 1988. Queries and concept learning. Machine Learning, 2(4): 319–342.
Arcangioli, R.; Bessiere, C.; and Lazaar, N. 2016. Multiple Constraint Acquisition. In IJCAI: International Joint Conference on Artificial Intelligence, 698–704.
Beldiceanu, N.; and Simonis, H. 2012. A model seeker: Extracting global constraint models from positive examples. In Principles and Practice of Constraint Programming, 141–157. Springer.
Berden, S.; Kumar, M.; Kolb, S.; and Guns, T. 2022. Learning MAX-SAT Models from Examples using Genetic Algorithms and Knowledge Compilation. In 28th International Conference on Principles and Practice of Constraint Programming (CP 2022).
Bessiere, C.; Carbonnel, C.; Dries, A.; Hebrard, E.; Katsirelos, G.; Narodytska, N.; Quimper, C.-G.; Stergiou, K.; Tsouros, D. C.; and Walsh, T. 2023. Learning constraints through partial queries. Artificial Intelligence, 319: 103896.
Bessiere, C.; Coletta, R.; Freuder, E. C.; and O'Sullivan, B. 2004. Leveraging the learning power of examples in automated constraint acquisition. In International Conference on Principles and Practice of Constraint Programming, 123–137. Springer.
Bessiere, C.; Coletta, R.; Hebrard, E.; Katsirelos, G.; Lazaar, N.; Narodytska, N.; Quimper, C.-G.; Walsh, T.; et al. 2013. Constraint Acquisition via Partial Queries. In IJCAI, volume 13, 475–481.
Bessiere, C.; Coletta, R.; Koriche, F.; and O'Sullivan, B. 2005. A SAT-based version space algorithm for acquiring constraint satisfaction problems. In European Conference on Machine Learning, 23–34. Springer.
Bessiere, C.; Coletta, R.; O'Sullivan, B.; Paulin, M.; et al. 2007. Query-Driven Constraint Acquisition. In IJCAI, volume 7, 50–55.
Bessiere, C.; Koriche, F.; Lazaar, N.; and O'Sullivan, B. 2017. Constraint acquisition. Artificial Intelligence, 244: 315–342.
De Raedt, L.; Passerini, A.; and Teso, S. 2018. Learning constraints from examples. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence.
Freuder, E. C. 2018. Progress towards the Holy Grail. Constraints, 23(2): 158–171.
Freuder, E. C.; and O'Sullivan, B. 2014. Grand challenges for constraint programming. Constraints, 19(2): 150–162.
Freuder, E. C.; and Wallace, R. J. 1998. Suggestion strategies for constraint-based matchmaker agents. In International Conference on Principles and Practice of Constraint Programming, 192–204. Springer.
Guns, T. 2019. Increasing modeling language convenience with a universal n-dimensional array, CPpy as python-embedded example. In Proceedings of the 18th Workshop on Constraint Modelling and Reformulation at CP (ModRef 2019), volume 19.
Kolb, S. M. 2016. Learning constraints and optimization criteria. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
Kumar, M.; Kolb, S.; and Guns, T. 2022. Learning Constraint Programming Models from Data Using Generate-And-Aggregate. In 28th International Conference on Principles and Practice of Constraint Programming (CP 2022). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
Lallouet, A.; Lopez, M.; Martin, L.; and Vrain, C. 2010. On learning constraint problems. In Tools with Artificial Intelligence (ICTAI), 2010 22nd IEEE International Conference on, volume 1, 45–52. IEEE.
Lazaar, N. 2021. Parallel Constraint Acquisition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3860–3867.
Lombardi, M.; and Milano, M. 2018. Boosting Combinatorial Problem Modeling with Machine Learning. arXiv preprint arXiv:1807.05517.
Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12: 2825–2830.
Perron, L.; Didier, F.; and Gay, S. 2023. The CP-SAT-LP Solver (Invited Talk). In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
Tsouros, D.; Berden, S.; and Guns, T. 2023. Guided Bottom-Up Interactive Constraint Acquisition. In International Conference on Principles and Practice of Constraint Programming.
Tsouros, D. C.; and Stergiou, K. 2020. Efficient multiple constraint acquisition. Constraints, 25(3): 180–225.
Tsouros, D. C.; and Stergiou, K. 2021. Learning Max-CSPs via Active Constraint Acquisition. In 27th International Conference on Principles and Practice of Constraint Programming (CP 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
Tsouros, D. C.; Stergiou, K.; and Bessiere, C. 2019. Structure-Driven Multiple Constraint Acquisition. In International Conference on Principles and Practice of Constraint Programming, 709–725. Springer.
Tsouros, D. C.; Stergiou, K.; and Bessiere, C. 2020. Omissions in Constraint Acquisition. In International Conference on Principles and Practice of Constraint Programming, 935–951. Springer.
Tsouros, D. C.; Stergiou, K.; and Sarigiannidis, P. G. 2018. Efficient Methods for Constraint Acquisition. In 24th International Conference on Principles and Practice of Constraint Programming.
GSO-Net: Grid Surface Optimization via Learning Geometric Constraints

Chaoyun Wang1,2,3, Jingmin Xin1,2,3, Nanning Zheng1,2,3, Caigui Jiang1,2,3*
1National Key Laboratory of Human-Machine Hybrid Augmented Intelligence
2National Engineering Research Center of Visual Information and Applications
3Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University
[email protected], {jxin, nnzheng}@mail.xjtu.edu.cn, [email protected]

*Corresponding Author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In the context of surface representations, we find a natural structural similarity between grid surfaces and image data. Motivated by this observation, we propose a novel approach: encoding grid surfaces as geometric images and using image processing methods to address surface optimization problems. As a result, we have created the first dataset for grid surface optimization and devised a learning-based grid surface optimization network specifically tailored to geometric images, addressing the surface optimization problem through a paradigm of data-driven learning of geometric constraints. We conduct extensive experiments on developable surface optimization, surface flattening, and surface denoising tasks using the designed network and datasets. The results demonstrate that our proposed method not only addresses the surface optimization problem better than traditional numerical optimization methods, especially for complex surfaces, but also boosts the optimization speed by multiple orders of magnitude. This pioneering study successfully applies deep learning methods to the field of surface optimization and provides a new solution paradigm for similar tasks, offering inspiration and guidance for future developments in discrete surface optimization. The code and dataset are available at https://github.com/chaoyunwang/GSO-Net.

Introduction

The field of computer graphics is rife with discrete surface optimization problems, spanning applications such as 3D modeling, digital fabrication, physical simulation, and medical imaging. These problems encompass a variety of tasks, such as the optimization of developable surfaces, surface flattening, and surface denoising (unless otherwise emphasized, the term "surface" in this paper refers to a "discrete surface" or "mesh" in general). Despite their different objectives, these tasks share a common foundation: optimizing certain geometric properties of the surface, such as flatness, smoothness, and curvature, while adhering to a set of geometric constraints such as surface continuity and boundary conditions.

Conventional surface optimization heavily relies on various numerical optimization techniques, including methods used in nonlinear optimization. While these methods have proven effective in dealing with small-scale or specific types of problems, they can struggle with large-scale and complex challenges, often encountering inefficiencies, convergence difficulties, and numerous hyperparameter issues (Nocedal and Wright 1999; Ma et al. 2021). In contrast, deep learning has made significant strides in the field of image processing. This success primarily stems from the regular structure of image data, which allows specialized computations using convolutional kernels. These kernels perform sliding-window computations, leading to efficient extraction and utilization of image features.
This facilitates learning complex patterns from large amounts of data, an approach that may offer insights or alternatives to conventional optimization methods on discrete surfaces, in particular the grid surfaces considered in this paper. As (Yuan, Cao, and Shi 2023) note in their survey of developable surfaces, there is scant research applying artificial intelligence to developable surfaces, and they suggest building large-scale datasets for developable surface optimization.

In recent years, the application of deep learning in mesh processing has primarily centered on tasks like classification, segmentation, retrieval, and denoising, with a particular focus on surfaces represented by irregular triangular meshes. To handle such irregularly structured data, two main strategies are adopted: designing learning networks suited to irregularly structured data, or transforming the data into regularly structured data. The former, represented by the MeshCNN (Hanocka et al. 2019) and SubdivNet (Hu et al. 2022) mesh processing networks, can operate directly on triangle patches and is mainly used for semantic segmentation and recognition problems; however, such networks are not applicable to surface optimization problems. In the latter line of research on regularized data processing, some studies parameterize the mesh to create geometric images for processing (Gu, Gortler, and Hoppe 2002); these tend to focus on mesh reconstruction and compression (Sinha, Bai, and Ramani 2016; Wang et al. 2018; Ren et al. 2023). There is also work that samples mesh patches to build regularly structured data, which is common in surface denoising tasks (Wang, Liu, and Tong 2016; Shen et al. 2022; Zhao et al. 2022). While these representations can be processed with standard deep learning algorithms, from a data transformation perspective they are lossy and cannot meet the precision requirements of tasks such as surface optimization.

From the current research landscape, it is clear that existing network architectures cannot be directly applied to surface optimization problems using deep learning methods. This is mainly due to the complex, irregular connectivity of triangular mesh representations, which is difficult for current deep learning algorithms to handle with high precision. Inspired by the work of (Jiang et al. 2020, 2021) on quadrilateral mesh surfaces, we find that quadrilateral meshes can be better defined and constrained than triangular meshes in surface optimization tasks such as developable surface optimization and surface flattening. We also notice that the structure of a regular quadrilateral mesh bears a natural similarity to the structure of image data. As shown in Figure 1, we encode the vertex position information of a grid surface into the RGB color information of image pixels. This process loses no information: the two representations are interchangeable and well-suited to lossless compression and data processing.

Figure 1: A grid surface and its corresponding geometric image.

This method allows for a lossless transformation of surface information, thus turning the surface optimization problem into an image processing problem. This conversion fits the prevalent deep learning image processing methods. Additionally, we can use the predefined geometric constraint properties in surface optimization as loss functions for the network.
This allows the network to learn surface optimization in a self-supervised manner, greatly improving the model's generalization capabilities and task versatility, and offers the potential to surpass numerical optimization algorithms in optimization quality. In summary, the main contributions of this paper are as follows:

• We exploit the data structure similarity between grid surfaces and images to encode grid surfaces as geometric images, using image processing methods to solve grid surface optimization problems.
• We have created a high-quality dataset of grid surfaces and corresponding results optimized by numerical optimization methods, for research, testing, and comparison.
• We design a grid surface optimization network that learns geometric constraints in a self-supervised manner, whose generality and effectiveness we validate on several tasks. Compared with the traditional numerical optimization method, the optimization speed is improved by multiple orders of magnitude.

Grid Surface Dataset

For deep learning methods, datasets with large and diverse samples are crucial. Since no relevant datasets are available for surface optimization, we investigate the construction of the grid surface optimization dataset used in this paper. For surface construction, we utilize the Bézier surface construction algorithm proposed by (Aumann 2003). This method allows us to build surfaces of various shapes by using different quantities and arrangements of 3D control points, along with different degrees of Bézier curves. The grid surface is subsequently obtained by sampling 64×64 points on the continuous Bézier surfaces. To diversify the surfaces in the dataset, we focus on two essential surface features, Gaussian curvature and shape, and ensure that surfaces in each feature interval are included in the dataset through a surface generation and screening procedure. Specifically, in terms of Gaussian curvature features, we use K_P(M), K_N(M), K_A(M) to characterize a surface M:

K_P(M) = (1/n) Σ_{k∈M} max(κ_G(p_k), 0)
K_N(M) = (1/n) Σ_{k∈M} min(κ_G(p_k), 0)    (1)
K_A(M) = (1/n) Σ_{k∈M} |κ_G(p_k)|

where n is the number of vertices of the surface M and κ_G(p_k) is the discrete Gaussian curvature at vertex p_k, computed with reference to (Meyer et al. 2003). In terms of shape features, we use σ² to characterize a surface M:

σ² = (1/4) Σ_{i=1}^{4} (X_i − X̄)²    (2)

where X_i denotes the four boundary lengths of the grid surface as shown in Figure 1, and X̄ is the mean of these boundary lengths. Following the above approach, we generate 10,401 grid surfaces for deep learning training and testing. Moreover, we include the results obtained with the optimization algorithm proposed by (Jiang et al. 2020) on the developable surface optimization and surface flattening tasks, as a reference for the traditional numerical optimization (TNO) method. Details of the dataset are given in the supplementary material.

Method

In image processing tasks, digital images are usually encoded using 8 bits, requiring only 1/256 precision. However, higher precision is required when expressing spatial grid vertex positions in the form of image data. In our experiments, we found that to ensure the basic smoothness requirements of the surface, for example with a grid size of 64×64, the required precision needs to reach approximately 1/12500, about 50 times higher than image precision. This places stringent demands on the learning capabilities of deep learning networks. Moreover, unlike image data, which only represents color information, grid vertices represent actual spatial coordinates. After undergoing any orthogonal transformation, the information represented by the surface remains unchanged; thus, this image encoding method also needs to take into account the invariance of the surface representation under orthogonal transformations. All these challenges call for the careful design of appropriate loss functions and network architectures.
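The following minimal sketch illustrates the encoding and the normalization round-trip. The max-extent normalization shown here is only one plausible reading of the point-cloud-style preprocessing of (Qi et al. 2017), and the random surface is a stand-in for a real grid surface.

```python
import numpy as np

vertices = np.random.rand(64, 64, 3)               # hypothetical 64x64 grid surface

def normalize(v):
    center = v.reshape(-1, 3).mean(axis=0)         # move the centroid to the origin
    scale = np.abs(v - center).max()               # scale coordinates into [-1, 1]
    return (v - center) / scale, center, scale     # keep parameters for inversion

def denormalize(img, center, scale):
    return img * scale + center

geo_img, c, s = normalize(vertices)                # "pixels" are float coordinates
recon = denormalize(geo_img, c, s)
print(np.abs(recon - vertices).max())              # ~1e-16: the encoding is lossless
```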
Grid Surface Optimization

Figure 2: The pipeline of grid surface optimization. Different colored dashed lines are used in the Input and Output boxes to distinguish surface optimization tasks, and the middle GSO-Net box can be used for these tasks in general.

In designing the grid surface optimization network, we draw on the image encoding-decoding structure commonly employed in deep learning networks. Our aim is to learn the mapping of the vertices of the grid surface in an end-to-end manner. Building on this design concept, we relate this task to image denoising, where the noise can be understood as the vertex displacement needed for surface optimization. Figure 2 illustrates our proposed grid surface optimization pipeline for the different tasks. Within the Input box, the three tasks we design are indicated by different colored dashed lines, and the same differentiation is applied in the Output box. The GSO-Net box in the middle represents the designed grid surface optimization network, which handles these surface optimization tasks generically.

Specifically, in the pipeline for the developable surface optimization task, the input is a non-developable surface whose vertex positions are read and normalized before being encoded into the geometric image data format. Then, with the designed GSO-Net, a residual image representing the vertex displacements of the surface is obtained. By adding the residuals to the input image, we obtain an optimized geometric image representing a developable surface. Once the developability requirements are met, the developable surface can be output directly. For input surfaces with high Gaussian curvature values, a fine network can repeat the network operations described above, allowing further optimization that ultimately results in a refined developable surface.

In the surface flattening task, a similar process is followed as in the developable surface optimization task. The difference lies in ensuring that the output is planar; thus, the number of channels in the network output is set to 2. In GSO-Net, the blue dashed line in the input-output residual structure involves a Flatten-init for surface flattening, where the output represents the displacement from the initial mesh vertices.

In the surface denoising task, the processing flow is similar: the input is a noisy surface and the output is a smooth surface with the noise filtered out. Through GSO-Net, the basic noise features are learned and removed via the residual structure.
Here is a specific explanation of the network module details shown in Figure 2:

Network Architecture. In GSO-Net, the "Conv" bar represents the initial convolutional layer, and the red bar represents the feature extraction module, following the IMDB module of the image super-resolution network of (Hui et al. 2019); skip connections are used between the encoder and decoder structures. The specific parameters are analyzed in the experimental section.

Normalization and Inverse Normalization. Before feeding the surface data into the network, we apply a preprocessing method suitable for point clouds rather than images (Qi et al. 2017) to normalize the data; the stored normalization parameters are then used to inverse-normalize the output.

Flatten-init. To ensure that the initialization plane is as close as possible to the desired surface flattening result and to reduce the difficulty of network optimization, we design an initialization method that makes the average mesh edge length of the 2D plane equal to that of the 3D surface in each sampling direction.

Residual Image. Following the image denoising network of (Zhang et al. 2017), we utilize a residual learning structure between input and output. This structure allows the network to focus on learning the displacement of mesh vertices between the input surface and the optimized surface, which facilitates learning and convergence.

Tanh Activation Function. To avoid oscillations during training, we limit the range of network outputs. Inspired by the approach of (Zhang et al. 2020) for point cloud networks, we apply a tanh activation function to the output, restricting vertex movement to the range [-1, 1].

Coarse-to-Fine. Coarse-to-fine is a computational strategy commonly used in computer graphics. For the optimization of developable surfaces, considering the complexity of the surfaces as well as the network's learning capacity and precision, we adopt a cascaded optimization approach similar to the mesh denoising networks of (Wang, Liu, and Tong 2016; Zhao et al. 2019; Shen et al. 2022). Using coarse and fine networks results in a two-stage optimization process for the surface, leading to better optimization performance.
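A hedged PyTorch sketch of the residual output head combining the last three modules above is given below; the two plain convolutions are a stand-in for the actual IMDB feature extraction blocks, so this is an illustration rather than the released GSO-Net.

```python
import torch
import torch.nn as nn

class ResidualHead(nn.Module):
    """Predict a bounded vertex-displacement field and add it to the input."""
    def __init__(self, channels=3, feat=16):
        super().__init__()
        self.body = nn.Sequential(              # placeholder for the IMDB blocks
            nn.Conv2d(channels, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, channels, 3, padding=1),
        )

    def forward(self, x):
        residual = torch.tanh(self.body(x))     # displacements limited to [-1, 1]
        return x + residual                     # residual image + input image

x = torch.rand(1, 3, 64, 64)                    # normalized geometric image
print(ResidualHead()(x).shape)                  # torch.Size([1, 3, 64, 64])
```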
Loss Functions

During the training of the grid surface optimization network, we exploit geometric constraints and prior knowledge related to surface optimization, transforming them into forms that are computationally convenient on geometric images. Here, the input geometric image is denoted by I, the optimized output geometric image by O, and the number of vertices of each grid by N. Additionally, c indexes the image channels.

Figure 3: Discrete Gaussian curvature computation on geometric images.

We refer to several geometric properties that the optimized developable surface should have: it should be developable, smooth, and close to the original surface. These correspond to the loss functions loss_gc, loss_fair and loss_in used for network training, introduced below.

loss_gc is based on the fact that the Gaussian curvature at each vertex of a developable surface is zero. Figure 3 illustrates the computation of discrete Gaussian curvature on geometric images. To calculate the curvature at a point O(x, y), one first needs to determine the edge vectors d⃗_i from O(x, y) to its six neighbors, the angles θ_i formed by adjacent edges meeting at the point, and the areas A_i of the surrounding triangles, with θ_i = arccos(d⃗_i · d⃗_{i+1} / (|d⃗_i| |d⃗_{i+1}|)). The discrete Gaussian curvature κ_G(O(x, y)) is then computed with reference to (Meyer et al. 2003):

κ_G(O(x, y)) = (2π − Σ_{i=1}^{6} θ_i) / ((1/3) Σ_{i=1}^{6} A_i)

To facilitate efficient loss computation on geometric images, the edge vectors are obtained via convolutions with constant kernels. The constant convolution kernels W_gc(i) shown in Figure 4(a) are convolved with the output geometric image, yielding the edge vectors d⃗_i required for computing κ_G(O(x, y)) in Figure 3. The formulas for the convolution computation of d⃗_i and the loss function loss_gc are:

d_i^(c) = Σ_m Σ_n W_gc(i)(m, n) · O^(c)(x + m, y + n),  d⃗_i = (d_i^(1), d_i^(2), d_i^(3))

loss_gc = Σ_x Σ_y |κ_G(O(x, y))|    (3)

Figure 4: Constant convolution kernels for computing geometric losses. (a) Gaussian curvature; (b) fairness; (c) isometry.

loss_fair is designed to ensure the smoothness of the output surface. We follow the smoothness term used in geometric optimization, which forces adjacent vertices to lie along a straight line. The constant convolution kernel W_fair depicted in Figure 4(b) computes loss_fair on the geometric image:

f^(c)(x, y) = Σ_m Σ_n W_fair(m, n) · O^(c)(x + m, y + n)
loss_fair = Σ_c Σ_x Σ_y |f^(c)(x, y)|    (4)

loss_in is designed to keep the optimized surface close to the original surface. We use the mean squared error (MSE) loss commonly used in image processing to compute loss_in on the geometric image:

loss_in = Σ_x Σ_y (O(x, y) − I(x, y))²    (5)

In the developable surface optimization task, the overall loss function Loss_developable is:

Loss_developable = w_in · loss_in + w_fair · loss_fair + w_gc · loss_gc    (6)

where w_in, w_fair, w_gc are the weighting coefficients of the corresponding loss terms. By properly setting these weighting coefficients, the trained model can achieve favorable overall performance in terms of surface developability, proximity, and smoothness.
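To illustrate the constant-kernel trick, the sketch below extracts the six edge vectors with fixed 3×3 convolutions and assembles an angle-deficit curvature loss in PyTorch. The neighbor offsets and the area normalization are our reading of Figure 3, Figure 4(a) and (Meyer et al. 2003), so treat the details as assumptions rather than the exact released implementation.

```python
import torch
import torch.nn.functional as F

# assumed neighbour offsets (dy, dx) for the triangulated grid of Figure 3
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 0), (1, -1), (0, -1)]

def edge_vectors(O):
    """O: (B, 3, H, W) geometric image -> (B, 6, 3, H-2, W-2) edge vectors d_i."""
    kernels = []
    for dy, dx in OFFSETS:
        k = torch.zeros(3, 3)
        k[1 + dy, 1 + dx] = 1.0        # +1 on the neighbour ...
        k[1, 1] -= 1.0                 # ... -1 on the centre: d_i = O_nbr - O
        kernels.append(k)
    K = torch.stack(kernels)[:, None]  # (6, 1, 3, 3), applied per coordinate channel
    B, C, H, W = O.shape
    d = F.conv2d(O.reshape(B * C, 1, H, W), K)   # (B*C, 6, H-2, W-2)
    return d.reshape(B, C, 6, H - 2, W - 2).permute(0, 2, 1, 3, 4)

def loss_gc(O, eps=1e-8):
    d = edge_vectors(O)
    d_next = torch.roll(d, -1, dims=1)                      # cyclically next edge
    cos = (d * d_next).sum(2) / (d.norm(dim=2) * d_next.norm(dim=2) + eps)
    theta = torch.acos(cos.clamp(-1 + eps, 1 - eps))        # angles between edges
    area = 0.5 * torch.cross(d, d_next, dim=2).norm(dim=2)  # triangle areas A_i
    k_G = (2 * torch.pi - theta.sum(1)) / (area.sum(1) / 3 + eps)
    return k_G.abs().sum()                                  # Equation (3)

print(loss_gc(torch.rand(1, 3, 64, 64)))
```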
In the surface flattening task, the isometry constraint term defined in (Jiang et al. 2020) is used and converted to loss_iso on geometric images, analogously to the computation of loss_gc. The constant convolution kernels W_iso(i) in Figure 4(c) are used to compute the quadrilateral diagonal vectors, in the same manner as the computation of d⃗_i in Equation (3). The overall loss is:

Loss_flatten = w_iso · loss_iso + w_fair · loss_fair    (7)

In the surface denoising task, our goal is to optimize the discrete surface so that it is similar to the input surface while ensuring fairness. To achieve this, we reuse the losses loss_in and loss_fair, as described in Equations (5) and (4). The overall loss is:

Loss_denoise = w_in · loss_in + w_fair · loss_fair    (8)

Evaluation Metrics

We employ evaluation metrics tailored to the different tasks. Specifically, for the developable surface optimization task, we use proximity and developability metrics; in the surface flattening task, we apply an isometry metric; and for the surface denoising task, we utilize a normal angle difference metric. Let the sample size be denoted by n, let the surfaces before and after optimization be M and M′ respectively, and let N be the number of quadrilateral faces.

Proximity. We indicate shape similarity by the average Hausdorff distance d_H, taken relative to the length of the bounding box diagonal; the average over the dataset is:

d_H-a = (1/n) Σ_{i=1}^{n} d_H(M_i)    (9)

Developability. We evaluate developability optimization using K_A-r, the reduction rate of the surface's mean absolute Gaussian curvature, with dataset average K_A-r-a:

K_A-r(M) = (K_A(M) − K_A(M′)) / K_A(M),  K_A-r-a = (1/n) Σ_{i=1}^{n} K_A-r(M_i)    (10)

Isometry. We use the isometry loss per quadrilateral cell as the evaluation metric:

loss_iso-cell = loss_iso / N    (11)

Normal Angle Difference. We refer to the average normal angle difference θ, commonly used in mesh denoising tasks. We compute the dataset means θ_noisy-a and θ_opt-a before and after optimization, and the average reduction rate θ_opt-r-a as the evaluation metric:

θ_noisy-a = (1/n) Σ_{i=1}^{n} θ(M_i),  θ_opt-a = (1/n) Σ_{i=1}^{n} θ(M′_i),  θ_opt-r-a = (1/n) Σ_{i=1}^{n} (θ(M_i) − θ(M′_i)) / θ(M_i)    (12)
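As an illustration, the proximity metric can be computed along the following lines. This is a sketch under our reading of d_H as the symmetric Hausdorff distance between vertex sets, normalized by the bounding-box diagonal; the random surfaces are placeholders.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def proximity(M, M_opt):
    """Hausdorff distance between Nx3 vertex arrays, relative to bbox diagonal."""
    d = max(directed_hausdorff(M, M_opt)[0], directed_hausdorff(M_opt, M)[0])
    diag = np.linalg.norm(M.max(axis=0) - M.min(axis=0))
    return d / diag

M = np.random.rand(64 * 64, 3)                   # hypothetical input surface
M_opt = M + 1e-3 * np.random.randn(*M.shape)     # hypothetical optimized surface
print(proximity(M, M_opt))
```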
Experiments and Results

In the experiments, we first analyze and select the specific parameters of GSO-Net, using the developable surface optimization task as a reference, and then conduct extensive experiments with this general network on the three surface optimization tasks described above. Finally, the experimental results of the proposed method and of traditional numerical optimization methods are analyzed comprehensively, both quantitatively and qualitatively. More details of the experiments and results are given in the supplementary material.

Developable Surface Optimization

In the developable surface optimization task, we initially use the optimization results of traditional numerical optimization methods as labels. This allows networks with different parameters to learn developable surface optimization in a supervised manner, enabling us to compare and select an appropriate baseline network architecture. Subsequently, we compare the results of networks trained in a self-supervised manner with different optimization strategies, and compare them with traditional numerical optimization methods. Several example surface optimization results are also presented.

Figure 5: Comparison of optimization loss and parameter size for different network architecture parameters.

Network Architecture Parameters. For our task, we determine the number of IMDB blocks (NB) and the number of channels in the feature maps (NC) in each layer of the network by examining their effect on surface optimization quality, with the size of the model parameters as an additional criterion. The specific experimental results are shown in Figure 5. When NC is set to 16X (16, 32, 64, 128) and NB to 4, we achieve strong learning performance with a parameter size of only 2.82M. Setting NC to 32X yields a slight further improvement in performance, but at the cost of a significant increase in parameter size.

Experimental Result Comparison. In this experiment, we compare the traditional numerical optimization method with the proposed GSO-Net, and verify the effectiveness of the coarse-to-fine optimization strategy. Assuming that the developability metric obtained by numerical optimization is already excellent, we adjust the optimization parameters of the model so that loss_gc is close to the result of the numerical method. We then use the proximity metric to evaluate and compare the different methods.

Table 1: Developable surface optimization metrics for different methods.

Methods       TNO       Net-S    Net-C    Net-CF
loss_gc ↓     0.0283    0.0273   0.0417   0.0291
K_A-r-a ↑     0.8888    0.8776   0.8300   0.8768
d_H-a ↓       0.823%    0.608%   0.414%   0.517%
time ↓        38.256s   0.013s   0.013s   0.026s

Table 1 shows the developable surface optimization results. In the "Methods" row, TNO denotes the traditional numerical optimization method; Net-C and Net-CF denote the coarse and fine models of the coarse-to-fine strategy for GSO-Net, respectively; and Net-S denotes the single model without this strategy. In our experiments, the loss_gc of the pre-optimized dataset is around 0.307. As shown in Table 1, all methods reduce loss_gc by about 91% (except Net-C, which is an intermediate model), and the performance of these methods on K_A-r-a is also fairly consistent.

Compared with TNO, the GSO-Net-based methods show improvements in terms of d_H-a. Specifically: (1) Net-CF reduces d_H-a from 0.823% to 0.517%, a relative decrease of 37.24% compared to TNO; (2) even without the coarse-to-fine optimization strategy, Net-S reduces d_H-a from 0.823% to 0.608%, a relative decrease of 26.12%; (3) if the requirement on the Gaussian curvature value of the developable surface is not overly stringent, Net-C reduces d_H-a from 0.823% to 0.414%, a relative reduction of about 50%, while K_A-r-a is reduced by only 5.88%. Comparing the results of Net-S and Net-CF shows that the proposed coarse-to-fine strategy reduces d_H-a from 0.608% to 0.517%, balancing developability while significantly enhancing the similarity between the surfaces before and after optimization. The last row compares the time consumption of the different methods: since the proposed method transforms the surface optimization problem into an image processing problem, its speed is improved by multiple orders of magnitude compared with TNO.
The results in Table 1 represent the average optimization performance over all surfaces, including surfaces with various values of Gaussian curvature. Figure 6 displays the optimization results for surfaces in different ranges of Gaussian curvature values. In this figure, the X-axis corresponds to surfaces with varying Gaussian curvature values, and the left and right Y-axes represent K_A-r-a and d_H-a, respectively.

Figure 6: Line plots of the K_A-r-a and d_H-a statistics over different surface Gaussian curvature intervals.

From Figure 6, the K_A-r-a curves of the proposed methods and TNO are observed to be closely aligned, corresponding to the K_A-r-a values in Table 1. As the Gaussian curvature of the surface increases, the difference in d_H-a between the proposed method and TNO grows rapidly. This demonstrates that the proposed method is significantly more adept than TNO at handling complex surface optimization challenges. Additionally, the advantage of the proposed coarse-to-fine strategy is apparent in the curves and corresponds to the results documented in Table 1. However, it is worth noting that when optimizing surfaces with particularly low Gaussian curvature values, i.e., simple surfaces, TNO exhibits a higher K_A-r-a than the proposed method. This observation is in line with the inherent strengths and weaknesses of TNO, as discussed in the introduction.

Figure 7: Comparison of example developable surface optimization results for different methods. (a) raw input surfaces; (b) TNO; (c) Net-S; (d) Net-C; (e) Net-CF.

Figure 7 exhibits a collection of example developable surface optimization results using the different methods. Each optimized surface is accompanied by a deformation heatmap, in which the color indicates the value of d_H, the distance from the vertex to the original surface; darker colors indicate greater deformation relative to the original surface. A clear observation from the heatmaps is that the coloration for the proposed method is considerably lighter than that of the traditional method. This visual evidence corresponds to the quantitative d_H-a results in Table 1 and demonstrates that the proposed method reduces surface deformation while preserving the developability of the optimization results.

Surface Flatten

In the surface flattening task, we use the same data and networks as in the developable surface optimization task. Table 2 shows the results for the different optimization methods. In Table 2, the "Methods" row details the strategies compared in this paper. Init denotes Flatten-init and serves as a baseline reference. TNO stands for the traditional numerical optimization approach commonly used in the field. Net denotes the proposed GSO-Net. Building on Net, Net-W leverages prior knowledge to compute a dynamic weight w_iso: in areas of high Gaussian curvature, where the vertex itself is non-developable, the isometry loss is necessarily large.
The value of w_iso at an output point O(x, y) is one minus the normalized Gaussian curvature magnitude at that point:

w_iso(x, y) = 1 − normalization(|κ_G(O)|)(x, y)    (13)

Table 2: Surface flattening metrics for different methods.

Methods          Init      TNO       Net       Net-W
loss_iso-cell ↓  6.21e-3   3.30e-4   2.94e-4   2.46e-4
time ↓           0.002s    4.235s    0.015s    0.015s

As can be seen from Table 2, compared to the Init baseline, all methods achieve a loss reduction of about 95%. Our proposed Net outperforms the traditional method TNO, and Net-W further reduces the loss by 16.32% relative to Net and by 25.45% relative to TNO. Moreover, as the last row shows, the proposed methods reduce the time consumption by multiple orders of magnitude.

Figure 8 displays heatmaps of several surface flattening optimization results, where the color corresponds to loss_iso-cell. The variation in color depth among the different methods is consistent with the results in Table 2. Notably, the contours produced by Net-W are closer to those of TNO than Net's are, showing that TNO provides better contouring, while Net demonstrates superior performance in capturing detail. Net-W effectively combines the strengths of both to achieve the best overall results.

Surface Denoise

To further verify the generality of the proposed method, we conduct experiments on the surface denoising task. We add random noise to the original surfaces and feed the resulting noisy surfaces to the network for training and learning denoising. The noisy surface is constructed as:

M_noisy = M + N(μ, σ)

where N(μ, σ) represents Gaussian noise. We set μ = 0 and generate different noise levels by adjusting the parameter σ. Considering that the grid resolution is about 1/64, we set the maximum value of σ to 0.015 and the minimum to 0.001 to study the effect of surface noise removal under different noise levels.
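A tiny sketch of this noise model (illustrative only; the clean surface is a random placeholder) is:

```python
import numpy as np

def add_noise(vertices, sigma, seed=0):
    rng = np.random.default_rng(seed)
    return vertices + rng.normal(0.0, sigma, size=vertices.shape)

M = np.random.rand(64, 64, 3)                 # hypothetical clean grid surface
for sigma in (0.001, 0.005, 0.010, 0.015):
    M_noisy = add_noise(M, sigma)
    print(sigma, float(np.abs(M_noisy - M).max()))
```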
To implement this, we constructed a high-quality grid surface dataset and designed a grid surface optimization network, GSO-Net, applicable to general tasks. This network employs geometric constraint loss for self-supervised learning and can be effectively used for developable surface optimization, surface flattening, and surface denoising tasks. Experimental results show that our proposed method outperforms traditional numerical optimization methods in surface optimization, especially for complex surfaces, and reduces the optimization time by multiple orders of magnitude. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8170 Acknowledgments This work was supported by National Key Research and Development Program of China under Grant (No.2022YFB2502903), National Natural Science Foundation of China under Grant (No.62088102), Basic research expenses of Xi’an Jiaotong University(No.xzy022023109). References Aumann, G. 2003. A simple algorithm for designing developable B´ezier surfaces. Computer Aided Geometric Design, 20(8-9): 601–619. Gu, X.; Gortler, S. J.; and Hoppe, H. 2002. Geometry images. In Proceedings of the 29th annual conference on Computer graphics and interactive techniques, 355–361. Hanocka, R.; Hertz, A.; Fish, N.; Giryes, R.; Fleishman, S.; and Cohen-Or, D. 2019. Meshcnn: a network with an edge. ACM Transactions on Graphics (ToG), 38(4): 1–12. Hu, S.-M.; Liu, Z.-N.; Guo, M.-H.; Cai, J.-X.; Huang, J.; Mu, T.-J.; and Martin, R. R. 2022. Subdivision-based mesh convolution networks. ACM Transactions on Graphics (TOG), 41(3): 1–16. Hui, Z.; Gao, X.; Yang, Y.; and Wang, X. 2019. Lightweight image super-resolution with information multi-distillation network. In Proceedings of the 27th acm international conference on multimedia, 2024–2032. Jiang, C.; Wang, C.; Rist, F.; Wallner, J.; and Pottmann, H. 2020. Quad-mesh based isometric mappings and developable surfaces. ACM Transactions on Graphics (TOG), 39(4): 128–1. Jiang, C.; Wang, H.; Inza, V. C.; Dellinger, F.; Rist, F.; Wallner, J.; and Pottmann, H. 2021. Using isometries for computational design and fabrication. ACM Transactions on Graphics (TOG), 40(4): 1–12. Ma, W.; Liu, Z.; Kudyshev, Z. A.; Boltasseva, A.; Cai, W.; and Liu, Y. 2021. Deep learning for the design of photonic structures. Nature Photonics, 15(2): 77–90. Meyer, M.; Desbrun, M.; Schr¨oder, P.; and Barr, A. H. 2003. Discrete differential-geometry operators for triangulated 2manifolds. In Visualization and mathematics III, 35–57. Springer. Nocedal, J.; and Wright, S. J. 1999. Numerical optimization. Springer. Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 652–660. Ren, J.; An, N.; Zhang, Y.; Wang, D.; Sun, Z.; Lin, C.; Cui, W.; Wang, W.; Zhou, Y.; Zhang, W.; et al. 2023. SUGAR: Spherical Ultrafast Graph Attention Framework for Cortical Surface Registration. arXiv preprint arXiv:2307.00511. Shen, Y.; Fu, H.; Du, Z.; Chen, X.; Burnaev, E.; Zorin, D.; Zhou, K.; and Zheng, Y. 2022. GCN-denoiser: mesh denoising with graph convolutional networks. ACM Transactions on Graphics (TOG), 41(1): 1–14. Sinha, A.; Bai, J.; and Ramani, K. 2016. Deep learning 3D shape surfaces using geometry images. In Computer Vision– ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VI 14, 223–240. Springer. Wang, H.; Guo, J.; Yan, D.-M.; Quan, W.; and Zhang, X. 
2024
908
18,749
Encoding Constraints as Binary Constraint Networks Satisfying BTP
Ruiwei Wang
School of Computing, National University of Singapore
[email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Recently, the Binary Constraint Tree (BCT), a tree structured Binary Constraint Network (BCN), has been shown to be more succinct than various ad-hoc constraints. In this paper, we investigate the modelling power of a well-known tractable hybrid class generalizing BCT, i.e., the class of BCNs satisfying the Broken Triangle Property (BTP), called BTP Networks (BTPNs). We show that the consistency checker of a BTPN can be computed by a polysize monotone circuit; thus, some global constraints, such as the AllDifferent and Linear constraints, cannot be encoded as polysize BTPN. Our study then reveals that BTPN is strictly more succinct than the DNNF constraint and all 14 ad-hoc constraints discussed in (Wang and Yap 2023), such as the context-free grammar, BCT and smart table constraints. Furthermore, we also show that BTPN is as powerful as DNNF in terms of computing various operations and queries. In addition, we prove that it is NP-hard to determine the minimum sized BTPN encoding a constraint.

1 Introduction
Many tractable classes (Carbonnel and Cooper 2016) of Constraint Networks (CNs) have been proposed to study the tractability of Constraint Satisfaction Problems (CSPs). However, there is comparatively little work on encoding constraints with these tractable classes. Recently, (Wang and Yap 2022b, 2023) showed that the Binary Constraint Tree (BCT) can be used to model various ad-hoc constraints in polysize, where BCT is the class of tree structured Binary CNs (BCNs) (Freuder 1982). In this paper, we investigate the class of BCNs satisfying the Broken Triangle Property (BTP), called BTP Networks (BTPNs), which is a well-known tractable hybrid class generalizing BCT (Cooper, Jeavons, and Salamon 2010).
(Dechter 1990) showed that the expressiveness of BCNs can be significantly improved with hidden variables. In addition, various binary encodings with hidden variables have been proposed to encode table constraints (Yap, Xia, and Wang 2020) as BCNs, such as the dual encoding (Dechter and Pearl 1989), hidden variable encoding (Rossi, Petrie, and Dhar 1990), double encoding (Stergiou and Walsh 1999) and bipartite encoding (Wang and Yap 2020). Although all constraint relations can be represented as table constraints, the tabulation of constraints may cause exponential blowup. For example, the Multi-valued Decision Diagram (MDD) can be exponentially smaller than its table representation. As such, (Wang and Yap 2022b) proposed binary encodings to encode MDD constraints as BCTs. They showed that BCT is strictly more succinct than many ad-hoc constraints (Wang and Yap 2022a, 2023). A natural question that arises is whether we can define more succinct constraint representations with other tractable classes.
Usually, the tractable classes of CNs can be classified into language classes, structural classes and hybrid classes (Carbonnel and Cooper 2016). Language classes restrict the language of constraint relations; for example, the largest tractable language class requires that all constraint relations are preserved by a weak near-unanimity (WNU) operation (Bulatov 2017; Zhuk 2017). Structural classes restrict the structure of constraint graphs, such as bounded fractional hypertree width (Grohe and Marx 2014).
Hybrid classes simultaneously consider both constraint relations and graphs, such as the classes defined by forbidden patterns (Cohen et al. 2012; Cooper and Živný 2016). Different from most previous works, which are mainly about the time complexity of CSPs and focus on the CSP dichotomy conjecture (Feder and Vardi 1998), we investigate the tractable classes from a constraint modelling perspective.
In the paper, we first show that it is unlikely that the succinctness of BCT can be improved by directly using known tractable structural and language classes. Then we investigate the modelling power of the class of BTPNs. To be specific, we prove that the consistency checker (Bessiere et al. 2009) of the BTPN constraint can be computed by a polysize monotone circuit; thus, some global constraints cannot be encoded as polysize BTPN, such as the Circuit (Beldiceanu and Contejean 1994), Channelling (Cheng et al. 1999) and Linear (Yuanlin and Yap 2000) constraints. Meanwhile, we introduce a binary encoding that transforms the DNNF constraint (Gange and Stuckey 2012) into a polysize BTPN, and show that BTPN is strictly more succinct than the DNNF constraint and all 14 ad-hoc constraints discussed in (Wang and Yap 2023), such as the context-free grammar (Quimper and Walsh 2006), BCT and smart table (Mairy, Deville, and Lecoutre 2015) constraints. Moreover, we prove that it is NP-hard to minimize the size of the BTPN encoding a constraint. In addition, we also show that BTPN is as powerful as DNNF in terms of computing operations and queries.

2 Preliminaries
A Constraint Network (CN) is a pair (X, C) where X is a set of variables, D(x) is the domain of a variable x ∈ X, and C is a set of constraints. A literal of x is a pair (x, a). A tuple over variables {x_i1, ..., x_ir} is a set of literals {(x_i1, a_1), ..., (x_ir, a_r)}. Each constraint c has a scope scp(c) ⊆ X and a relation rel(c), where rel(c) is a set of tuples over scp(c). Then |scp(c)| is the arity of c, and c is a binary constraint if |scp(c)| = 2. A CN P is a Binary CN (BCN) if the arity of every constraint in P is at most 2. A BCN is normalized if all binary constraints have different scopes. In this paper, we only consider normalized BCNs.
Given any set V of variables and a set τ of literals, we use τ[V] = {(x, a) ∈ τ | x ∈ V} to denote a subset of τ, and T[V] = {τ[V] | τ ∈ T} is the projection of the tuples T on V. Then, for any CN P = (X, C) and literals L, P[L] is the sub-problem (X, C ∪ {c^x_L | x ∈ X}) of P, where c^x_L is a constraint c over {x} with rel(c) = {{(x, a)} | (x, a) ∈ L}.
A tuple τ over variables V is consistent on a CN P = (X, C) if τ[scp(c)] ∈ rel(c) for every c ∈ C with scp(c) ⊆ V. τ is an assignment if a ∈ D(x) for every (x, a) ∈ τ. A literal (x, a) is valid on P if x ∈ X, a ∈ D(x) and {(x, a)} is consistent on P. A tuple τ is valid on P if all literals in τ are valid on P. In addition, an assignment τ over X is a solution of P if τ is consistent on P. sol(X, C), or sol(P), denotes the solutions of P. A CN P is satisfiable if sol(P) ≠ ∅. A Constraint Satisfaction Problem (CSP) is to check whether a CN is satisfiable.
A literal (x, a) is Generalized Arc Consistent (GAC) on a constraint c if rel(c) has a valid tuple τ including (x, a). A variable x is GAC on c if all valid literals of x are GAC on c. Then c is GAC if all variables in scp(c) are GAC on c. A CN is GAC if all its constraints are GAC. (A minimal data-structure sketch of these notions follows.)
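The following tiny sketch grounds the preliminaries in code; the dict-based representation is an illustrative assumption, not a representation used by the paper. It encodes the running Figure 1a network and checks tuple consistency as defined above.

```python
# A tiny, assumption-level encoding of the preliminaries: domains map each
# variable to its value set, and a normalized BCN keeps one relation (set of
# allowed value pairs) per two-variable scope.
domains = {"x": {0, 1, 3}, "y": {1, 2}, "z": {2, 3}}
relations = {
    (u, v): {(a, b) for a in domains[u] for b in domains[v] if a != b}
    for (u, v) in [("x", "y"), ("x", "z"), ("y", "z")]
}  # the Figure 1a network: x != y, x != z, y != z

def consistent(tau, relations):
    """A tuple tau (dict: variable -> value) is consistent on the network if
    it satisfies every constraint whose scope lies inside tau's variables."""
    for (u, v), rel in relations.items():
        if u in tau and v in tau and (tau[u], tau[v]) not in rel:
            return False
    return True

print(consistent({"x": 0, "y": 1, "z": 2}, relations))  # True: a solution
print(consistent({"x": 1, "y": 1}, relations))          # False: violates x != y
```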
Enforcing GAC on a CN P means finding the maximum set L of valid literals such that P[L] is GAC, and replacing P with P[L]. Additionally, GAC on BCNs is also called Arc Consistency (AC). A BCN P is solvable by AC if P is satisfiable exactly when all variables still have valid literals after enforcing AC on P.
A CN P = (X, C) encodes a constraint c if scp(c) ⊆ X and sol(P)[scp(c)] = rel(c), where the variables in scp(c) and X \ scp(c) are called original variables and hidden variables, respectively. The size of a BCN is the sum of its variable domain sizes and constraint sizes, where the size of a binary (unary) constraint c is defined as |rel(c)|. A class S of BCNs can encode a constraint c in polysize if S has a BCN encoding c whose size is polynomial in the size of c. Then S is conservative if for any P ∈ S and literals L, P[L] is in S.

3 Language and Structural Classes
In this section, we discuss why it is unlikely that the modelling power of the BCT constraint can be improved by directly using existing tractable language and structural classes.

Language Classes
It has been proved that language classes are either NP-complete or in P, and the CNs in a language class are tractable if and only if their constraint relations are preserved by a WNU operation (Bulatov 2017; Zhuk 2017). Then, as shown in (Jeavons, Cohen, and Gyssens 1997), the conjunction and projection operations on constraint relations preserved by a WNU operation can only be used to encode constraint relations preserved by the WNU operation. Therefore, tractable language classes cannot be directly used to encode arbitrary constraint relations.

Figure 1: A BCN P and a BTPN encoding P with hidden variables, where each bullet denotes a variable, the values are given in the bullet, and a dashed line between 2 variable values denotes a tuple that is not consistent on the networks. (a) BCN; (b) BTPN.

Structural Classes
Unlike language classes, there are structural classes between NP-complete and P (Bodirsky and Grohe 2008). To the best of our knowledge, bounded treewidth is the largest tractable structural class of BCNs (Grohe 2007), and bounded fractional hypertree width (FHW) is the most general known tractable structural class of CNs (Grohe and Marx 2014). For any tree decomposition T of a CN P = (X, C) and a bag B in T, B ⊆ X and the size of sol(P)[B] is bounded by a polynomial whose degree is the FHW of T (Grohe and Marx 2014). Then we can construct a CN P_T = (X, C_T) such that sol(P) = sol(P_T) and C_T = {c_B | B is a bag in T}, where scp(c_B) = B and rel(c_B) = sol(P)[B]. (Dechter and Pearl 1989; Wang and Yap 2023; Kučera 2023) showed that P_T can be encoded as a polysize BCT; thus, we cannot directly define a strictly more succinct constraint representation with existing tractable structural classes.

4 Broken Triangle Property Networks
As discussed in Section 3, it is unlikely that the succinctness of BCT can be directly improved with existing tractable language and structural classes. In this section, we introduce a well-known hybrid tractable class generalizing BCT, i.e., the class of BCNs satisfying the Broken Triangle Property (BTP).
Definition 1. A Broken Triangle (BT) on a BCN P is a set of 4 valid literals {(x, a_1), (y, a_2), (z, a_3), (z, a_4)} such that the 3 tuples {(x, a_1), (y, a_2)}, {(x, a_1), (z, a_3)}, {(y, a_2), (z, a_4)} are consistent on P and the 2 tuples {(x, a_1), (z, a_4)}, {(y, a_2), (z, a_3)} are not consistent on P. (A direct check of this definition is sketched below.)
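To make Definition 1 concrete, here is a direct, assumption-level check of the broken-triangle condition on the dict-based network representation from the earlier sketch (the representation itself is illustrative, not the paper's).

```python
domains = {"x": {0, 1, 3}, "y": {1, 2}, "z": {2, 3}}
relations = {(u, v): {(a, b) for a in domains[u] for b in domains[v] if a != b}
             for (u, v) in [("x", "y"), ("x", "z"), ("y", "z")]}

def consistent_pair(l1, l2, relations):
    """The 2-literal tuple {l1, l2} is consistent iff it belongs to the
    relation between the two variables, when such a constraint exists."""
    (u, a), (v, b) = l1, l2
    if (u, v) in relations:
        return (a, b) in relations[(u, v)]
    if (v, u) in relations:
        return (b, a) in relations[(v, u)]
    return True  # no constraint between u and v

def is_broken_triangle(xa, yb, zc, zd, relations):
    """Definition 1: {(x,a1),(y,a2)}, {(x,a1),(z,a3)}, {(y,a2),(z,a4)} are
    consistent while {(x,a1),(z,a4)} and {(y,a2),(z,a3)} are not."""
    return (consistent_pair(xa, yb, relations)
            and consistent_pair(xa, zc, relations)
            and consistent_pair(yb, zd, relations)
            and not consistent_pair(xa, zd, relations)
            and not consistent_pair(yb, zc, relations))

# The BT discussed next in the text: literals {(x,1),(z,2),(y,2),(y,1)},
# with y playing the role of the third variable z of Definition 1.
print(is_broken_triangle(("x", 1), ("z", 2), ("y", 2), ("y", 1), relations))  # True
```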
Figure 1a gives a BCN P encoding the constraint x ≠ y ∧ x ≠ z ∧ y ≠ z. The figure has 3 BTs, where each BT consists of 2 dashed lines. For example, one BT is the set of literals {(x, 1), (z, 2), (y, 2), (y, 1)}, where the 2 tuples {(x, 1), (y, 1)}, {(z, 2), (y, 2)} are not consistent on P (corresponding to 2 dashed lines), and the 3 tuples {(x, 1), (z, 2)}, {(x, 1), (y, 2)}, {(z, 2), (y, 1)} are consistent on P.
Definition 2. A BCN P = (X, C) satisfies the Broken Triangle Property (BTP) w.r.t. an ordering O over X if there is no BT {(O_i, a_1), (O_j, a_2), (O_k, a_3), (O_k, a_4)} on P with i < j < k, where O_i, O_j, O_k denote the i-th, j-th and k-th variables in O. P satisfies BTP if it satisfies BTP w.r.t. some ordering over X. A Broken Triangle Property Network (BTPN) is a BCN satisfying BTP.
The BCN P given in Figure 1a does not satisfy BTP, as each variable in {x, y, z} has 2 valid literals included in a BT on P. Figure 1b shows a BTPN P′ encoding P with 2 hidden variables, which does satisfy BTP w.r.t. the variable ordering h_{x,y,z} < h_{x,y} < x < y < z. We remark that a BTPN may not satisfy BTP w.r.t. some other orderings; e.g., {(x, 1), (y, 1), (h_{x,y}, η_3), (h_{x,y}, η_4)} is a BT in Figure 1b, and the BTPN P′ does not satisfy BTP w.r.t. the ordering x < y < z < h_{x,y,z} < h_{x,y}.
BCT is a special case of BTPN. For any BCT, there is a variable ordering O such that every variable v is constrained by at most one variable before v in O. Then, for any two variables x, y before a variable z in O, there is no binary constraint between x and z or between y and z, i.e., there is no BT on the variables x, y, z. Correspondingly, the BCT satisfies BTP w.r.t. the variable ordering O; see Proposition 4.5 in (Cooper, Jeavons, and Salamon 2010).
The expressiveness of BTPN can be dramatically improved with hidden variables. For example, the monotone Conjunctive Normal Form (CNF) is not a BCN; however, it can be encoded as a BTPN with hidden variables. Every clause x_1 ∨ ··· ∨ x_r in a monotone CNF F can be encoded with a hidden variable h and r binary constraints c_1, ..., c_r, where D(h) = {a_1, ..., a_r} and, for all i ∈ [1, r], the binary constraint c_i is defined as h ≠ a_i ∨ x_i = True. Then the monotone CNF F can be encoded as a BTPN w.r.t. a variable ordering in which the hidden variables of the BTPN come before the original variables.

Tractability of BTPN
Assume P is a BTPN w.r.t. a variable ordering O. If P is AC, then every valid and consistent tuple over the variables O_1, ..., O_k can be extended to a valid and consistent tuple over the variables O_1, ..., O_{k+1}. In addition, removing variable values during propagation and search does not introduce new BTs, since the class of BCNs satisfying BTP is conservative. Therefore, the satisfiability of a BTPN can be checked by enforcing AC on the BTPN; see more details in (Cooper, Jeavons, and Salamon 2010). Moreover, there is a polytime algorithm to check whether a BCN P has a variable ordering O such that P satisfies BTP w.r.t. O; see Theorem 3.2 and Corollary 7.5 in (Cooper, Jeavons, and Salamon 2010). A BTPN must have a variable v such that no BT includes 2 literals of v, and such a variable v can be eliminated without introducing any new BT. Therefore, a variable ordering can be identified by iteratively eliminating variables.

GAC on BTPN Constraints
The class of BTPNs is conservative and can be solved by AC. Thereby, we can enforce GAC on a BTPN constraint by enforcing AC on the BTPNs generated by assigning a variable value in the constraint (a sketch of this AC-based check follows).
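Below is a generic AC-3 style propagation sketch over the dict-based representation used in the earlier sketches. It is an illustrative assumption-level implementation, not the paper's propagator; the point is that, for a BCN satisfying BTP, this fixpoint decides satisfiability, per the discussion above.

```python
from collections import deque

def enforce_ac(domains, relations):
    """AC-3 style propagation: repeatedly delete values that have no support
    on some neighbouring variable; returns the maximal arc-consistent domains."""
    doms = {v: set(vals) for v, vals in domains.items()}
    arcs = deque()
    for (u, v) in relations:
        arcs.extend([(u, v), (v, u)])
    def rel_pairs(u, v):  # allowed (value of u, value of v) pairs
        if (u, v) in relations:
            return relations[(u, v)]
        return {(a, b) for (b, a) in relations[(v, u)]}
    while arcs:
        u, v = arcs.popleft()
        supported = {a for a in doms[u]
                     if any((a, b) in rel_pairs(u, v) for b in doms[v])}
        if supported != doms[u]:
            doms[u] = supported
            for (p, q) in relations:  # re-enqueue arcs pointing at u
                if q == u and p != v:
                    arcs.append((p, u))
                if p == u and q != v:
                    arcs.append((q, u))
    return doms

def btpn_satisfiable(domains, relations):
    """For a BCN satisfying BTP, AC decides satisfiability: the network is
    satisfiable iff every variable keeps at least one value."""
    return all(enforce_ac(domains, relations).values())
```

Following the paragraph above, a GAC propagator for a BTPN constraint can then be obtained by running this AC fixpoint once per candidate literal (x, a) on the network restricted to x = a, and pruning a when the restricted network becomes unsatisfiable.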
Since it is tractable to enforce AC on a BCN, there exist polytime propagators to enforce GAC on the BTPN constraint. We remark that AC on the BTPN encoding a constraint c is weaker than GAC on c. For example, the BCN P modelled with the variables {x_1, x_2, x_3} and the constraints {x_1 ≥ x_2, x_2 ≥ x_3, x_1 + x_3 ≤ 1}, where all variable domains are {0, 1}, satisfies BTP w.r.t. the variable ordering x_1 < x_2 < x_3; however, AC on P cannot remove the literal (x_3, 1), which is not included in any solution of P. In the future, it would be interesting to explore a more efficient BTPN GAC propagator. For example, we can design a GAC propagator based on Directional Arc Consistency (DAC) (Dechter and Pearl 1987), as BTPN is solvable by DAC; see Proposition 5.5 in (Cooper, Jeavons, and Salamon 2010). In addition, similar to other binary encodings (Wang and Yap 2019, 2020, 2022b), it is possible to design specialized AC and DAC propagators for BTPN.

5 Circuit Complexity of Checking BTPN
The consistency checker (Bessiere et al. 2009) of a constraint c is a monotone function f_c where, for any subset L of the literals {(x, a) | x ∈ scp(c), a ∈ D(x)}, f_c(L) = 1 if there is a tuple τ ∈ rel(c) such that τ ⊆ L, and f_c(L) = 0 otherwise. A monotone circuit G is a rooted directed acyclic graph (DAG) where the leaves of G are 0, 1 values and the inner nodes are ∧ and ∨ gates whose children are the inputs of the gates; the output of G is the output of the root. We remark that we abuse the values 0, 1 to denote the values False and True. The circuit complexity of f_c is useful for comparing the succinctness of different constraint representations. We show that the consistency checker of the BTPN constraint can be computed by a polysize monotone circuit; therefore, some constraints, such as the AllDifferent constraint (Régin 1994), cannot be encoded as polysize BTPN.
Lemma 1. There is a polysize monotone circuit computing the consistency checker of the constraints encoded with a class of BCNs which is conservative and solvable by AC.
Proof. Let P = (X, C) be the BCN encoding a constraint c. WLOG, we assume P is AC and |rel(c)| > 0. Then AC propagation can be simulated by a function f_P = f_P^k ∘ f_P^{k−1} ∘ ··· ∘ f_P^1, where k = Σ_{x∈X} |D(x)|. The function f_P^i maps each tuple τ_i over the Boolean variables B_i = {b^i_{x,a} | (x, a) is valid on P} to a tuple τ_{i+1} over B_{i+1} = {b^{i+1}_{y,b} | (y, b) is valid on P} such that

b^{i+1}_{y,b} = b^i_{y,b} ∧ ⋀_{x ∈ X\{y}} ⋁_{a ∈ S_{y,b,x}} b^i_{x,a},

where S_{y,b,x} = {a ∈ D(x) | {(x, a), (y, b)} is consistent on P} denotes the valid supports of (y, b) on x.
For any assignment τ_i over B_i, if a literal (y, b) ∈ L_{τ_i} is not AC on a constraint in P[L_{τ_i}], where L_{τ_i} denotes the literals {(y, b) | (b^i_{y,b}, 1) ∈ τ_i}, then there is x ∈ X such that, for all a ∈ S_{y,b,x}, the literal (b^i_{x,a}, 0) is in τ_i; thus b^{i+1}_{y,b} = 0. Otherwise, if (y, b) is AC on P[L_{τ_i}], then b^{i+1}_{y,b} = 1. In addition, if L_{f_P^i(τ_i)} = L_{τ_i} (i.e., P[L_{τ_i}] is AC), then L_{τ_i} is also equal to L_{τ_{k+1}}. P has at most k valid literals, so for any tuple τ_1 over B_1, L_{f_P(τ_1)} gives the maximum subset of L_{τ_1} such that P[L_{f_P(τ_1)}] is AC. P is in a conservative class solvable by AC; thus, for any tuple τ_1 over B_1, P[L_{τ_1}] has a solution iff every x ∈ X has a literal in L_{f_P(τ_1)}. Hence, the consistency checker f_c can be computed by the function f^o ∘ f_P, where f^o = ⋀_{x∈X} ⋁_{b^{k+1}_{x,a} ∈ B_{k+1}} b^{k+1}_{x,a}.
We can construct the input of f^o ∘ f_P from the input L of f_c as follows: for any b^1_{x,a} ∈ B_1, if x ∈ scp(c) and (x, a) ∉ L, then b^1_{x,a} = 0; else b^1_{x,a} = 1. The functions f_P^i and f^o only use the ∨ and ∧ operations; thus, f^o ∘ f_P can be directly encoded as a DAG by regarding the ∨ and ∧ operations as nodes whose children are the inputs of the operations. In addition, the number of ∨, ∧ operations used in the functions is O(k^3). So f_c can be computed by a polysize monotone circuit.
The class of BTPNs is conservative and solvable by AC, so the consistency checker of the BTPN constraint can be computed by a polysize monotone circuit.
Theorem 1. The consistency checker of BTPN constraints can be computed by polysize monotone circuit.
Based on Theorem 1, we can separate BTPN from the constraints whose consistency checkers cannot be computed by polysize monotone circuits. For example, the constraints encoded with a system of XOR constraints cannot be computed by polysize monotone circuit (see Section 6.2 in (Feder and Vardi 1998)); thus, they cannot be encoded as polysize BTPN.

6 Encoding Various Constraints as BTPN
We now investigate the succinctness of BTPN for encoding various constraints. A constraint representation A is more succinct than another constraint representation B if every constraint encoded as B with size n can be encoded as A with a size that is polynomial in n. A is strictly more succinct than B if A is more succinct than B while B is not more succinct than A. Then A is incomparable to B if A is not more succinct than B and B is not more succinct than A.
Table 1 shows that BTPN is strictly more succinct than the DNNF constraint and all 14 ad-hoc constraints discussed in (Wang and Yap 2023), namely the table (Wang et al. 2016; Demeulenaere et al. 2016), regular (Pesant 2004), non-deterministic finite automaton (NFA) (Quimper and Walsh 2006), context-free grammar (CFG) (Quimper and Walsh 2006), c-table (Katsirelos and Walsh 2007), MDD (Cheng and Yap 2010), short table (Jefferson and Nightingale 2013), sliced table (Gharbi et al. 2014), multi-valued variable diagram (MVD) (Amilhastre et al. 2014), smart table (smaT) (Mairy, Deville, and Lecoutre 2015), basic smart table (Verhaeghe et al. 2017), semi-MDD (sMDD) (Verhaeghe, Lecoutre, and Schaus 2018), segmented table (segT) (Audemard, Lecoutre, and Maamar 2020) and BCT (Wang and Yap 2022b) constraints.

Strictly More Succinct              Incomparable
DNNF, CFG, smaT, segT, BCT          Permutation, AllDifferent
MVD, NFA, MDD, Regular              GCC, NValue, Channelling
sMDD, c-Table, Short Table          Circuit, Cycle
Basic smaT, Sliced Table, Table     Linear, Knapsack

Table 1: The modelling power of BTPN.

In addition, some special-purpose global constraints are incomparable to the BTPN constraint, such as the permutation, AllDifferent (Régin 1994), GCC (Régin 1996), NValue (Pachet and Roy 1999), channelling (Cheng et al. 1999), cycle, circuit (Beldiceanu and Contejean 1994), linear (Yuanlin and Yap 2000) and knapsack (Trick 2003) constraints.
Theorem 2. The results in Table 1 hold.
In the rest of this section, we give the proof of Theorem 2. The context-free grammar, smart table and segmented table constraints are incomparable to each other (Wang and Yap 2023); thus, we only need to prove that BTPN is more succinct than each of them. In addition, for the special-purpose global constraints, if they cannot be encoded as polysize BTPN, then BTPN is incomparable to them, as they cannot be used to encode arbitrary constraint relations.
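Before turning to the individual encodings, the following sketch illustrates the Lemma 1 construction directly: it iterates the support-propagation step on Boolean literal flags k times and finishes with the final AND-of-ORs. It is an unoptimized, assumption-level simulation over the dict-based network representation of the earlier sketches (here the inner conjunction ranges only over constrained neighbours, which yields the same final verdict for normalized BCNs), not the circuit itself.

```python
def consistency_checker(L, domains, relations, scope):
    """Simulate f^o ∘ f_P from Lemma 1: literal flags start from L (hidden
    variables start at 1), each round applies
    b_{y,b} <- b_{y,b} AND (for each constrained x: OR of supports of (y,b)),
    and the final answer requires every variable to keep a live flag."""
    flags = {(x, a): not (x in scope and (x, a) not in L)
             for x, vals in domains.items() for a in vals}
    neighbours = {}
    for (u, v) in relations:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    def supports(y, b, x):  # values a of x with {(x, a), (y, b)} consistent
        if (x, y) in relations:
            return {a for (a, bb) in relations[(x, y)] if bb == b}
        return {a for (bb, a) in relations[(y, x)] if bb == b}
    k = sum(len(vals) for vals in domains.values())
    for _ in range(k):  # k rounds suffice to reach the AC fixpoint
        flags = {(y, b): flags[(y, b)] and all(
                     any(flags[(x, a)] for a in supports(y, b, x))
                     for x in neighbours.get(y, ()))
                 for (y, b) in flags}
    return all(any(flags[(x, a)] for a in domains[x]) for x in domains)
```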
DNNF Constraints
DNNF is a subset of the negation normal form (Darwiche 1999); we consider a multi-valued variant (Gange and Stuckey 2012; Kučera 2023). A negation normal form (NNF) F over a set of variables X is a rooted DAG where each inner node is an or-node labeled with ∨ or an and-node labeled with ∧, and each leaf is labeled with a literal of a variable in X. For each node η in F, we use vars(η) to denote the set of variables x ∈ X such that η can reach a leaf labelled with a literal of x. The set vars(F) is equal to vars(ρ), where ρ is the root of F. For technical convenience, we also allow the NNF consisting of a single node labelled with 0 or 1. An NNF is decomposable (a DNNF) if vars(η_1) ∩ vars(η_2) is empty for any 2 children η_1, η_2 of an and-node. A DNNF is smooth if vars(η_1) = vars(η_2) for any 2 children η_1, η_2 of an or-node.
A certificate (Bova et al. 2016) of a DNNF F over a set of variables X is a subgraph G of F such that (i) G is a DNNF including the root of F, (ii) each or-node in G has exactly one child, and (iii) each and-node η in G has the same children as η has in F. An assignment τ over X satisfies F if there is a certificate G of F such that each leaf of G is labelled with 1 or a literal from τ. A DNNF constraint c represents its constraint relation as a DNNF F over scp(c), where rel(c) consists of all assignments over scp(c) satisfying F. The size of c is the sum of the number of edges in F and the variable domain sizes.
Each DNNF can be encoded as a smooth DNNF in polytime (Darwiche and Marquis 2002). Every and-node having one child can be merged with its only child. In addition, a DNNF with a single node labelled with 0 or 1 can be encoded as unary constraints. So, in the rest of this subsection, we only consider smooth DNNFs F without any 0, 1 labels in which each and-node has at least 2 children.

Figure 2: A DNNF encoding the BCN given in Figure 1a.

Definition 3. A BTP binary encoding btp(F) of a DNNF F over variables X is a BCN (X ∪ H, C) where
• H is a set of hidden variables {h_vars(η) | η is an and-node in F}, and if vars(F) = V, then D(h_V) = {and-nodes η in F | vars(η) = V}; else D(h_V) = {⊤} ∪ {and-nodes η in F | vars(η) = V}.
• For all x ∈ vars(F), C has a constraint c over {x} where {(x, a)} is in rel(c) iff F has a leaf labelled with (x, a).
• For all h_V ∈ H and x ∈ V, there is a constraint c ∈ C between x and h_V such that {(h_V, η_1), (x, a_2)} is in rel(c) iff η_1 = ⊤ or η_1 can reach a leaf labelled with (x, a_2).
• For all h_{V_1}, h_{V_2} in H with V_1 ∩ V_2 ≠ ∅, there is a constraint c ∈ C between h_{V_1} and h_{V_2} such that a tuple {(h_{V_1}, η_1), (h_{V_2}, η_2)} is in rel(c) iff it satisfies one of the following conditions:
– η_1 = ⊤ and η_2 = ⊤;
– η_1 = ⊤ and for all children η_c of η_2, V_1 ≠ vars(η_c);
– η_2 = ⊤ and for all children η_c of η_1, V_2 ≠ vars(η_c);
– there is a path from η_1 to η_2 or from η_2 to η_1.
(A partial sketch of this construction follows Example 1 below.)
Example 1. Figure 2 shows a DNNF F encoding the BCN given in Figure 1a, where vars(F) = {x, y, z}. F is smooth, as for each or-node η in F, all children of η can reach the leaves labelled with literals of the same variables. Figure 1b gives the BTP binary encoding btp(F) of this DNNF. Each certificate of F corresponds to a solution of btp(F); e.g., the subgraph drawn in red with dashed edges is a certificate of the DNNF, which corresponds to the solution {(h_{x,y,z}, η_1), (h_{x,y}, η_3), (x, 0), (y, 1), (z, 3)}.
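The sketch below covers the first part of Definition 3, namely the hidden-variable domains; the unary and binary constraints then follow the reachability cases via a `reaches` helper. The nested-dict node encoding is an illustrative assumption, not a representation from the paper.

```python
def dnnf_vars(node, memo):
    """vars(η): the variables labelling the leaves reachable from node."""
    if id(node) not in memo:
        if node["type"] == "leaf":
            memo[id(node)] = frozenset([node["lit"][0]])
        else:
            memo[id(node)] = frozenset().union(
                *(dnnf_vars(ch, memo) for ch in node["children"]))
    return memo[id(node)]

def reaches(src, dst):
    """True iff the DAG has a path from src to dst (a node reaches itself);
    used by the h_V-x and h_V1-h_V2 constraints of Definition 3."""
    if src is dst:
        return True
    return src["type"] != "leaf" and any(reaches(ch, dst)
                                         for ch in src["children"])

def btp_hidden_domains(root):
    """First bullet of Definition 3: one hidden variable h_V per set
    V = vars(η) of an and-node η; D(h_V) collects those and-nodes, plus the
    dummy value '⊤' whenever V differs from vars(F)."""
    memo, doms, stack, seen = {}, {}, [root], set()
    while stack:
        node = stack.pop()
        if id(node) in seen or node["type"] == "leaf":
            continue
        seen.add(id(node))
        if node["type"] == "and":
            doms.setdefault(dnnf_vars(node, memo), []).append(node)
        stack.extend(node["children"])
    full = dnnf_vars(root, memo)
    return {V: nodes + ([] if V == full else ["⊤"])
            for V, nodes in doms.items()}

# A two-clause DNNF (x=0 ∨ x=1) ∧ (y=2): one and-node over {x, y}.
leaf = lambda v, a: {"type": "leaf", "lit": (v, a)}
f = {"type": "and", "children": [
        {"type": "or", "children": [leaf("x", 0), leaf("x", 1)]},
        leaf("y", 2)]}
print(btp_hidden_domains(f))  # {frozenset({'x', 'y'}): [<the and-node>]}
```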
Lemma 2. An assignment τ over X ∪ H is a solution of btp(F) if and only if there is a certificate G of the DNNF F such that the and-node values in τ are the and-nodes of G and the labels of all leaves of G are also included in τ.
Proof. Let V = vars(F) and let n be the number of and-nodes in F. If F does not have any and-node, then |V| = 1. The unary constraint over the only variable in V ensures that τ is a solution of btp(F) iff F has a leaf η labelled with a literal included in τ, where the path from the root to η is a certificate of F. So the lemma holds for n = 0.
Assume the lemma holds for n < k. We prove it holds for n = k. There are 2 cases.
Case 1: |D(h_V)| > 1. For each and-node η in D(h_V), let F_η be the induced subgraph of F on the nodes that are reachable from η or can reach η (note that η is reachable from itself), and let H_η be the variables h_S in H such that no node in D(h_S) is reachable from η. F_η has fewer and-nodes than F, as the nodes in D(h_V) cannot reach each other. In the BCN btp(F), if h_V = η, then all variables in H_η are assigned ⊤ and all assigned and-nodes must be reachable from η. Then btp(F_η) is a sub-problem of btp(F) obtained by removing values and the variables in H_η. For all η ∈ D(h_V), a tuple τ containing (h_V, η) is a solution of btp(F) iff each h_S ∈ H_η is assigned ⊤ and τ[(X ∪ H) \ H_η] is a solution of btp(F_η). In addition, a subgraph G containing η is a certificate of F iff G is a certificate of F_η. So the lemma holds in case 1.
Case 2: D(h_V) = {η}. For each child η_c of η, let F_{η_c} be the induced subgraph of F on all nodes reachable from η_c, and let H_{η_c} be the variables h_S in H such that D(h_S) has and-nodes in F_{η_c}. The constraints between h_{vars(η_c)} and h_V ensure that h_{vars(η_c)} ≠ ⊤. For any two children η_1, η_2 of η, F_{η_1} and F_{η_2} do not share any nodes and H_{η_1} ∩ H_{η_2} = ∅, as vars(η_1) ∩ vars(η_2) = ∅. Hence, an assignment τ over X ∪ H is a solution of btp(F) iff, for every child η_c of η, τ[H_{η_c} ∪ X] is a solution of btp(F_{η_c}). In addition, a subgraph G is a certificate of F iff G has exactly one or-path from the root to η, all edges from η, and, for every child η_c of η, the subgraph of G including the nodes reachable from η_c is a certificate of F_{η_c}. So the lemma holds in case 2.
By induction on k, the lemma holds for any F.
Lemma 3. The BTP binary encoding (X ∪ H, C) of a DNNF F is a BTPN.
Proof. Let O be an ordering over X ∪ H where (i) for any h_{V_1}, h_{V_2} ∈ H, if V_2 ⊂ V_1, then h_{V_1} is before h_{V_2}; and (ii) the variables in X come after the variables in H. There is no constraint between the variables in X, so if btp(F) does not satisfy BTP w.r.t. O, it has a BT {(h_{V_1}, a_1), (h_{V_2}, a_2), (y, a_3), (y, a_4)} where y ∈ X or y ∈ H (i.e., y = h_{V_3}).
Case 1: y ∈ X. {(h_{V_1}, a_1), (y, a_4)} and {(h_{V_2}, a_2), (y, a_3)} are not consistent, so a_1, a_2 ≠ ⊤ and y ∈ V_1 ∩ V_2. Then a_1 cannot reach a_2; otherwise {(h_{V_1}, a_1), (y, a_4)} would be consistent because {(h_{V_2}, a_2), (y, a_4)} is consistent. Similarly, a_2 cannot reach a_1. However, {(h_{V_1}, a_1), (h_{V_2}, a_2)} is consistent, a_1, a_2 ≠ ⊤ and V_1 ∩ V_2 ≠ ∅, which implies that a_1 and a_2 are connected (a contradiction). So y ∉ X.
Case 2: y = h_{V_3}. The ordering implies V_1, V_2 ⊄ V_3. {(h_{V_1}, a_1), (h_{V_3}, a_4)} and {(h_{V_2}, a_2), (h_{V_3}, a_3)} are not consistent; thus a_1, a_2 ≠ ⊤, V_3 ∩ V_1 ≠ ∅ and V_3 ∩ V_2 ≠ ∅. Then {(h_{V_1}, a_1), (h_{V_3}, a_3)} is consistent, so a_3 = ⊤ or a_1 can reach a_3. So a_2 cannot reach a_1; otherwise {(h_{V_2}, a_2), (h_{V_3}, a_3)} would be consistent. Similarly, a_1 cannot reach a_2. Next, if V_3 ⊄ V_1 and V_3 ⊄ V_2, then a_1, a_2 cannot reach a_3, a_4; thus a_3 = a_4 = ⊤ (which is impossible).
Hence, we have V_3 ⊂ V_1 or V_3 ⊂ V_2, which means V_1 ∩ V_2 ≠ ∅ and a_1, a_2 are connected (a contradiction). So y ∉ H.
Both cases are impossible, so btp(F) is a BTPN.
Lemma 4. BTPN is strictly more succinct than DNNF.
Proof. Lemma 2 and Lemma 3 show that, for any DNNF F over variables X, the BTP binary encoding btp(F) is a BTPN encoding F. The number of variables in btp(F) is at most n + |X|, and the total size of all hidden variable domains is less than 2(n + 1), where n is the number of and-nodes in F. Therefore, the size of btp(F) is polynomial in the size of F, and BTPN is more succinct than DNNF. In addition, 2-SAT can be encoded as polysize BTPN by enforcing path consistency (Cooper, Jeavons, and Salamon 2010), while monotone 2-SAT cannot be encoded as polysize DNNF (Bova et al. 2014). Therefore, BTPN is strictly more succinct than DNNF.

CFG Constraints
(Quimper and Walsh 2007) showed that the CFG constraint can be encoded as a polysize rooted DAG (an and/or graph) G where each and-node in G has exactly 2 children encoding tuples over 2 variable subsets partitioning a set of variables, which means that the rooted DAG G is a DNNF. So BTPN is more succinct than the DNNF constraint (by Lemma 4) and hence the CFG constraint.

Smart Table Constraints
A smart tuple over variables X is a set S of specific unary and binary constraints such that the constraint graph of (X, S) is acyclic (Mairy, Deville, and Lecoutre 2015). Then (X, S) satisfies BTP w.r.t. an ordering over X in which every variable x ∈ X is constrained by at most one variable before x. So a smart tuple (X, S) is itself a BTPN. A smart table R is a disjunction of a set of smart tuples, where the tuples encoded by R are the union of the tuples encoded by the smart tuples. The disjunction of BTPNs can be computed in polytime (see Lemma 6); thus, the smart table can be encoded as polysize BTPN.

Segmented Table Constraints
A segmented tuple over variables X is a set S of specific unary constraints and table constraints such that scp(c_1) ∩ scp(c_2) = ∅ for any 2 constraints c_1, c_2 ∈ S (Audemard, Lecoutre, and Maamar 2020). The hidden variable encoding (Rossi, Petrie, and Dhar 1990) of (X, S) is acyclic; thus, it is also a BTPN encoding the segmented tuple. A segmented table R is a disjunction of a set of segmented tuples, where the tuples encoded by R are the union of the tuples encoded by the segmented tuples. Therefore, the segmented table can also be encoded as polysize BTPN.

Other Ad-Hoc Constraints
Except for the smart table and segmented table constraints, the CFG constraint is more succinct than the other 11 ad-hoc constraints discussed in (Wang and Yap 2023). Therefore, BTPN is strictly more succinct than all 14 ad-hoc constraints discussed in (Wang and Yap 2023).

Permutation Constraints and Their Generalizations
A permutation constraint c over r variables {x_1, ..., x_r} encodes that the r variables take distinct values between 1 and r. As shown in (Régin 1994), the permutation constraint c can be regarded as a bipartite graph G, where literals are edges and each tuple in rel(c) is a perfect matching of G. Then (Razborov 1985) showed that there is no polysize monotone circuit determining whether G has a perfect matching; thus, the permutation constraint cannot be encoded as polysize BTPN.
In addition, the AllDifferent, GCC and NValue constraints generalize the permutation constraint, so none of them can be encoded as polysize BTPN.

Channelling Constraints
A channelling constraint encodes a connection between two sets of variables (Cheng et al. 1999; Walsh 2001). The permutation constraint can be modelled as the projection of a channelling constraint onto a set of variables (Walsh 2001), and the BTPN encoding a constraint also encodes its projections. Since there is no polysize BTPN encoding the permutation constraint, there is no polysize BTPN encoding the channelling constraint.

Circuit and Cycle Constraints
A circuit constraint encodes Hamiltonian circuits of a graph (Beldiceanu and Contejean 1994), and (Alon and Boppana 1987) showed that there is no polysize monotone circuit checking whether a graph has a Hamiltonian circuit. In addition, the circuit constraint is a kind of cycle constraint (Beldiceanu and Contejean 1994). So the circuit and cycle constraints cannot be encoded as polysize BTPN.

Knapsack and Linear Constraints
A knapsack constraint over r variables {v_1, ..., v_r} is l ≤ Σ_{i=1}^{r} w_i v_i ≤ u (Trick 2003). We use c_r to denote the constraint Σ_{i=1}^{r} r^i ≤ Σ_{i=1}^{r} v_i ≤ Σ_{i=1}^{r} r^i, where D(v_i) = {r^k | k ∈ [1, r]}. There must be a variable in c_r taking the value r^r, as r^r > Σ_{i=1}^{r−1} r^{r−1}. More generally, r^k > Σ_{i=1}^{k−1} r^{k−1} for all k ∈ [1, r]. So, by induction on k, the variables in c_r take r different values from {r^k | k ∈ [1, r]}. Hence, we can use the consistency checker f_{c_r} to compute the consistency checker f_c of a permutation constraint c over r variables x_1, ..., x_r: for any literals L_1 of variables in scp(c), f_c(L_1) is equal to f_{c_r}(L_2), where L_2 = {(v_i, r^a) | (x_i, a) ∈ L_1}. Therefore, a monotone circuit computing f_{c_r} can also be used to compute f_c. Since the permutation constraint cannot be computed by polysize monotone circuit, and c_r is a knapsack and linear constraint, there is no polysize monotone circuit computing the knapsack and linear constraints.

7 Time Complexity of Minimizing BTPN
A constraint can be encoded with different BTPNs; thus, an interesting problem is to determine the minimum sized BTPN encoding the constraint. In this section, we show that two specific biclique cover problems (P1 and P2) on a bipartite graph G are NP-hard and can be solved by minimizing the BTPN encoding a constraint, where a biclique cover of a subset S of edges in G is a set of complete bipartite subgraphs (CBS's) of G including all edges in S. Correspondingly, the BTPN minimization problem is NP-hard.
Let G = (U, W, E) be a bipartite graph. Without loss of generality, we assume that all vertices of G are included in at least one edge in E; note that the biclique covers of a subset of edges in E are not affected by removing the vertices which are not included in any edge of E. Then we construct a constraint c_G between 2 variables x_U and x_W such that D(x_U) = U ∪ A ∪ B, D(x_W) = W ∪ A ∪ B and rel(c_G) = R_E ∪ R_A ∪ R_B, where |A| = |B| = 3|E| + 10, (A ∪ B) ∩ (U ∪ W) = ∅, A ∩ B = ∅, and
R_E = {{(x_U, µ), (x_W, ω)} | {µ, ω} ∈ E},
R_A = {{(x_U, a_1), (x_W, a_2)} | a_1, a_2 ∈ A},
R_B = {{(x_U, b), (x_W, b)} | b ∈ B}.
The values in A and B are chosen to ensure that the minimum BCN encoding c_G has exactly one hidden variable and 2 binary constraints including the hidden variable (a small constructive sketch of rel(c_G) follows).
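The construction of rel(c_G) is purely mechanical, as the following assumption-level sketch shows; tuples are encoded as (value of x_U, value of x_W) pairs, and the tagged fresh values for A and B are an illustrative choice.

```python
def build_cG(U, W, E):
    """Construct rel(c_G) = R_E ∪ R_A ∪ R_B for a bipartite graph (U, W, E),
    following the reduction above; A and B are fresh value sets of size
    3|E| + 10, disjoint from each other and from U ∪ W."""
    m = 3 * len(E) + 10
    A = [("A", i) for i in range(m)]   # fresh, tagged values
    B = [("B", i) for i in range(m)]
    R_E = {(u, w) for (u, w) in E}                 # the graph edges
    R_A = {(a1, a2) for a1 in A for a2 in A}       # full A x A block
    R_B = {(b, b) for b in B}                      # diagonal B block
    return R_E | R_A | R_B

rel = build_cG(U={"u1", "u2"}, W={"w1"}, E={("u1", "w1"), ("u2", "w1")})
```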
Lemma 5. The minimum BCN P encoding c_G has exactly one hidden variable and 2 constraints between the hidden variable and the 2 variables x_W, x_U.
Proof. Let P_1 = ({x_W, x_U, h}, {c_W, c_U}) be a BCN encoding c_G with a hidden variable h, where D(h) = E ∪ B ∪ {a}, c_W is defined as (h = a ∧ x_W ∈ A) ∨ (h ∈ E ∧ x_W ∈ h) ∨ (h ∈ B ∧ x_W = h), and c_U is defined as (h = a ∧ x_U ∈ A) ∨ (h ∈ E ∧ x_U ∈ h) ∨ (h ∈ B ∧ x_U = h). By construction, the size of P_1 is 4|A| + 5|B| + 3|E| + |W| + |U| + 1.
rel(c_G) has |A|^2 + |E| + |B| tuples; thus P does not have a constraint between x_W and x_U. Then rel(c_G) is not a universal relation, so x_W and x_U must be constrained by hidden variables in P. Let k_W and k_U be the numbers of constraints including x_W and x_U, respectively. The size of P is at least (k_W + 1)(|A| + |B| + |W|) + (k_U + 1)(|A| + |B| + |U|), as each value of x_W, x_U is included in at least 1 tuple on each constraint. The size of P is not greater than that of P_1; thus k_W = 1 and k_U = 1.
Assume x_W (resp. x_U) is constrained by a hidden variable h_W (resp. h_U). For all b_1 ∈ B, there is b ∈ D(h_W) such that (h_W, b) can be extended to a solution of P having (x_U, b_1), and for all other b_2 ∈ B, {(x_W, b_2), (h_W, b)} is not consistent on P because {(x_W, b_2), (x_U, b_1)} ∉ rel(c_G). Hence, h_W has at least |B| values. Similarly, h_U has at least |B| values. So h_W = h_U, otherwise the size of P would be greater than that of P_1. x_W and x_U are constrained by the same hidden variable, which means the other hidden variables in P can be eliminated. So P has exactly one hidden variable and 2 constraints between the hidden variable and x_W, x_U.
Therefore, the minimum BCN encoding c_G here is also a BCT and a BTPN. We then show that it is NP-hard to minimize the BCN, BCT and BTPN encoding c_G.
Theorem 3. The following 4 problems are NP-hard:
P0) Determine the fewest number of cliques which include all of the vertices of a graph.
P1) Determine the fewest number of vertices included in a biclique cover of a specified subset S of edges of G.
P2) Determine the fewest number of vertices included in a biclique cover of the edges of G.
P3) Find the minimum BCN/BCT/BTPN encoding c_G.
Proof. Theorem 8.1 in (Orlin 1977) shows that P0 is NP-hard and can be solved by determining the fewest number of CBS's included in a biclique cover of the subset S = {{µ_1, ω_1}, ..., {µ_n, ω_n}} of edges of a bipartite graph G. We then prove the NP-hardness of P1, P2 and P3 by reducing (i) P0 to P1 (P0 ∝ P1), (ii) P1 to P2 (P1 ∝ P2), and (iii) P2 to P3 (P2 ∝ P3).
P0 ∝ P1. Let B_S be a biclique cover of S having the fewest number of vertices, where S = S_1 ∪ S_2, S_1 = {{µ_1, ω_1}, ..., {µ_n, ω_n}} and S_2 = {{µ, ω_1}, ..., {µ, ω_n}}. Each edge {µ_i, ω_i} in S_1 is included in only 1 CBS in B_S, otherwise µ_i could be removed from some CBS's to reduce the number of vertices. If there is a CBS G′ in B_S which includes {µ, ω_i} but not {µ_i, ω_i}, then we can remove ω_i from G′ and add µ to a CBS having {µ_i, ω_i}. Hence, we can assume {µ_i, ω_i} and {µ, ω_i} are included in the same CBS in B_S. So the number of vertices in B_S is equal to |B_S| + 2n. Note that B_S can also be regarded as a biclique cover of S_1 (by removing µ). In addition, we can construct a biclique cover of S with |B_{S_1}| + 2n vertices by adding µ to all CBS's in any biclique cover B_{S_1} of S_1. So |B_S| is the fewest number of CBS's covering S_1, i.e., P1 is NP-hard.
P1 ∝ P2. Let S^c be the set of edges which are in G but not in S. We construct a graph G_1 by adding 3 edges e_1 = {µ_e, ω_e}, e_2 = {µ, ω_e} and e_3 = {µ_e, ω} for each edge e = {µ, ω} in S^c. A CBS having µ_e or ω_e can only have the vertices µ_e, ω_e, µ or ω. So the CBS with the fewest vertices covering the 3 edges e_1, e_2, e_3 is the CBS with the vertices µ_e, ω_e, µ, ω, which also covers e.
Correspondingly, the biclique cover of G_1 with the fewest number of vertices consists of a biclique cover of the edges in S with the fewest number of vertices together with the |S^c| CBS's covering the other edges. So P2 is also NP-hard.
P2 ∝ P3. Let P = ({x_W, x_U, h}, {c_W, c_U}) be a minimum BCN/BCT/BTPN encoding c_G (based on Lemma 5). For each value a ∈ D(h), we can construct a CBS G_a with the vertices µ ∈ U, ω ∈ W such that {(x_U, µ), (h, a)} ∈ rel(c_U) and {(x_W, ω), (h, a)} ∈ rel(c_W). Then B_h = {G_a | a ∈ D(h), G_a is not empty} is a biclique cover of G, where the number of tuples including values of U ∪ W is equal to the number of vertices in B_h. If B_h is not a biclique cover with the fewest vertices, then we can construct a smaller BCN/BCT/BTPN encoding c_G by replacing the values {a ∈ D(h) | G_a is not empty} with the CBS's of a biclique cover B_S having fewer vertices, where for each CBS G′ in B_S and any vertices µ ∈ U, ω ∈ W in G′, the tuples {(x_U, µ), (h, G′)} and {(x_W, ω), (h, G′)} are constructed. Therefore, B_h must be a biclique cover of G with the fewest vertices, which means P3 is NP-hard.

8 Operations and Queries on BTPN
We consider the operations and queries from (Darwiche and Marquis 2002); more details of the operations and queries on constraints can be found in (Wang and Yap 2023). Table 2 gives the complexity of computing them on BTPN.
Theorem 4. The results in Table 2 hold.
It is NP-hard to compute the conjunction of 2 BTPNs (A ∧ B), as it is NP-hard on DNNF (Darwiche and Marquis 2002). The permutation constraint (a conjunction of binary constraints) cannot be encoded as polysize BTPN, so the conjunction of a set of BTPNs (⋀S) cannot be computed in polytime. Lemma 6 shows that the disjunction of BTPNs (the A ∨ B and ⋁S operations) can be computed in polytime. The negation of the permutation constraint can be encoded as polysize BTPN; thus, the negation of a BTPN (¬A) cannot be computed in polytime. The singleton forgetting (SFO) and forgetting (FO) operations can be implemented by the projection operation, and every BTPN (X, C) also encodes the projection sol(X, C)[V] for any variables V ⊆ X; hence, the SFO and FO operations can be computed in polytime. In addition, the class of BTPNs is conservative; thus, the conditioning (CD) operation can be computed in polytime.
BTPN is strictly more succinct than DNNF, so if NP ≠ P, there is no polytime algorithm to compute the validity (VA), implicant (IM), equivalence (EQ), sentential entailment (SE) and model counting (CT) queries (Darwiche and Marquis 2002). The class of BTPNs is conservative and solvable by AC; thus, the consistency (CO) and clausal entailment (CE) queries can be computed in polytime. The time complexity of computing the model enumeration (ME) query on a BTPN P is polynomial in the sum of |sol(P)| and the size of P, as CO and CD are tractable (Darwiche and Marquis 2002).
Lemma 6. The disjunction of a set of BTPNs {(X_1, C_1), ..., (X_k, C_k)} can be computed in polytime.
Proof. Assume X = ⋃_{i=1}^{k} X_i, X = {x_1, ..., x_n}, and for all i ∈ [1, k], P_i = (X, C_i) satisfies BTP w.r.t. an ordering O^i over X. Let Y = {y_1, ..., y_n} and Z = {z_1, ..., z_n}. For all 1 ≤ j ≤ n, D(y_j) = {a^i | a ∈ D(O^i_j), i ∈ [1, k]} merges the domains of the k variables O^1_j, ..., O^k_j, and D(z_j) = {a^i | a ∈ D(x_j), i ∈ [1, k]} consists of k copies of the domain of x_j.
We can construct a BCN P over X ∪ Y ∪ Z where (i) {(y_{j_1}, a^{i_1}), (y_{j_2}, b^{i_2})} is consistent on P iff i_1 = i_2 and {(O^{i_1}_{j_1}, a), (O^{i_1}_{j_2}, b)} is consistent on P_{i_1}; (ii) y_{j_1} = a^i ⇔ z_{j_2} = a^i for all i, j_1, j_2 such that O^i_{j_1} = x_{j_2}; and (iii) z_j ∈ {a^i | i ∈ [1, k]} ⇔ x_j = a for all x_j ∈ X and a ∈ D(x_j). For each i ∈ [1, k] and each solution τ of P_i, τ corresponds to a solution τ ∪ {(y_j, a^i) | (O^i_j, a) ∈ τ} ∪ {(z_j, a^i) | (x_j, a) ∈ τ} of P, and sol(P)[X] = ⋃_{i=1}^{k} sol(P_i).
Let O be the ordering y_1 < ··· < y_n < z_1 < ··· < z_n < x_1 < ··· < x_n. There is no constraint between the variables in Z, and each variable x_j is constrained only by z_j on P; thus, if P does not satisfy BTP w.r.t. O, then P must have a BT {(y_{j_1}, a^i_1), (y_{j_2}, a^i_2), (v_{j_3}, a^i_3), (v_{j_3}, a^i_4)} where v_{j_3} is in {y_{j_3}, z_{j_3}} and y_{j_1} < y_{j_2} < v_{j_3} is a subsequence of O. In addition, (z_{j_3}, a^i_3) and (z_{j_3}, a^i_4) are constrained only by a variable y_{j_4} in Y (constraint (ii)) such that O^i_{j_4} = x_{j_3}. Hence, v_{j_3} is y_{j_3} and the BT is {(y_{j_1}, a^i_1), (y_{j_2}, a^i_2), (y_{j_3}, a^i_3), (y_{j_3}, a^i_4)}. However, this implies that {(O^i_{j_1}, a^i_1), (O^i_{j_2}, a^i_2), (O^i_{j_3}, a^i_3), (O^i_{j_3}, a^i_4)} is a BT on P_i (a contradiction). So P satisfies BTP w.r.t. O.
P is a polysize BTPN which encodes the disjunction ⋁_{i=1}^{k} P_i; therefore, the disjunction of the BTPNs {(X_1, C_1), ..., (X_k, C_k)} can be computed in polytime.

A∧B   A∨B   ⋀S   ⋁S   ¬A   SFO   FO   CD
◦     ✓     •    ✓    •    ✓     ✓    ✓

CO   VA   CE   IM   EQ   SE   CT   ME
✓    ◦    ✓    ◦    ◦    ◦    ◦    ✓

Table 2: Operations and queries on BTPN: ✓ (•, ◦) means the complexity of computing an operation or query is in polytime (not in polytime, not in polytime unless NP = P).

9 Discussion
Many tractable classes have been proposed that generalize the class of BTPNs (Cohen et al. 2012; Naanaa 2013; Cohen et al. 2015; Jégou and Terrioux 2015; Cooper, Jégou, and Terrioux 2015; Cooper and Živný 2016). In the future, it is worth investigating whether those tractable classes can further improve the succinctness of BTPN. Usually, the efficiency of propagators is affected by constraint size; thus, an interesting line of research is exploring the most succinct tractable classes having polytime GAC propagators. Moreover, it is also possible to push the study of structural classes forward by identifying more succinct tractable classes, since BCT is already as succinct as the most general known tractable structural class.
We show that BTPN is strictly more succinct than DNNF, while it is as powerful as DNNF in terms of computing operations and queries. We therefore propose BTPN as an interesting alternative to DNNF from a knowledge compilation perspective. This allows exploring subclasses of BTPNs to identify valuable alternatives to the subsets of DNNF in the knowledge compilation map (Darwiche and Marquis 2002).
The circuit complexity of the consistency checker is very useful for separating constraint representations. It can separate the DNNF, smart table and segmented table constraints from the permutation constraint and systems of XOR constraints, and correspondingly several questions posed in (Fargier and Marquis 2008; Wang and Yap 2023) can be resolved. In addition, various rules based on BTP and forbidden patterns have been proposed to merge values and eliminate variables (Cohen et al. 2013; Cooper et al. 2014; Cooper, El Mouelhi, and Terrioux 2016, 2019). A future research direction is to explore whether these rules can be extended to reduce BTPNs.

10 Conclusion
In this paper, we propose to encode constraints as Binary Constraint Networks satisfying the Broken Triangle Property (called BTPNs).
We prove that the consistency checker of the BTPN constraint can be computed by polysize monotone circuit; thereby, some global constraints cannot be encoded as polysize BTPN, such as the permutation, circuit and linear constraints. Then we show that BTPN is strictly more succinct than the DNNF constraint and all 14 ad-hoc constraints discussed in (Wang and Yap 2023). Moreover, we prove that the BTPN minimization problem is NP-hard. Finally, we also investigate the tractability of various operations and queries on the BTPN constraint.

Acknowledgements
We thank Prof. Roland Yap for his useful comments. This work was supported by the grant T1 251RES2219.

References
Alon, N.; and Boppana, R. B. 1987. The monotone circuit complexity of Boolean functions. Combinatorica, 7: 1–22.
Amilhastre, J.; Fargier, H.; Niveau, A.; and Pralet, C. 2014. Compiling CSPs: A complexity map of (non-deterministic) multivalued decision diagrams. International Journal on Artificial Intelligence Tools, 23(04): 1460015.
Audemard, G.; Lecoutre, C.; and Maamar, M. 2020. Segmented tables: An efficient modeling tool for constraint reasoning. In European Conference on Artificial Intelligence, 315–322.
Beldiceanu, N.; and Contejean, E. 1994. Introducing global constraints in CHIP. Mathematical and Computer Modelling, 20(12): 97–123.
Bessiere, C.; Katsirelos, G.; Narodytska, N.; and Walsh, T. 2009. Circuit complexity and decompositions of global constraints. In International Joint Conference on Artificial Intelligence, 412–418.
Bodirsky, M.; and Grohe, M. 2008. Non-dichotomies in constraint satisfaction complexity. In International Colloquium on Automata, Languages and Programming, 184–196.
Bova, S.; Capelli, F.; Mengel, S.; and Slivovsky, F. 2014. Expander CNFs have exponential DNNF size. CoRR, abs/1411.1995.
Bova, S.; Capelli, F.; Mengel, S.; and Slivovsky, F. 2016. Knowledge compilation meets communication complexity. In International Joint Conference on Artificial Intelligence, volume 16, 1008–1014.
Bulatov, A. A. 2017. A dichotomy theorem for nonuniform CSPs. In IEEE 58th Annual Symposium on Foundations of Computer Science, 319–330. IEEE.
Carbonnel, C.; and Cooper, M. C. 2016. Tractability in constraint satisfaction problems: a survey. Constraints, 21(2): 115–144.
Cheng, B.; Choi, K. M. F.; Lee, J. H.-M.; and Wu, J. 1999. Increasing constraint propagation by redundant modeling: an experience report. Constraints, 4: 167–192.
Cheng, K.; and Yap, R. H. C. 2010. An MDD-based generalized arc consistency algorithm for positive and negative table constraints and some global constraints. Constraints, 15(2): 265–304.
Cohen, D.; Cooper, M.; Escamocher, G.; and Živný, S. 2013. Variable elimination in binary CSP via forbidden patterns. In International Joint Conference on Artificial Intelligence, 517–523.
Cohen, D. A.; Cooper, M. C.; Creed, P.; Marx, D.; and Salamon, A. Z. 2012. The tractability of CSP classes defined by forbidden patterns. Journal of Artificial Intelligence Research, 45: 47–78.
Cohen, D. A.; Cooper, M. C.; Jeavons, P. G.; and Živný, S. 2015. Tractable classes of binary CSPs defined by excluded topological minors. In International Joint Conference on Artificial Intelligence, 1945–1951.
Cooper, M. C.; El Mouelhi, A.; and Terrioux, C. 2016. Extending broken triangles and enhanced value-merging. In International Conference on Principles and Practice of Constraint Programming, 173–188.
Cooper, M. C.; El Mouelhi, A.; and Terrioux, C. 2019.
Variable elimination in binary CSPs. Journal of Artificial Intelligence Research, 66: 589–624.
Cooper, M. C.; El Mouelhi, A.; Terrioux, C.; and Zanuttini, B. 2014. On broken triangles. In International Conference on Principles and Practice of Constraint Programming, 9–24.
Cooper, M. C.; Jeavons, P. G.; and Salamon, A. Z. 2010. Generalizing constraint satisfaction on trees: Hybrid tractability and variable elimination. Artificial Intelligence, 174(9-10): 570–584.
Cooper, M. C.; Jégou, P.; and Terrioux, C. 2015. A microstructure-based family of tractable classes for CSPs. In International Conference on Principles and Practice of Constraint Programming, 74–88.
Cooper, M. C.; and Živný, S. 2016. The power of arc consistency for CSPs defined by partially-ordered forbidden patterns. In The 31st Annual ACM/IEEE Symposium on Logic in Computer Science, 652–661.
Darwiche, A. 1999. Compiling knowledge into decomposable negation normal form. In International Joint Conference on Artificial Intelligence, 284–289.
Darwiche, A.; and Marquis, P. 2002. A knowledge compilation map. Journal of Artificial Intelligence Research, 17: 229–264.
Dechter, R. 1990. On the expressiveness of networks with hidden variables. In AAAI Conference on Artificial Intelligence, 556–562.
Dechter, R.; and Pearl, J. 1987. Network-based heuristics for constraint-satisfaction problems. Artificial Intelligence, 34(1): 1–38.
Dechter, R.; and Pearl, J. 1989. Tree clustering for constraint networks. Artificial Intelligence, 38(3): 353–366.
Demeulenaere, J.; Hartert, R.; Lecoutre, C.; Perez, G.; Perron, L.; Régin, J.-C.; and Schaus, P. 2016. Compact-Table: efficiently filtering table constraints with reversible sparse bit-sets. In International Conference on Principles and Practice of Constraint Programming, 207–223.
Fargier, H.; and Marquis, P. 2008. Extending the knowledge compilation map: Krom, Horn, affine and beyond. In AAAI Conference on Artificial Intelligence, 442–447.
Feder, T.; and Vardi, M. Y. 1998. The computational structure of monotone monadic SNP and constraint satisfaction: A study through Datalog and group theory. SIAM Journal on Computing, 28(1): 57–104.
Freuder, E. C. 1982. A sufficient condition for backtrack-free search. Journal of the ACM, 29(1): 24–32.
Gange, G.; and Stuckey, P. J. 2012. Explaining propagators for s-DNNF circuits. In International Conference on Integration of AI and OR Techniques in Constraint Programming, 195–210.
Gharbi, N.; Hemery, F.; Lecoutre, C.; and Roussel, O. 2014. Sliced table constraints: Combining compression and tabular reduction. In International Conference on Integration of Artificial Intelligence and Operations Research Techniques in Constraint Programming, 120–135.
Grohe, M. 2007. The complexity of homomorphism and constraint satisfaction problems seen from the other side. Journal of the ACM, 54(1): 1–24.
Grohe, M.; and Marx, D. 2014. Constraint solving via fractional edge covers. ACM Transactions on Algorithms, 11(1): 4.
Jeavons, P.; Cohen, D.; and Gyssens, M. 1997. Closure properties of constraints. Journal of the ACM, 44(4): 527–548.
Jefferson, C.; and Nightingale, P. 2013. Extending simple tabular reduction with short supports. In International Joint Conference on Artificial Intelligence, 573–579.
Jégou, P.; and Terrioux, C. 2015. The extendable-triple property: a new CSP tractable class beyond BTP. In AAAI Conference on Artificial Intelligence, 3746–3754.
Katsirelos, G.; and Walsh, T. 2007.
A compression algorithm for large arity extensional constraints. In International Conference on Principles and Practice of Constraint Programming, 379–393.
Kučera, P. 2023. Binary Constraint Trees and Structured Decomposability. In International Conference on Principles and Practice of Constraint Programming, 22:1–22:19.
Mairy, J.-B.; Deville, Y.; and Lecoutre, C. 2015. The smart table constraint. In International Conference on Integration of Artificial Intelligence and Operations Research Techniques in Constraint Programming, 271–287.
Naanaa, W. 2013. Unifying and extending hybrid tractable classes of CSPs. Journal of Experimental & Theoretical Artificial Intelligence, 25(4): 407–424.
Orlin, J. 1977. Contentment in graph theory: covering graphs with cliques. Indagationes Mathematicae (Proceedings), 80(5): 406–424.
Pachet, F.; and Roy, P. 1999. Automatic generation of music programs. In International Conference on Principles and Practice of Constraint Programming, 331–345.
Pesant, G. 2004. A regular language membership constraint for finite sequences of variables. In International Conference on Principles and Practice of Constraint Programming, 482–495.
Quimper, C.-G.; and Walsh, T. 2006. Global grammar constraints. In International Conference on Principles and Practice of Constraint Programming, 751–755.
Quimper, C.-G.; and Walsh, T. 2007. Decomposing global grammar constraints. In International Conference on Principles and Practice of Constraint Programming, 590–604.
Razborov, A. 1985. Lower bounds on the monotone complexity of some Boolean functions. In Soviet Math. Dokl., volume 31, 354–357.
Régin, J.-C. 1994. A filtering algorithm for constraints of difference in CSPs. In AAAI Conference on Artificial Intelligence, 362–367.
Régin, J.-C. 1996. Generalized arc consistency for global cardinality constraint. In AAAI Conference on Artificial Intelligence and Innovative Applications of Artificial Intelligence Conference, 209–215.
Rossi, F.; Petrie, C. J.; and Dhar, V. 1990. On the equivalence of constraint satisfaction problems. In European Conference on Artificial Intelligence, 550–556.
Stergiou, K.; and Walsh, T. 1999. Encodings of non-binary constraint satisfaction problems. In AAAI Conference on Artificial Intelligence, 163–168.
Trick, M. A. 2003. A dynamic programming approach for consistency and propagation for knapsack constraints. Annals of Operations Research, 118: 73–84.
Verhaeghe, H.; Lecoutre, C.; Deville, Y.; and Schaus, P. 2017. Extending Compact-Table to basic smart tables. In International Conference on Principles and Practice of Constraint Programming, 297–307.
Verhaeghe, H.; Lecoutre, C.; and Schaus, P. 2018. Compact-MDD: Efficiently filtering (s)MDD constraints with reversible sparse bit-sets. In International Joint Conference on Artificial Intelligence, 1383–1389.
Walsh, T. 2001. Permutation problems and channelling constraints. In International Conference on Logic for Programming, Artificial Intelligence and Reasoning, 377–391.
Wang, R.; Xia, W.; Yap, R. H. C.; and Li, Z. 2016. Optimizing simple tabular reduction with a bitwise representation. In International Joint Conference on Artificial Intelligence, 787–795.
Wang, R.; and Yap, R. H. 2023. The expressive power of ad-hoc constraints for modelling CSPs. In AAAI Conference on Artificial Intelligence, 4104–4114.
Wang, R.; and Yap, R. H. C. 2019. Arc consistency revisited. In International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research, 599–615.
Wang, R.; and Yap, R. H. C. 2020. Bipartite encoding: A new binary encoding for solving non-binary CSPs. In International Joint Conference on Artificial Intelligence, 1184– 1191. Wang, R.; and Yap, R. H. C. 2022a. CNF encodings of binary constraint trees. In International conference on principles and practice of constraint programming. Wang, R.; and Yap, R. H. C. 2022b. Encoding multi-valued decision diagram constraints as binary constraint Trees. In AAAI Conference on Artificial Intelligence, 3850–3858. Yap, R. H. C.; Xia, W.; and Wang, R. 2020. Generalized arc consistency algorithms for table constraints: A summary of algorithmic ideas. In AAAI Conference on Artificial Intelligence, 13590–13597. Yuanlin, Z.; and Yap, R. H. 2000. Arc consistency on nary monotonic and linear constraints. In International Conference on Principles and Practice of Constraint Programming, 470–483. Zhuk, D. 2017. A proof of CSP dichotomy conjecture. In IEEE 58th Annual Symposium on Foundations of Computer Science, 331–342. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8181
2024
909
18,750
Learning Generalized Medical Image Segmentation from Decoupled Feature Queries
Qi Bi1,2*, Jingjun Yi1,2*, Hao Zheng1†, Wei Ji3, Yawen Huang1, Yuexiang Li4†, Yefeng Zheng1
1Jarvis Research Center, Tencent YouTu Lab, ShenZhen, China
2School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
3Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada
4Medical AI ReSearch (MARS) Group, Guangxi Medical University, Nanning, China
{q bi, rsjingjuny}@whu.edu.cn, [email protected], {howzheng, yefengzheng}@tencent.com
*These authors contributed equally. †Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Domain generalized medical image segmentation requires models to learn from multiple source domains and generalize well to arbitrary unseen target domains. Such a task is both technically challenging and clinically practical, due to the domain shift problem (i.e., images are collected from different hospitals and scanners). Existing methods focused on either learning shape-invariant representation or reaching consensus among the source domains. An ideal generalized representation is supposed to show similar pattern responses within the same channel for cross-domain images. However, to deal with the significant distribution discrepancy, the network tends to capture similar patterns by multiple channels, while different cross-domain patterns are also allowed to rest in the same channel. To address this issue, we propose to leverage channel-wise decoupled deep features as queries. With the aid of the cross-attention mechanism, the long-range dependency between deep and shallow features can be fully mined via self-attention and then guides the learning of generalized representation. Besides, a relaxed deep whitening transformation is proposed to learn channel-wise decoupled features in a feasible way. The proposed decoupled feature query (DFQ) scheme can be seamlessly integrated into the Transformer segmentation model in an end-to-end manner. Extensive experiments show its state-of-the-art performance, notably outperforming the runner-up by 1.31% and 1.98% with the DSC metric on generalized fundus and prostate benchmarks, respectively. Source code is available at https://github.com/BiQiWHU/DFQ.

Introduction
Despite the rapid development of deep learning techniques, most existing medical image segmentation approaches assume that the training and testing samples follow the same statistical distribution. Unfortunately, this assumption may not be fulfilled in many practical medical scenarios. In practice, it is notoriously taxing and expertise-demanding to annotate large amounts of segmentation ground truth (Wang et al. 2020; Ouyang et al. 2020; Zhou, Qi, and Shi 2022; Cui et al. 2021; Yao, Hu, and Li 2022). In this regard, medical images are usually collected from a variety of hospitals and annotated by different annotators with different levels of expertise (Ji et al. 2021; Reiß et al. 2021).

Figure 1: Key challenges to learn domain generalized medical segmentation (visualized with GradCAM). (1) Similar cross-domain features rest in multiple channels with redundancy (in blue boxes); (2) Cross-domain feature misalignment within the same channel (in red boxes).
Consequently, domain shift inevitably exists among these data sources, which places a high requirement on the generalization ability of medical image segmentation models.
In the past few years, the area of domain adaptation for medical image segmentation has been extensively studied. Its pre-requisite is that samples from the target domain are involved in training (Bian et al. 2021; You et al. 2022; Chen et al. 2019), and the resulting models can only generalize to the target domain seen in training (Zhou, Qi, and Shi 2022). In contrast, the domain generalization paradigm allows the learnt representation to be generalized to any unseen target domains, which significantly alleviates the aforementioned dilemma of annotated data (Liu et al. 2021a; Wang et al. 2020; Zhou, Qi, and Shi 2022; Hu et al. 2023). In general, learning domain generalized medical image segmentation is both technically and clinically significant, as it predicts reliable segmentation results from a variety of scanners, annotators and hospitals.
Existing domain generalized medical image segmentation methods can be summarized into two categories. One is to learn shape-invariant features from multiple source domains (Liu, Dou, and Heng 2020; Liu et al. 2021a), and the other is to explicitly learn the inter-domain shift among multiple source domains (Wang et al. 2020; Zhou, Qi, and Shi 2022; Hu et al. 2023). Unfortunately, these methods may not be able to handle the feature distribution variation on arbitrary unseen domains under different imaging conditions (e.g., illumination, image contrast, and scanning).
Due to the aforementioned domain shift problem, medical images from different domains may have dramatically different activation patterns within the same channel of a deep learning model (shown in red boxes of Fig. 1). The feature misalignment is particularly obvious for shallower features, which are more sensitive to the variation of imaging conditions. To capture the required pattern in each domain, the network tends to learn a similar pattern in multiple channels, which further leads to varying degrees of feature redundancy across images from different domains (shown in blue boxes of Fig. 1). Feature redundancy helps the model to perform well on the training data from various domains even in the presence of single-channel mismatches, while negatively affecting the generalization ability to unseen domains.
In this paper, we are motivated to address the feature misalignment, which helps the models to build a more expressive cross-domain medical image representation and improves their generalization on unseen target domains. First, we propose to minimize the channel-wise correlation among cross-domain medical images, which helps remove the feature redundancy and maximize the per-channel representation ability. A more expressive per-channel and less redundant representation in the feature encoding stage in turn benefits the generalization on arbitrary unseen target domains. To this end, we propose a relaxed deep whitening transformation, which can be integrated into existing deep segmentation models in a feasible and learnable fashion. On the other hand, the feature de-correlation may not necessarily warrant that medical images from different domains have similar activation patterns within the same channel.
To further address the intra-channel feature misalignment, we turn to the decoding stage, and propose to use the self-attention mechanism as an implicit constraint. Specifically, we use the decoupled deeper features as the query, and the shallow features as the key and value. The inherent long-range dependency between decoupled deeper and shallower features restricts the overall framework to learn domain generalized representation from scratch.
Overall, the proposed decoupled feature query (DFQ) learning scheme is integrated into Transformer segmentation models (Xie et al. 2021; Shim et al. 2023) in a learnable fashion. Extensive experiments validate the effectiveness of the proposed DFQ on two standard domain generalized medical image segmentation benchmarks, namely, optic cup/disk segmentation on fundus images (Wang et al. 2020) and prostate segmentation on magnetic resonance imaging (MRI) (Liu, Dou, and Heng 2020). On both benchmarks, samples from one domain are used as the unseen target domain, while samples from the rest domains are used as source domains. Finally, visualized segmentation predictions and feature space analysis are presented to further validate the effectiveness of the proposed method.
Our contributions can be summarized as follows.
• We propose to learn generalized medical image representation from decoupled feature queries (DFQ), which addresses the feature misalignment from cross-domain medical images. The proposed decoupled feature query scheme can be seamlessly integrated into Transformer segmentation models to achieve better domain generalization performance.
• A relaxed deep whitening transformation is proposed to de-correlate the features in a learnable and flexible way.
• The proposed framework outperforms the state-of-the-art by at least 1.31% and 1.98% DSC on the Fundus and Prostate benchmarks, respectively.

Related Work
Medical image segmentation has developed rapidly owing to the stronger representation from deep learning techniques (Bi et al. 2022; Ji et al. 2022; Li et al. 2021a). In the early deep learning era, U-Net (Ronneberger, Fischer, and Brox 2015) and its variants (Zhou et al. 2018; Azad et al. 2021; Daza, Pérez, and Arbeláez 2021) were dominant for medical image segmentation. Later on, DeepLab (Chen et al. 2017) and its modifications (Gu et al. 2019; Feng et al. 2022) became the dominant trend. More recently, the Vision Transformer (ViT) has shown stronger feature representation power than convolutional neural networks (Xie et al. 2021; Shim et al. 2023). Its self-attention mechanism is capable of mining long-range dependencies (Liu et al. 2021b). Consequently, ViT-based medical segmentation pipelines have recently drawn extensive attention (Cao et al. 2022; Gao, Zhou, and Metaxas 2021). In addition, medical image segmentation under weakly-supervised (Pan et al. 2022), semi-supervised (Wu et al. 2022) and multi-annotation (Ji et al. 2021) scenarios has also been studied.
Domain generalization has been extensively studied in both the computer vision and machine learning communities under non-task-specific settings (Xu et al. 2021; Mahajan, Tople, and Sharma 2021; Li et al. 2021b). On the other hand, domain generalized segmentation in the computer vision community usually focuses on driving scenes under the single domain generalization setting (Pan et al. 2018; Huang et al. 2019; Peng et al. 2022; Pan et al. 2019; Choi et al. 2021; Xu et al. 2022; Lee et al. 2022; Zhao et al. 2022; Zhong et al. 2022; Li et al.
2023; Bi, You, and Gevers 2023). In contrast, the key challenges in generalized medical image segmentation lie in the great style variations from multiple source domains for training.
Domain generalized medical image segmentation intends to learn a semantic representation generalized to any unseen target domain by learning from only source domains. Specifically, (Liu, Dou, and Heng 2020) incorporated meta-learning to learn shape-robust representation. (Zhang et al. 2020) proposed a deep stacked transformation, which augmented the images from all domains under a variety of data augmentation transformations. (Liu et al. 2021a) proposed a boundary-oriented episodic learning to enhance the shape robustness. (Wang et al. 2020) focused on learning robust medical semantics under the style variations. (Zhou, Qi, and Shi 2022) used the mixup strategy to enhance the shape diversity, and an additional reconstruction head to learn the domain diversity. (Hu et al. 2023) focused on content enhancement for generalized medical image segmentation.

Figure 2: Framework of the proposed Decoupled Feature Query (DFQ). After feature extraction from a Transformer encoder, it consists of three key steps, namely, reducing channel redundancy, learning from decoupled feature queries, and generalized representation learning. Reducing channel redundancy is implemented by our proposed relaxed deep whitening transformation.

Methodology
Problem Formulation & Overview
Given $K$ source domains $D_1, D_2, \cdots, D_K$ and an unseen target domain $D_{K+1}$. For domain $D_k$, the joint image and label pair is denoted as $\{(x_n^{(k)}, y_n^{(k)})\}_{n=1}^{N_k}$, where $k = 1, 2, \cdots, K$, and $N_k$ refers to the sample number in domain $D_k$. The learning objective is to learn a segmentation model $F_\theta : x \to y$ using all the source domains $D_1, D_2, \cdots, D_K$, and generalize well on the unseen target domain $D_{K+1} = \{(x_n^{(K+1)})\}_{n=1}^{N_{K+1}}$.
Fig. 2 gives an overview of the proposed framework, where $F_1 \in \mathbb{R}^{C_1 \times H_1 W_1}$, $F_2 \in \mathbb{R}^{C_2 \times H_2 W_2}$, $F_3 \in \mathbb{R}^{C_3 \times H_3 W_3}$ and $F_4 \in \mathbb{R}^{C_4 \times H_4 W_4}$ denote the image features from the first, second, third and fourth Transformer block (from high to low resolution), respectively. After feature encoding of the medical image, three key steps (namely reducing channel redundancy, learning from de-correlated feature queries, and decoding generalized representation) are involved to yield the robust feature for domain generalized medical image segmentation.

Reducing Channel Redundancy
To deal with the distribution discrepancy across domains, a deep neural network tends to extract similar patterns in multiple channels, which inevitably leads to feature redundancy. The decoupling of channel-wise correlation can reduce the redundancy and maximize per-channel representation ability, which helps to learn more expressive features generalizable to arbitrary source domains and unseen target domains. A common channel de-correlation approach is the whitening transformation (Li et al. 2017).
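Concretely, whitening centers each channel and multiplies by the inverse square root of the channel covariance, which the equations below state formally for the first-block feature. As a point of reference, a minimal NumPy sketch of this standard operation; the (C, H*W) layout and the eps guard are illustrative choices, not the paper's implementation:

import numpy as np

def whiten(F, eps=1e-5):
    # F: feature map of shape (C, H*W); one row per channel
    C, N = F.shape
    mu = F.mean(axis=1, keepdims=True)   # per-channel mean
    Fc = F - mu                          # centered features
    cov = (Fc @ Fc.T) / N                # channel covariance, (C, C)
    # inverse square root of the covariance via eigendecomposition;
    # eps guards against vanishing eigenvalues
    w, V = np.linalg.eigh(cov)
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T
    return inv_sqrt @ Fc                 # whitened feature with de-correlated channels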
Take the image feature from the first block of the Transformer encoder $F_1 \in \mathbb{R}^{C_1 \times H_1 W_1}$ as an example. Its whitened feature $\tilde{F}_1$ can be mathematically computed as
$\tilde{F}_1 = \Sigma_\mu^{-\frac{1}{2}} (F_1 - \mu \cdot 1^T)$, (1)
where the mean vector and covariance matrix can be computed as
$\mu = \frac{1}{HW} F_1 \cdot 1 \in \mathbb{R}^{C_1 \times 1}$, (2)
$\Sigma_\mu = \frac{1}{HW} (F_1 - \mu \cdot 1^T)(F_1 - \mu \cdot 1^T)^T \in \mathbb{R}^{C_1 \times C_1}$. (3)
To integrate the whitening transformation in a learnable fashion, a basic solution is to follow (Cho et al. 2019). The so-called deep whitening transformation (DWT) drives $\Sigma_\mu$ towards the identity matrix $I \in \mathbb{R}^{C_1 \times C_1}$,
$L_{DWT}^{F_1} = E[\|\Sigma_\mu - I\|_1]$. (4)
Let $\Sigma_\mu(i, i)$ and $\Sigma_\mu(i, j)$ denote a diagonal and an off-diagonal element of $\Sigma_\mu$ respectively, where $0 \leq i, j \leq N$, and $i \neq j$. Then, denoting $F_1^\dagger = F_1 - \mu \cdot 1^T$, Eq. 4 can be decomposed as
$\|\Sigma(i, i) - 1\|_1 = \left\| \frac{|F_{1,i}^\dagger||F_{1,i}^\dagger|}{W \cdot H} - 1 \right\|_1$, (5)
$\|\Sigma(i, j)\|_1 = \left\| \frac{|F_{1,i}^\dagger||F_{1,j}^\dagger| \cos\theta}{W \cdot H} \right\|_1$. (6)
Eq. 5 poses a numerical constraint on the diagonal, and ideally, the impact of Eq. 6 in the feature space is to decouple $F_{1,i}^\dagger$ and $F_{1,j}^\dagger$ by forcing the off-diagonal to be orthogonal. However, $\|\Sigma(i, j)\|_1$ can also be reduced by decreased $|F_{1,i}^\dagger|$ or $|F_{1,j}^\dagger|$, providing a shortcut for reducing $L_{DWT}^{F_1}$ when learning decorrelated representations is extremely hard for some channels. Consequently, it can be found that channel correlation still exists in the results of DWT. To resolve this problem, we normalize $F_1$ by
$\tilde{F}_1 = \frac{F_1 - \mu \cdot 1^T}{\sigma \cdot 1^T}$ (7)
before calculating the covariance matrix. After that, $|\tilde{F}_{1,i}| = 1$, so $\|\Sigma(i, i) - 1\|_1$ becomes a constant and $\|\Sigma(i, j)\|_1$ is only correlated with the angle between two channels. We can use a strict upper triangular matrix $U$ to approximate the learning objective,
$U(i,j) = \begin{cases} 1 & i < j \\ 0 & i \geq j \end{cases}, \quad 0 \leq i, j \leq N$, (8)
$L_{RDWT}^{F_1} = E[\|\Sigma_\mu \odot U\|_1]$, (9)
where $\odot$ denotes the Hadamard product. Compared with the original DWT, the magnitude constraint on the diagonal is relaxed by the normalized input. $L_{RDWT}^{F_1}$ only focuses on the correlation between channels and is more effective in reducing the feature redundancy. Moreover, under the supervision of $L_{RDWT}^{F_1}$, $\tilde{F}_1$ can be directly used as the whitening-transformed feature in the following. For $F_2$, $F_3$ and $F_4$, similarly, we can learn their whitening transformation by minimizing $L_{RDWT}^{F_2}$, $L_{RDWT}^{F_3}$ and $L_{RDWT}^{F_4}$, respectively. For simplicity, their whitened counterparts are denoted as $\tilde{F}_2$, $\tilde{F}_3$ and $\tilde{F}_4$, respectively.

Learning from De-correlated Feature Queries
The channel-wise decoupled features enhance the representation ability of deep neural networks in cross-domain scenarios. However, the relaxed whitening transformation loss cannot warrant that medical images from different domains show similar per-channel feature response, which is also crucial for learning a generalized model for unseen target datasets. To this end, we turn to the long-range dependency inherent in the self-attention mechanism.
Compared with deep semantic features, the shallow features directly face the distribution discrepancy across domains, resulting in more severe intra-channel misalignment. When decoding the high-level features, the query is generated from deep features while the key and value are based on shallow features. The feature misalignment in shallow layers can lead to different attention maps in such a self-attention process, which further results in unstable feature aggregation for the decoding of deep representations.
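Before turning to the decoder, the relaxed whitening objective of Eqs. (7)-(9) can be summarized in a short PyTorch-style sketch. This is our illustrative reading of the equations rather than the authors' released code; the eps term and the averaging over the masked entries are assumptions:

import torch

def rdwt_loss(F, eps=1e-5):
    # F: feature map of shape (C, H*W) from one image
    C, N = F.shape
    mu = F.mean(dim=1, keepdim=True)
    sigma = F.std(dim=1, keepdim=True)
    Fn = (F - mu) / (sigma + eps)        # per-channel standardization, Eq. (7)
    cov = (Fn @ Fn.t()) / N              # covariance of standardized channels
    U = torch.triu(torch.ones(C, C, device=F.device), diagonal=1)
    # penalize only the strictly upper-triangular, i.e. off-diagonal,
    # correlations, Eqs. (8)-(9); averaged over the masked entries
    return (cov * U).abs().sum() / U.sum()

During training, this loss would be evaluated on the features of each of the four blocks and summed, as in Eq. (14) below.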
Under this correlation, the deep feature queries impose an implicit constraint on the consistency of shallow representations across different domains.
Specifically, for the channel-wise decoupled features $\tilde{F}_i$ from the $i$-th Transformer block, where $i = 2, 3, 4$, a linear transformation $f_Q^i$ is used to generate the query,
$Q_i = f_Q^i(\tilde{F}_i)$. (10)
For the channel-wise decoupled features from the first Transformer block, the key and value can be computed as
$K = f_K(\tilde{F}_1), \quad V = f_V(\tilde{F}_1)$, (11)
where $f_K$ and $f_V$ are the linear transformations to generate the key and value, respectively. Then, the cross-attention for the features from the $i$-th Transformer block can be computed as
$\mathrm{Attention}(Q_i, K, V) = \mathrm{Softmax}\left(\frac{Q_i K^\top}{\sqrt{d_k}}\right) V$, (12)
where Softmax denotes the softmax normalization function. After the feed-forward layer and normalization, let $F'_1$, $F'_2$, $F'_3$ and $F'_4$ denote the learnt generalized representation from the first, second, third and fourth Transformer blocks, respectively.

Decoding Generalized Representation
The final step is to decode these generalized representations for the medical segmentation prediction. This feature fusion is implemented by a linear layer parameterized by weight $W_1$ and bias $b_1$, presented as
$F = W_1[F'_1, F'_2, F'_3, F'_4] + b_1$, (13)
where $[\cdot, \cdot]$ denotes the concatenation function. Then, $F$ is fed into the semantic segmentation head for the final prediction. The standard binary cross-entropy loss and Dice loss, which for simplicity we denote as $L_{seg}$, are used to minimize the difference between the final prediction and the ground truth. The total loss function $L$ is a combination of $L_{seg}$ and $L_{RDWT}^i$ for each feature,
$L = L_{seg} + \lambda \cdot \sum_{i=1}^{4} L_{RDWT}^i$, (14)
where we set $\lambda$ to $1 \times 10^{-4}$. Notice that $L_{RDWT}^i$ is a channel-wise sum while $L_{seg}$ is sample-wise and single-channel.

Method | Domain-1 | Domain-2 | Domain-3 | Domain-4 | Domain-5 | Domain-6 | Average (each cell: DSC↑ / ASD↓)
Intra-domain | 89.53 / 1.39 | 88.42 / 1.44 | 87.65 / 1.67 | 83.01 / 3.58 | 83.39 / 2.99 | 84.97 / 2.00 | 86.16 / 2.18
DeepAll | 89.16 / 2.09 | 87.31 / 1.27 | 74.12 / 3.02 | 88.85 / 2.36 | 83.22 / 3.51 | 88.39 / 1.67 | 85.18 / 2.32
BigAug | 90.68 / 1.80 | 89.52 / 1.00 | 84.86 / 1.86 | 89.04 / 1.59 | 73.24 / 5.94 | 89.10 / 1.16 | 86.07 / 2.23
SAML | 91.00 / 1.26 | 89.26 / 1.12 | 85.76 / 1.87 | 89.60 / 1.21 | 81.60 / 3.29 | 89.91 / 0.96 | 87.86 / 1.62
FedDG | 91.41 / 1.29 | 89.95 / 0.97 | 85.10 / 2.63 | 89.13 / 1.51 | 76.69 / 4.52 | 90.63 / 1.03 | 87.15 / 1.99
DoFE | 89.79 / 1.33 | 87.42 / 1.57 | 84.90 / 2.13 | 88.56 / 1.52 | 86.47 / 1.93 | 87.72 / 1.33 | 87.48 / 1.64
RAM-DSIR | 87.56 / 1.04 | 90.20 / 0.81 | 86.92 / 2.23 | 88.72 / 1.16 | 87.17 / 1.81 | 87.93 / 1.15 | 88.08 / 1.37
DCAC | 91.76 / 0.98 | 90.51 / 0.89 | 86.30 / 1.77 | 89.13 / 1.53 | 83.39 / 2.46 | 90.56 / 0.85 | 88.61 / 1.41
DFQ (Ours) | 88.28 / 0.84 | 91.66 / 0.63 | 89.00 / 2.24 | 90.16 / 0.67 | 89.57 / 1.43 | 90.83 / 0.59 | 89.92 / 1.07
Table 1: Performance comparison of the proposed method and existing methods on domain generalized prostate segmentation.
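To make the decoder of Eqs. (10)-(13) concrete, here is a schematic PyTorch sketch. It is a simplified single-head reading of the equations; the channel widths, the module names, and the assumption that all blocks share a common token count before fusion are ours, not the paper's:

import torch
import torch.nn as nn

class DFQDecoder(nn.Module):
    # Deep, channel-decoupled features supply the queries (Eq. 10);
    # the shallow first-block features supply key and value (Eq. 11).
    def __init__(self, dims=(64, 128, 320, 512), d=256):
        super().__init__()
        self.f_q = nn.ModuleList(nn.Linear(c, d) for c in dims[1:])
        self.f_k = nn.Linear(dims[0], d)
        self.f_v = nn.Linear(dims[0], d)
        self.proj1 = nn.Linear(dims[0], d)       # bring F1 to the common width
        self.fuse = nn.Linear(len(dims) * d, d)  # linear fusion, Eq. (13)

    def forward(self, feats):
        # feats: list of 4 token tensors (B, N, C_i), shallow block first;
        # we assume all blocks were resized to a common token count N
        K, V = self.f_k(feats[0]), self.f_v(feats[0])
        outs = [self.proj1(feats[0])]
        for f_q, F in zip(self.f_q, feats[1:]):
            Q = f_q(F)                           # query from deep features
            attn = torch.softmax(Q @ K.transpose(1, 2) / K.shape[-1] ** 0.5, dim=-1)
            outs.append(attn @ V)                # cross-attention, Eq. (12)
        return self.fuse(torch.cat(outs, dim=-1))

A segmentation head and the combined loss of Eq. (14) would follow; both are omitted here for brevity.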
Method | Domain-1 | Domain-2 | Domain-3 | Domain-4 | Average (each cell: DSC↑ / ASD↓)
Intra-domain | 80.06 / 20.13 | 73.13 / 24.91 | 83.80 / 11.20 | 84.46 / 8.99 | 86.46 / 13.58
DeepAll | 79.04 / 20.32 | 73.02 / 24.99 | 82.26 / 12.01 | 84.85 / 8.39 | 85.75 / 13.91
BigAug | 80.37 / 19.50 | 74.73 / 22.64 | 85.39 / 10.07 | 86.47 / 8.32 | 86.88 / 13.25
SAML | 81.03 / 19.31 | 76.61 / 19.31 | 85.40 / 9.99 | 86.06 / 8.86 | 87.60 / 12.36
FedDG | 81.66 / 18.79 | 76.31 / 19.98 | 85.23 / 10.86 | 85.27 / 8.94 | 87.29 / 12.64
DoFE | 81.95 / 18.59 | 78.31 / 16.40 | 85.51 / 10.06 | 86.61 / 8.28 | 88.14 / 11.61
RAM-DSIR | 85.48 / 16.05 | 78.82 / 14.01 | 87.44 / 9.02 | 85.84 / 8.29 | 88.94 / 10.32
DCAC | 81.43 / 19.20 | 77.72 / 17.15 | 86.80 / 9.14 | 87.68 / 7.12 | 88.47 / 11.32
DFQ (Ours) | 87.30 / 15.72 | 81.92 / 13.05 | 88.95 / 7.70 | 87.47 / 6.55 | 90.57 / 9.52
Table 2: Performance comparison of the proposed method and existing methods on domain generalized optic cup segmentation. Average refers to the average of optic cup & disk results on all four settings. Best performance is highlighted in bold.

Method | Domain-1 | Domain-2 | Domain-3 | Domain-4 (each cell: DSC↑ / ASD↓)
Intra-domain | 95.82 / 7.53 | 87.79 / 18.75 | 93.20 / 9.64 | 93.41 / 7.51
DeepAll | 95.82 / 7.63 | 87.34 / 18.70 | 91.37 / 11.40 | 92.27 / 7.83
BigAug | 95.59 / 7.75 | 87.40 / 18.89 | 92.04 / 11.09 | 93.05 / 7.75
SAML | 95.74 / 7.66 | 87.29 / 19.20 | 93.92 / 8.62 | 94.76 / 5.90
FedDG | 95.47 / 7.81 | 86.34 / 19.57 | 93.36 / 9.12 | 94.68 / 6.02
DoFE | 96.04 / 7.05 | 89.20 / 15.75 | 93.23 / 9.76 | 94.28 / 6.99
RAM-DSIR | 95.75 / 7.12 | 89.43 / 13.86 | 94.67 / 7.11 | 94.10 / 7.06
DCAC | 96.54 / 6.35 | 87.85 / 18.28 | 94.28 / 8.11 | 95.40 / 5.20
DFQ (Ours) | 96.50 / 6.01 | 92.52 / 12.09 | 95.04 / 7.05 | 94.85 / 5.84
Table 3: Performance comparison of the proposed method and existing methods on domain generalized optic disk segmentation. Best performance is highlighted in bold.

Implementation Details
Mix Transformer (MiT-B3) (Xie et al. 2021) is used as the backbone. For the final MLP before the segmentation head, the embedding dimension is set to 768. Following prior work (Zhou, Qi, and Shi 2022), the model was trained for 400 epochs with an initial learning rate of 5 × 10^-4 on the Fundus benchmark, and for 200 epochs with an initial learning rate of 3 × 10^-4 on the Prostate benchmark. The data pre-processing strictly follows the prior works (Wang et al. 2020; Zhou, Qi, and Shi 2022), where the fundus images were first center-cropped into a size of 800×800 pixels. Both prostate images and the cropped fundus images were resized into 256×256 pixels as input.

Experiments & Analysis
Datasets & Evaluation Protocols
DG Fundus benchmark (Wang et al. 2020) consists of four optic cup/disc segmentation datasets, namely, DrishtiGS (Sivaswamy et al. 2015), RIM-ONE-r3 (Fumero et al. 2011), REFUGE (train) (Orlando et al. 2020), and REFUGE (val) (Orlando et al. 2020), which we denote as Domain-1, Domain-2, Domain-3 and Domain-4, respectively. DG Prostate benchmark (Liu, Dou, and Heng 2020) consists of 116 T2-weighted MRI cases from six domains, which we denote from Domain-1 to Domain-6. Evaluation metrics include the Dice Similarity Coefficient (DSC) and Average Surface Distance (ASD), which strictly follow the prior medical image segmentation works.

Comparison with SOTA
The compared state-of-the-art domain generalized medical segmentation methods include BigAug (Zhang et al. 2020), SAML (Liu, Dou, and Heng 2020), FedDG (Liu et al. 2021a), DoFE (Wang et al. 2020), RAM-DSIR (Zhou, Qi, and Shi 2022) and DCAC (Hu et al. 2023).
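For reference, the DSC reported throughout is the standard Dice overlap between a predicted and a ground-truth mask; a minimal NumPy sketch follows (ASD additionally requires surface-distance computations, e.g., via distance transforms, and is omitted here):

import numpy as np

def dice_similarity(pred, gt, eps=1e-7):
    # pred, gt: binary masks of the same shape; returns DSC in percent
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * inter / (pred.sum() + gt.sum() + eps)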
Following prior works, two baseline settings are involved: Intra-domain refers to training and testing on the same domain, while DeepAll refers to aggregating samples from all source domains for training a deep model.
Results on the Prostate benchmark are reported in Table 1. The proposed method significantly outperforms existing state-of-the-art methods. Compared with the second-best, the ASD metric on the first, second, fourth, fifth and sixth domain is improved by 0.14%, 0.18%, 0.49%, 0.38% and 0.26%, respectively. The DSC metric also outperforms all existing methods on five out of six domains by up to 2.08%. Besides, we achieve the state-of-the-art average DSC of 89.92% and average ASD of 1.07%. Compared with the recent state-of-the-art RAM-DSIR (Zhou, Qi, and Shi 2022) and DCAC (Hu et al. 2023), the average ASD is improved by 0.30% and 0.34%, respectively.
Results on the Fundus benchmark are reported in Table 2 and Table 3. The proposed method significantly outperforms existing state-of-the-art methods. Compared with the second-best RAM-DSIR, it achieves an average DSC gain of 1.63% and ASD improvement of 0.80%. On all four target domains, the ASD of optic cup outperforms the second-best by up to 1.52%. On three of four target domains, the ASD of optic disk outperforms the state-of-the-art by up to 1.77%. On three out of four target domains, the DSC of optic cup outperforms the state-of-the-art by up to 3.10%.

Ablation Studies
On Each Component. The proposed DFQ framework consists of three key components, namely, the segmentation backbone, feature as query and relaxed deep whitening transformation (RDWT), which we denote as Bb., FQ and RT, respectively. For fair evaluation, when there is no FQ component, the features from the backbone are directly fused into the segmentation head by an MLP. Table 4 reports the results on the Fundus benchmark. The use of feature queries leads to an improvement of 0.94% in DSC and 0.88% in ASD against the baseline. Our RT further leads to an improvement of 1.16% in DSC and 0.68% in ASD.
On Each Scale. The decoupled key and value are from the first Transformer block, which we denote as F1. The decoupled queries are from the second, third and fourth Transformer blocks, which we denote as F2, F3 and F4, respectively. We investigate the impact of the decoupled queries and the decoupled key and value. Results are reported in Table 5. Both using the style-decoupled key and value (F1) and using the style-decoupled queries (F2, F3, F4) positively contribute to the segmentation results, but the use of style-decoupled queries contributes more to the overall performance. The style-decoupling queries on F2, F3, F4 lead to a DSC improvement of 1.33%, 1.14%, 0.85%, and an ASD improvement of 0.19%, 0.10% and 0.12%, respectively.

Understanding DFQ
To evaluate if the channel redundancy and misalignment are well handled by the proposed DFQ, we compare it with the baseline (with only a Transformer encoder, feature query and a decoder) and DFQ with the conventional deep whitening transformation. We denote the three settings as RDWT, Baseline and DWT, respectively.
On Reducing Channel Redundancy.
The feature queries from the first to the fourth block are computed with the covariance matrix, and are visualized in Fig. 3. The more yellow/purple, the higher/lower the response. Experiments are conducted on the Prostate benchmark. From left to right, we show the covariance matrix from the baseline, DWT, and RDWT, respectively. Ideally, a fully channel-decoupled covariance matrix eliminates the responses from all the off-diagonal regions. The proposed DFQ scheme shows the best performance in eliminating the off-diagonal elements.

Figure 3: Visualization of the covariance matrix of feature queries, extracted by: baseline (left), conventional deep whitening transformation (DWT; middle), and the proposed relaxed deep whitening transformation (RDWT; right). The more yellow/purple, the higher/lower response.

On Cross-domain Feature Alignment. From left to right, Fig. 4 shows the feature space of the baseline, DWT and RDWT by t-SNE visualization. The proposed DFQ allows the samples from different domains to be more uniformly mixed, and thus helps minimize the domain gap.

Figure 4: T-SNE visualization of the feature space from baseline (left), conventional deep whitening transformation (DWT; middle), and the proposed relaxed deep whitening transformation (RDWT; right). Zoom in for a better view.

Bb. | FQ | RT | Domain-1 | Domain-2 | Domain-3 | Domain-4 (each cell: DSC↑ / ASD↓)
✓ | | | 84.92 / 17.29 | 78.49 / 15.94 | 86.28 / 10.30 | 85.54 / 8.35
✓ | ✓ | | 86.16 / 16.48 | 80.39 / 13.54 | 87.13 / 9.68 | 86.69 / 7.45
✓ | ✓ | ✓ | 87.30 / 15.72 | 81.92 / 13.05 | 88.95 / 7.70 | 87.47 / 6.55
Table 4: Ablation studies on each component. Experiments on the optic cup segmentation of the Fundus benchmark.

F1 | F2 | F3 | F4 | Domain-1 | Domain-2 | Domain-3 | Domain-4 | Domain-5 | Domain-6 (each cell: DSC↑ / ASD↓)
✓ | | | | 85.87 / 1.25 | 88.15 / 0.86 | 84.55 / 2.99 | 87.92 / 0.87 | 85.83 / 2.05 | 87.29 / 0.86
✓ | ✓ | | | 87.41 / 0.99 | 89.40 / 0.81 | 85.97 / 2.56 | 88.55 / 0.92 | 87.56 / 1.72 | 88.74 / 0.75
✓ | ✓ | ✓ | | 87.82 / 0.92 | 90.97 / 0.68 | 87.89 / 2.48 | 89.86 / 0.75 | 88.35 / 1.56 | 89.58 / 0.73
✓ | ✓ | ✓ | ✓ | 88.28 / 0.84 | 91.66 / 0.63 | 89.00 / 2.24 | 90.16 / 0.67 | 89.57 / 1.43 | 90.83 / 0.59
Table 5: Ablation studies on the style-decoupled queries from different blocks. Experiments on the Prostate benchmark.

Figure 5: Exemplar domain generalized segmentation results of the proposed method and the state-of-the-art methods. The first and second rows are results from the Fundus benchmark. The third and fourth rows are results from the Prostate benchmark. Ideally, the green and blue segmentation predictions should coincide with the red ground truth. Zoom in for a better view.

Visualized Segmentation Results
Some exemplar segmentation results are visualized in Fig. 5. Compared with existing methods, the proposed method shows a more precise and reasonable prediction.

Conclusion
In this paper, we proposed a decoupled feature query (DFQ) learning scheme for domain generalized medical image segmentation, which aims to address the feature misalignment among cross-domain medical images. To enhance the per-channel representation ability and reduce the channel redundancy, we proposed a relaxed deep whitening transformation (RDWT).
To learn similar channel-wise feature patterns from different domains, we innovatively used decoupled deep features as queries to guide the entire framework. Extensive experiments show the state-of-the-art performance of the proposed method.

References
Azad, R.; Bozorgpour, A.; Asadi-Aghbolaghi, M.; Merhof, D.; and Escalera, S. 2021. Deep frequency re-calibration U-Net for medical image segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3274–3283.
Bi, Q.; You, S.; and Gevers, T. 2023. Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation. arXiv preprint arXiv:2307.00371.
Bi, Q.; Zhou, B.; Qin, K.; Ye, Q.; and Xia, G.-S. 2022. All grains, one scheme (AGOS): Learning multigrain instance representation for aerial scene classification. IEEE Transactions on Geoscience and Remote Sensing, 60: 1–17.
Bian, C.; Yuan, C.; Ma, K.; Yu, S.; Wei, D.; and Zheng, Y. 2021. Domain adaptation meets zero-shot learning: an annotation-efficient approach to multi-modality medical image segmentation. IEEE Transactions on Medical Imaging, 41(5): 1043–1056.
Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; and Wang, M. 2022. Swin-Unet: Unet-like pure Transformer for medical image segmentation. In European Conference on Computer Vision, 205–218.
Chen, C.; Dou, Q.; Chen, H.; Qin, J.; and Heng, P.-A. 2019. Synergistic image and feature adaptation: Towards cross-modality domain adaptation for medical image segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 865–872.
Chen, L.-C.; Papandreou, G.; Schroff, F.; and Adam, H. 2017. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.
Cho, W.; Choi, S.; Park, D. K.; Shin, I.; and Choo, J. 2019. Image-to-image translation via group-wise deep whitening-and-coloring transformation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10639–10647.
Choi, S.; Jung, S.; Yun, H.; Kim, J. T.; Kim, S.; and Choo, J. 2021. RobustNet: Improving domain generalization in urban-scene segmentation via instance selective whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11580–11590.
Cui, H.; Wei, D.; Ma, K.; Gu, S.; and Zheng, Y. 2021. A unified framework for generalized low-shot medical image segmentation with scarce data. IEEE Transactions on Medical Imaging, 40(10): 2656–2671.
Daza, L.; Pérez, J. C.; and Arbeláez, P. 2021. Towards robust general medical image segmentation. In Medical Image Computing and Computer Assisted Intervention, 3–13.
Feng, W.; Wang, L.; Ju, L.; Zhao, X.; Wang, X.; Shi, X.; and Ge, Z. 2022. Unsupervised domain adaptive fundus image segmentation with category-level regularization. In International Conference on Medical Image Computing and Computer Assisted Intervention, 497–506.
Fumero, F.; Alayón, S.; Sanchez, J. L.; Sigut, J.; and Gonzalez-Hernandez, M. 2011. RIM-ONE: An open retinal image database for optic nerve evaluation. In International Symposium on Computer-based Medical Systems, 1–6.
Gao, Y.; Zhou, M.; and Metaxas, D. N. 2021. UTNet: a hybrid Transformer architecture for medical image segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention, 61–71.
Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; and Liu, J. 2019. CE-Net: Context encoder network for 2D medical image segmentation. IEEE Transactions on Medical Imaging, 38(10): 2281–2292.
Hu, S.; Liao, Z.; Zhang, J.; and Xia, Y. 2023. Domain and content adaptive convolution based multi-source domain generalization for medical image segmentation. IEEE Transactions on Medical Imaging, 42(1): 233–244.
Huang, L.; Zhou, Y.; Zhu, F.; Liu, L.; and Shao, L. 2019. Iterative normalization: beyond standardization towards efficient whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4874–4883.
Ji, W.; Li, J.; Bi, Q.; Liu, J.; Cheng, L.; et al. 2022. Promoting Saliency From Depth: Deep Unsupervised RGB-D Saliency Detection. In International Conference on Learning Representations.
Ji, W.; Yu, S.; Wu, J.; Ma, K.; Bian, C.; Bi, Q.; Li, J.; Liu, H.; Cheng, L.; and Zheng, Y. 2021. Learning calibrated medical image segmentation via multi-rater agreement modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12341–12351.
Lee, S.; Seong, H.; Lee, S.; and Kim, E. 2022. WildNet: Learning domain generalized semantic segmentation from the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9936–9946.
Li, J.; Ji, W.; Bi, Q.; Yan, C.; Zhang, M.; Piao, Y.; Lu, H.; et al. 2021a. Joint semantic mining for weakly supervised RGB-D salient object detection. Advances in Neural Information Processing Systems, 34: 11945–11959.
Li, L.; Gao, K.; Cao, J.; Huang, Z.; Weng, Y.; Mi, X.; Yu, Z.; Li, X.; and Xia, B. 2021b. Progressive domain expansion network for single domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 224–233.
Li, Y.; Fang, C.; Yang, J.; Wang, Z.; Lu, X.; and Yang, M.-H. 2017. Universal style transfer via feature transforms. Advances in Neural Information Processing Systems, 30.
Li, Y.; Zhang, D.; Keuper, M.; and Khoreva, A. 2023. Intra-source style augmentation for improved domain generalization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 509–519.
Liu, Q.; Chen, C.; Qin, J.; Dou, Q.; and Heng, P.-A. 2021a. FedDG: Federated domain generalization on medical image segmentation via episodic learning in continuous frequency space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1013–1023.
Liu, Q.; Dou, Q.; and Heng, P.-A. 2020. Shape-aware meta-learning for generalizing prostate MRI segmentation to unseen domains. In International Conference on Medical Image Computing and Computer Assisted Intervention, 475–485.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021b. Swin Transformer: Hierarchical vision Transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022.
Mahajan, D.; Tople, S.; and Sharma, A. 2021. Domain generalization using causal matching. In International Conference on Machine Learning, 7313–7324.
Orlando, J. I.; Fu, H.; Breda, J. B.; Van Keer, K.; Bathula, D. R.; Diaz-Pinto, A.; Fang, R.; Heng, P.-A.; Kim, J.; Lee, J.; et al. 2020. REFUGE challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Medical Image Analysis, 59: 101570.
Ouyang, C.; Biffi, C.; Chen, C.; Kart, T.; Qiu, H.; and Rueckert, D. 2020. Self-supervision with superpixels: Training few-shot medical image segmentation without annotation. In European Conference on Computer Vision, 762–780.
Pan, J.; Bi, Q.; Yang, Y.; Zhu, P.; and Bian, C. 2022. Label-efficient hybrid-supervised learning for medical image segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2026–2034.
Pan, X.; Luo, P.; Shi, J.; and Tang, X. 2018. Two at once: Enhancing learning and generalization capacities via IBN-Net. In European Conference on Computer Vision, 464–479.
Pan, X.; Zhan, X.; Shi, J.; Tang, X.; and Luo, P. 2019. Switchable whitening for deep representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1863–1871.
Peng, D.; Lei, Y.; Hayat, M.; Guo, Y.; and Li, W. 2022. Semantic-aware domain generalized segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2594–2605.
Reiß, S.; Seibold, C.; Freytag, A.; Rodner, E.; and Stiefelhagen, R. 2021. Every annotation counts: Multi-label deep supervision for medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9532–9542.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention, 234–241.
Shim, J.-h.; Yu, H.; Kong, K.; and Kang, S.-J. 2023. FeedFormer: Revisiting Transformer decoder for efficient semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2263–2271.
Sivaswamy, J.; Krishnadas, S.; Chakravarty, A.; Joshi, G.; Tabish, A. S.; et al. 2015. A comprehensive retinal image dataset for the assessment of glaucoma from the optic nerve head analysis. JSM Biomedical Imaging Data Papers, 2(1): 1004.
Wang, S.; Yu, L.; Li, K.; Yang, X.; Fu, C.-W.; and Heng, P.-A. 2020. DoFE: Domain-oriented feature embedding for generalizable fundus image segmentation on unseen datasets. IEEE Transactions on Medical Imaging, 39(12): 4237–4248.
Wu, Y.; Ge, Z.; Zhang, D.; Xu, M.; Zhang, L.; Xia, Y.; and Cai, J. 2022. Mutual consistency learning for semi-supervised medical image segmentation. Medical Image Analysis, 81: 102530.
Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with Transformers. Advances in Neural Information Processing Systems, 34: 12077–12090.
Xu, Q.; Yao, L.; Jiang, Z.; Jiang, G.; Chu, W.; Han, W.; Zhang, W.; Wang, C.; and Tai, Y. 2022. DIRL: Domain-invariant representation learning for generalizable semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2884–2892.
Xu, Q.; Zhang, R.; Zhang, Y.; Wang, Y.; and Tian, Q. 2021. A Fourier-based framework for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14383–14392.
Yao, H.; Hu, X.; and Li, X. 2022. Enhancing pseudo label quality for semi-supervised domain-generalized medical image segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 3099–3107.
You, C.; Zhou, Y.; Zhao, R.; Staib, L.; and Duncan, J. S. 2022. SimCVD: Simple contrastive voxel-wise representation distillation for semi-supervised medical image segmentation. IEEE Transactions on Medical Imaging, 41(9): 2228–2237.
Zhang, L.; Wang, X.; Yang, D.; Sanford, T.; Harmon, S.; Turkbey, B.; Wood, B. J.; Roth, H.; Myronenko, A.; Xu, D.; et al. 2020. Generalizing deep learning for medical image segmentation to unseen domains via deep stacked transformation. IEEE Transactions on Medical Imaging, 39(7): 2531–2540.
Zhao, Y.; Zhong, Z.; Zhao, N.; Sebe, N.; and Lee, G. H. 2022. Style-hallucinated dual consistency learning for domain generalized semantic segmentation. In European Conference on Computer Vision, 535–552.
Zhong, Z.; Zhao, Y.; Lee, G. H.; and Sebe, N. 2022. Adversarial style augmentation for domain generalized urban-scene segmentation. In Advances in Neural Information Processing Systems.
Zhou, Z.; Qi, L.; and Shi, Y. 2022. Generalizable medical image segmentation via random amplitude mixup and domain-specific image restoration. In European Conference on Computer Vision, 420–436.
Zhou, Z.; Rahman Siddiquee, M. M.; Tajbakhsh, N.; and Liang, J. 2018. UNet++: A nested U-Net architecture for medical image segmentation. In International Workshop on Deep Learning in Medical Image Analysis, 3–11.
2024
91
18,751
What Are the Rules? Discovering Constraints from Data
Boris Wiegand1,2, Dietrich Klakow2, Jilles Vreeken3
1 SHS – Stahl-Holding-Saar, Dillingen, Germany
2 Saarland University, Saarbrücken, Germany
3 CISPA Helmholtz Center for Information Security, Germany
[email protected], [email protected], [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Constraint programming and AI planning are powerful tools for solving assignment, optimization, and scheduling problems. They require, however, the rarely available combination of domain knowledge and mathematical modeling expertise. Learning constraints from exemplary solutions can close this gap and alleviate the effort of modeling. Existing approaches either require extensive user interaction, need exemplary invalid solutions that must be generated by experts at great expense, or show high noise-sensitivity. We aim to find constraints from potentially noisy solutions, without the need of user interaction. To this end, we formalize the problem in terms of the Minimum Description Length (MDL) principle, by which we select the model with the best lossless compression of the data. Solving the problem involves model counting, which is #P-hard to approximate. We therefore propose the greedy URPILS algorithm to find high-quality constraints in practice. Extensive experiments on constraint programming and AI planning benchmark data show URPILS not only finds more accurate and succinct constraints, but also is more robust to noise, and has lower sample complexity than the state of the art.

Introduction
Constraint programming, the holy grail of programming (Barták 1999), separates the concerns of modeling a problem and finding a solution. As modeling the problem requires the rarely available combination of both domain knowledge and mathematical modeling expertise, learning constraints from data enables broader application of constraint programming (O'Sullivan 2010). Handcrafted solutions are often recorded for real-world assignment problems like scheduling and staff rostering, and thus provide a promising knowledge base to mine constraints.
Existing approaches do not satisfactorily solve this task. Active learning (Bessiere et al. 2013; Tsouros and Stergiou 2020; Belaid et al. 2022) needs thousands of queries even for simple problems, which is intractable if a human expert must label these queries. Passive learning approaches (Pawlak and Krawiec 2017; Kumar et al. 2020; Prestwich et al. 2021) need invalid examples, i.e., non-solutions, in their training set. Those are usually not collected, and experts must create them at great expense.
State-of-the-art methods to learn constraints purely from valid solutions either suffer from a limited constraint language, resulting in a long list of hard-to-read constraints, and need a lot of data (Prestwich 2021), or cannot learn from real-world data because they are not robust to noise (Kumar et al. 2019; Kumar, Kolb, and Guns 2022). Furthermore, although learning conditions for actions in AI planning is closely related to learning constraints for constraint programming, none of the existing approaches is directly applicable to AI planning problems. Most AI-planning-specific work (Arora et al. 2018; Aineto, Celorrio, and Onaindia 2019; Segura-Muros, Pérez, and Fernández-Olivares 2021) considers constraint learning as only one of many problems to solve.
Not focusing on constraint learning prevents these methods from being effective on this task.
To overcome all these limitations, we formalize the problem of learning constraints from exemplary solutions in terms of the Minimum Description Length (MDL) principle, by which we select the model with the best lossless compression of the data. Since solving the problem exactly involves #P-hard model counting, we propose the greedy URPILS algorithm for Unveiling Rules from PosItive LabelS. Through extensive experiments on both constraint programming and AI planning benchmark data, we empirically show that URPILS discovers more accurate and succinct constraints with fewer constraint terms, is more robust to noise, and has lower sample complexity than the state of the art.
In summary, our main contributions are as follows. We (a) formalize the problem of learning constraints from exemplary solutions in terms of the MDL principle, (b) propose an efficient heuristic to discover constraints for both constraint programming and AI planning, (c) provide an extensive empirical evaluation, and (d) make code, data and additional details publicly available in the supplementary materials.
In the next section, we introduce necessary notation and basic concepts we use in the paper. Then, we formalize the problem in terms of MDL. Next, we propose our greedy URPILS algorithm and describe how to adapt URPILS to AI planning problems. After giving an overview of related work, we provide an extensive empirical evaluation on benchmark datasets. Finally, we discuss limitations, outline potential future work and draw a conclusion.

Preliminaries
Before we formalize the problem, we introduce notation and basic concepts we use in the paper.

Boolean Constraint Programming
Assume we are given a list of object sets $O_1, \ldots, O_k$ and their Cartesian product $X = \prod_{i=1}^{k} O_i$. As an example, consider the 8-Queens problem, where we want to place eight queens on an 8×8 chess board, such that no two queens attack each other. We define an object set $O_1 = \{Q_1, \ldots, Q_8\}$ for queens, and an object set $O_2 = \{S_1, \ldots, S_{64}\}$ for squares on the board.
An assignment is a boolean function $f_a : X \to \{0, 1\}$, e.g., $f_a(Q_1, S_{42}) = 1$ means queen $Q_1$ is on square $S_{42}$. We call $f_a$ a valid assignment if it satisfies a set of constraints, i.e., a model $M = \{C_1, \ldots, C_m\}$ with $C_i : X \to \{0, 1\}$, and $f_a$ is valid iff $\forall x \in X\ \forall C_i \in M : C_i(x) = 1$. For a given model $M$, we denote the set of valid assignments by $F_M$.
We define constraints by a boolean algebra over the assignment $f_a$, a set of boolean relations between objects $F_B = \{f_1, \ldots, f_{|F_B|}\}$ with $f_i : \prod_{j \in \{1,\ldots,k\}^+} O_j \to \{0, 1\}$, and arithmetic expressions over a set of numeric relations $F_R = \{f_1, \ldots, f_{|F_R|}\}$ with $f_i : \prod_{j \in \{1,\ldots,k\}^+} O_j \to \mathbb{R}$. In the 8-Queens example, we assign rows and columns to squares. Formally, we define $F_R = \{f_x, f_y\}$ with $f_x : O_2 \to \{1, \ldots, 8\}$ and $f_y : O_2 \to \{1, \ldots, 8\}$. The constraint that no more than one queen may be placed in a row can then be written as
$\forall (q_1, q_2, s_1, s_2) \in O_1 \times O_1 \times O_2 \times O_2 : (s_1 \neq s_2 \wedge f_x(s_1) = f_x(s_2)) \rightarrow (f_a(q_1, s_1) \rightarrow \neg f_a(q_2, s_2))$.
Our goal is to find constraints like these from a dataset of exemplary valid assignments $D = \{f_a^1, \ldots, f_a^n\}$.

Minimum Description Length Principle
We use the Minimum Description Length (MDL) principle (Rissanen 1978; Grünwald 2007) for model selection. MDL identifies the best model as the one with the shortest lossless description of the given data.
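Returning to the 8-Queens example, such a constraint is mechanical to check against a concrete assignment. A small Python sketch, under the assumption (ours, for illustration) that squares 1..64 are numbered row by row so that f_x recovers the row index:

from itertools import product

def fx(s):
    # row of square s, assuming squares 1..64 are numbered row by row
    return (s - 1) // 8 + 1

def row_constraint_holds(fa):
    # forall (q1, q2, s1, s2): (s1 != s2 and fx(s1) == fx(s2))
    #   -> (fa(q1, s1) -> not fa(q2, s2))
    queens, squares = range(1, 9), range(1, 65)
    for q1, q2, s1, s2 in product(queens, queens, squares, squares):
        if s1 != s2 and fx(s1) == fx(s2) and fa(q1, s1) and fa(q2, s2):
            return False
    return True

# usage: one queen per row along the main diagonal satisfies the constraint
placed = {(q, (q - 1) * 8 + q) for q in range(1, 9)}
print(row_constraint_holds(lambda q, s: (q, s) in placed))  # True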
In MDL, we only compute code lengths, but are not concerned with actual code words. Formally, given a set of models $\mathcal{M}$, the best model is defined by $\arg\min_{M \in \mathcal{M}} L(M) + L(D \mid M)$, in which $L(M)$ is the length in bits of the description of $M$, and $L(D \mid M)$ is the length of the data encoded with the model. This form of MDL is known as two-part or crude MDL. Although one-part or refined MDL provides stronger theoretical guarantees, it is only computable in specific cases (Grünwald 2007). Therefore, we use two-part MDL. Next, we formalize our problem in terms of MDL.

MDL for Constraint Learning
From a set of exemplary assignments, we aim to discover a succinct set of constraints fitting and explaining the observed data and generalizing well to unseen data. To account for potential noise in real-world data, we need a noise-robust discovery approach. Thus, we formalize the problem of constraint discovery from exemplary solutions in terms of the MDL principle. To this end, we define the length of the data encoding $L(D \mid M)$ and the length of the model encoding $L(M)$, and finally give a formal problem definition.

Data Encoding for Constraint Programming
To encode a dataset $D$, we encode all its assignments, i.e.,
$L(D \mid M) = \sum_{f_a \in D} L(f_a \mid M)$.
An empty model without constraints has $|F_M| = 2^{|X|}$ valid assignments, and we need $|X|$ bits to choose one. The more constraints the model contains, the smaller the set of valid assignments, and the cheaper it is to identify the actual one. As real-world data is often noisy, there may not exist a valid assignment matching the exemplary data exactly. To ensure a lossless encoding, we have to encode the errors of the best fitting assignment. We denote the number of errors by
$\mathrm{error}(M \mid f_a) = \min_{f'_a \in F_M} \sum_{x \in X} \mathbb{1}_{f'_a(x) \neq f_a(x)}(x)$.
To encode the errors, we first specify their number by the MDL-optimal encoding for integers $z \geq 1$ (Rissanen 1983), defined as $L_\mathbb{N}(z) = \log c_0 + \log z + \log \log z + \ldots$, where we sum only the positive terms, and $c_0$ is set to 2.865064 to satisfy the Kraft inequality for a lossless encoding. Then, we encode the incorrect assignment values by a data-to-model code (Li and Vitányi 1993), i.e., an index to choose $\mathrm{error}(M \mid f_a)$ out of $|X|$ values. In summary, we have
$L(f_a \mid M) = \log |F_M| + L_\mathbb{N}(1 + \mathrm{error}(M \mid f_a)) + \log \binom{|X|}{\mathrm{error}(M \mid f_a)}$.
This gives us a lossless encoding of the data.

Model Encoding
Next, we compute the length of the model encoding by
$L(M) = L_\mathbb{N}(|M| + 1) + \sum_{C \in M} L(C)$,
i.e., we encode the number of constraints, which can be zero, and encode each constraint. We first define a grammar of our constraint language for complex real-world problems by
C → ⟨CV⟩ "|" ⟨CF⟩ : ⟨CT⟩
CV → ε | ∀x ∈ X | ∀x, y ∈ X
CF → ε | ⟨v⟩ = ⟨v⟩ | ⟨v⟩ ≠ ⟨v⟩ | ⟨fB⟩(⟨v⟩) | ⟨NF⟩ | ¬⟨CF⟩ | ⟨CF⟩ ∧ ⟨CF⟩ | ⟨CF⟩ ∨ ⟨CF⟩
v → x⟨i⟩ | y⟨i⟩
i → 1 | ... | k
fB → one of FB
NF → ⟨NE⟩ < ⟨NE⟩ | ⟨NE⟩ ≤ ⟨NE⟩ | ⟨NE⟩ = ⟨NE⟩
NE → ⟨z ∈ R⟩ | ⟨fR⟩(⟨v⟩) | ⟨NE⟩ ⟨⊙⟩ ⟨NE⟩ | "|"⟨NE⟩"|" | ⌊⟨NE⟩⌋ | ⌈⟨NE⟩⌉
fR → one of FR
⊙ → + | − | · | /
CT → fa(x) | fa(y) | fa(X⟨j⟩) | ¬⟨CT⟩ | ⟨CT⟩ ∧ ⟨CT⟩ | ⟨CT⟩ ∨ ⟨CT⟩ | ⟨COUNT⟩
j → 1 | ... | |X|
COUNT → ⟨NE⟩ ≤ Σ_⟨v⟩ fa(x) ≤ ⟨NE⟩
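For concreteness, the code-length primitives above translate directly into a few lines of Python; a minimal sketch of L_N and of L(f_a | M), with logarithms base 2 as usual in MDL:

from math import comb, log2

C0 = 2.865064

def L_N(z):
    # universal code length for an integer z >= 1 (Rissanen 1983):
    # log c0 plus the positive terms of log z + log log z + ...
    assert z >= 1
    bits, t = log2(C0), float(z)
    while True:
        t = log2(t)
        if t <= 0:
            break
        bits += t
    return bits

def L_assignment(num_valid, num_errors, x_size):
    # L(fa | M): pick one of |F_M| valid assignments, transmit the error
    # count, then point at the flipped cells via a data-to-model code
    return (log2(num_valid)
            + L_N(1 + num_errors)
            + log2(comb(x_size, num_errors)))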
A numeric filter NF compares the values of two numeric expressions NE, which are any real number, any numeric relation, or a composite of arithmetic operations. The target of any constraint is to define the set of valid assignments. In CT , we restrict the valid values of an assignment fa by a boolean expression over fa. In its simplest form, CT requires fa to be true for a variable defined by CV and CF . We can also require fa to be true for one specific parameter combination Xj with j ∈{1, . . . , |X|}. We can compose more complex constraints using boolean operators. In many real-world problems, we can distribute some kind of budget. For instance, if we assign shifts to employees during rostering, employees require a minimal and maximal workload. We model such COUNT constraints by a lower and upper bound on a sum over the assignment values of fa. When computing the encoded length of a constraint, we want to avoid any undue bias and therefore assume that whenever we have multiple modeling choices, all options are equally likely. Formally, we use our defined constraint grammar to recursively compute L(C) by L(A) = log |A| + X ⟨α⟩∈A L(α) , where A is a nonterminal in the grammar, and we first encode which of the |A| branches we produce, before we encode all remaining nonterminals. In the special case of ⟨z ∈R⟩, we compute the encoded length by LR(z) (Marx and Vreeken 2019), where we represent z up to a userspecified precision p by the smallest integer shift s such that z · 10s ≥10p. We then encode shift, shifted digit and sign, i.e., LN(s) + LN(⌈z · 10s⌉) + 1. Altogether, this gives us a lossless encoding of the model. Formal Problem Definition Using our MDL score, we now formally state our problem. Minimal Constraint Model Problem Given a set D of assignments f 1 a, . . . , f n a , find the constraint model M minimizing the total encoded cost L(D, M) = L(D | M) + L(M). Solving this problem optimally is intractable in practice. Potentially, we have up to 2|X| valid assignments, i.e., we face an exponentially growing search space for constraints. Moreover, our MDL score does not exhibit properties such as monotonicity or submodularity that we can exploit to efficiently find an optimal solution. We give a counterexample for both properties in the supplementary materials. Additionally, even computing L(D | M) is hard by itself. Finding a valid assignment f ′ a for M that is nearest to a given assignment fa corresponds to finding a valid assignment having maximal Manhattan distance to fa with negated values, which in general is NP-hard (Crescenzi and Rossi 2002). Computing the number of valid assignments |FM| is equivalent to counting the solutions of a boolean formula, which is #P-complete, i.e., at least as hard as NP-complete (Valiant 1979). Researchers have proposed algorithms like GANAK (Sharma et al. 2019), SHARPSAT-TD (Korhonen and J¨arvisalo 2021) or APPROXMC (Soos and Meel 2019) to tackle the problem. Dependent on the complexity of the formula, these approaches take seconds, minutes or even hours (Fichte, Hecher, and Hamiti 2021), which is too slow for evaluating many constraint candidates during search. The URPILS Algorithm Since solving the minimal constraint model problem optimally is intractable, we resort to greedy solutions. Estimating the Number of Valid Assignments To compute L(fa | M), we must count the number of valid assignments |FM| for a given model M. We use an approximation, which is fast to compute and still enables useful comparison of constraint candidates. 
The URPILS Algorithm
Since solving the minimal constraint model problem optimally is intractable, we resort to greedy solutions.

Estimating the Number of Valid Assignments
To compute $L(f_a \mid M)$, we must count the number of valid assignments $|\mathcal{F}_M|$ for a given model M. We use an approximation, which is fast to compute and still enables a useful comparison of constraint candidates. We estimate the number of valid assignments based on a standard algorithm for exact counting (Zhou, Yin, and Zhou 2010). First, we transform our constraint model M into a boolean formula in conjunctive normal form (CNF), where each possible parameter combination of f_a corresponds to a boolean variable. Next, we compute the constraint graph G of the formula, an undirected graph with the variables as nodes in which two variables are connected if they occur together in a clause. We count the number of valid assignments separately for disconnected, i.e., independent, components and obtain the total count by multiplying the results of the components. If the graph is small enough and contains fewer than five variables, it is feasible to count the number of valid assignments exactly by enumeration. Otherwise, we use a polynomial-time approximation. If the clauses in the CNF contain at most two variables, the number of valid assignments corresponds to the number of independent sets in G (Dahllöf, Jonsson, and Wahlström 2005), where an independent set is any set of non-adjacent nodes. We compute a lower bound by (Sah et al. 2019)
$$|\mathcal{F}_M| \geq \prod_{v \in V} (\deg v + 2)^{1/(\deg v + 1)},$$
with V being the set of nodes of G, and deg v denoting the degree of node v. The number of variables per clause depends only on the target part C_T of a constraint. If C_T has the form f_a(·) or ¬f_a(·), we have one variable per clause, whereas implications like f_a(x) → f_a(y) result in two variables per clause. In these cases, our lower bound leads to valid results. As we will show later in the experiments, these unary and binary relationships between variables are sufficient to describe most of the constraints in many problems. Count constraints, however, in general lead to clauses with more than two variables. For instance, let $x_1, \dots, x_6$ be boolean variables and consider the constraint $\sum_{i=1}^{6} x_i = 3$. Then, the CNF of this constraint is $\prod_{i=1}^{4}\prod_{j=i+1}^{5}\prod_{k=j+1}^{6}(x_i + x_j + x_k)$, i.e., we have three variables per clause. Thus, we need to compute $|\mathcal{F}_M|$ differently for count constraints. For a single equality constraint $\sum_{i=1}^{n} x_i = a$, we have $\binom{n}{a}$ satisfying assignments. We generalize this to inequality constraints $a \leq \sum_{i=1}^{n} x_i \leq b$ by $\sum_{i=a}^{b}\binom{n}{i}$. Since we do not know what the intersection of the valid assignments of multiple count constraints looks like, because enumerating them is intractable, we can only make assumptions. We assume that all count constraints contribute equally to the final count, and thus divide the mean of the individual counts by the number of constraints, i.e.,
$$|\mathcal{F}_M| = \left\lceil \frac{\sum_{C \in M} \mathcal{F}_{\{C\}}}{|M|^2} \right\rceil.$$
By this, we can estimate the number of valid assignments.
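The two estimates translate into a few lines of Python; a sketch under the stated assumptions (function names ours):

import math

def independent_set_lower_bound(degrees):
    """Lower bound of Sah et al. (2019) on the number of independent sets:
    product over nodes v of (deg(v) + 2)^(1 / (deg(v) + 1))."""
    bound = 1.0
    for d in degrees:
        bound *= (d + 2) ** (1.0 / (d + 1))
    return bound

def count_constraint_estimate(bounds, n):
    """Estimated |F_M| for a model of count constraints a <= sum x_i <= b
    over n variables: mean of the individual counts, divided again by |M|."""
    counts = [sum(math.comb(n, i) for i in range(a, b + 1)) for a, b in bounds]
    return math.ceil(sum(counts) / len(bounds) ** 2)

As a sanity check, count_constraint_estimate([(3, 3)], 6) returns C(6, 3) = 20, and for a path on three nodes (degrees 1, 2, 1) the lower bound is about 4.76, below the true count of 5 independent sets.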
Estimating the Best Fitting Valid Assignment
To compute $L(f_a \mid M)$, we also need to compute $\mathrm{error}(M \mid f_a)$, i.e., the minimal number of values we need to change in a valid assignment of M to obtain f_a. Since we must repeat this computation many times during our search for constraints, we want it to be as fast as possible. As in counting the number of valid assignments, enumerating all assignments to find $\mathrm{error}(M \mid f_a)$ is intractable. In contrast to $\mathrm{error}(M \mid f_a)$, the number of unsatisfied clauses in the CNF formula of the model is cheap to compute. The more clauses are unsatisfied, the more variables we expect must be flipped to satisfy the formula, and hence the higher is $\mathrm{error}(M \mid f_a)$. We estimate the number of variables we must flip to satisfy the formula by using the coupon collector's problem (Pólya 1930; Feller 1968, p. 225): if we assume that for each of the m unsatisfied clauses we draw one of |V| variables with replacement to flip, the expected number of flipped variables is
$$\mathrm{error}(M \mid f_a) = |V| - |V| \cdot \left(1 - \frac{1}{|V|}\right)^{m}.$$
The value of $\mathrm{error}(M \mid f_a)$ is 0 if no clause is unsatisfied, increases with m, and does not exceed the number of variables |V|. By this, we can compute L(D, M).
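The estimate is a one-liner; a sketch with the boundary behavior spelled out (function name ours):

def expected_flips(m: int, n_vars: int) -> float:
    """Coupon-collector estimate of error(M | f_a): expected number of
    distinct variables hit when drawing one of n_vars variables with
    replacement for each of the m unsatisfied clauses."""
    return n_vars - n_vars * (1.0 - 1.0 / n_vars) ** m

# expected_flips(0, v) == 0, and the estimate approaches v as m grows.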
Discovering a Good Constraint Model
We now want to minimize L(D, M) for a given dataset D, i.e., we want to discover a good constraint model in feasible time. Since computing L(D | M) is harder for models with COUNT constraints, we search for these at the end. Many satisfiability and optimization problems contain a set of relatively simple constraints, even if they also contain a set of very complex constraints. Simple constraints typically involve no or only one feature relation in the filtering part C_F. Therefore, we propose our method URPILS, in which we split the search for constraints into three stages. We give the pseudocode of URPILS in Algorithm 1.

Algorithm 1: URPILS
  input:  dataset D
  output: set of constraints M
  1  M ← ∅
  2  M ← FILTER(M, D, SIMPLECANDS(D))
  3  M ← FILTER(M, D, COMPLEXCANDS(M, D))
  4  M ← FILTER(M, D, COUNTCANDS(D))
  5  return M

Algorithm 2: FILTER
  input:  current model M, dataset D, set of candidates Q
  output: extended model M
  1   sort Q by L(D, ·)
  2   foreach C′ ∈ Q do
  3       M′ ← M
  4       foreach C ∈ M do
  5           if C′_V = C_V ∧ C′_T = C_T then
  6               C′_F ← C′_F ∨ C_F
  7               M′ ← M \ {C}
  8               break
  9       M′ ← M′ ∪ {C′}
  10      if L(D, M′) < L(D, M) then
  11          M ← M′
  12  return M

(Line 3 initializes M′ to M; the published listing leaves M′ undefined when no merge occurs, which we read as this intended initialization.)

Starting with an empty model, we first search for the low-hanging fruit and generate simple constraint candidates. In constraint programming, we are often interested in modeling the pairwise relationship between variables. For example, in Sudoku we require that two cells in the same row do not have the same value. Hence, we generate a set of simple candidates with all constraints of the form ∀x, y ∈ X | C_F : f_a. In C_F, we compare the values of at most one boolean and one numerical relation, e.g., f(x_1) < f(y_1) with f ∈ F_R. To restrict the pairwise assignment values of x and y, we generate implications of the types f_a(x) → f_a(y) and f_a(x) → ¬f_a(y) for C_T. For further reference, we provide pseudocode for SIMPLECANDS in the supplementary.

We filter the generated candidates in the FILTER subroutine, for which we give the pseudocode in Algorithm 2. We test the most promising candidates first by prioritizing candidates by their individual gain. To minimize model complexity, we try to merge each candidate C′ with an existing constraint C ∈ M. We can merge constraints if they share the same variable and target parts. For example, we merge ∀x, y ∈ X | g(x_1) < g(y_1) : f_a(x) → ¬f_a(y) and ∀x, y ∈ X | h(x_2) = h(y_2) : f_a(x) → ¬f_a(y) into ∀x, y ∈ X | g(x_1) < g(y_1) ∨ h(x_2) = h(y_2) : f_a(x) → ¬f_a(y). If a candidate improves our score, we add it to the model.

A model with simple constraints gives us a good baseline from which we search for constraints with a more complex filtering part. Instead of an intractable search over all possible filtering expressions, we map the problem to a simpler binary classification problem. We test for each pair x, y ∈ X whether the single implication f_a(x) → f_a(y) improves the fit on the data, i.e., leads to a lower L(D | M); we later repeat the search for f_a(x) → ¬f_a(y). By this, we get a set of positive and a set of negative implications as targets of a binary classification. We generate features by a recursive enumeration of all possible C_F using our defined constraint grammar. To avoid infinite recursion and combinatorial explosion, we do not generate C_F with conjunctions or disjunctions, and we limit the number of numerical operators. Finally, we look for a set of features best explaining the division into positive and negative implications, which gives us a good candidate for C_F. For reference, we provide details and pseudocode for COMPLEXCANDS in the supplementary.

In the last stage of URPILS, we search for count constraints. To this end, we create candidates for different input partitions of f_a, similar to the existing COUNTOR algorithm (Kumar et al. 2019). Formally, we create constraints of the form $\forall x \in X \mid C_F : a \leq \sum f_a(x) \leq b$, where we generate candidates for a, b ∈ N from observations in D. We generate an empty C_F, and we generate all possible C_F = f(x_i) with f ∈ F_B and i ∈ {1, ..., k}. Again, we use FILTER to select which candidates we add to our final model. This gives us a set of constraints from exemplary assignments.

URPILS for AI Planning
Next, we show how to adapt URPILS to AI planning problems, in which actions change the state of an environment until a predefined goal state is reached. We reuse our notation and define a state by boolean and numerical relations between objects from different object sets. We write $f_i^j$ to refer to relation $f_i$ at state j. W.l.o.g., we consider a single action a. We denote the assignment at state j by $f_a^j$, and $f_a^j(x) = 1$ if a is executed with objects x at state j, and $f_a^j(x) = 0$ otherwise. As before, we aim to find constraints M for valid assignments and thus preconditions for executing a. As valid assignments satisfy $\sum_{x \in X} f_a(x) = 1$, the empty model has $|X|$ instead of $2^{|X|}$ valid assignments. To encode errors efficiently, we specify for each assignment in the data whether it is valid for M. If we knew the number of valid and invalid assignments beforehand, we could compute the lengths of optimal prefix codes. To avoid any arbitrary choices, we use prequential codes (Grünwald 2007), which are asymptotically optimal without requiring initial knowledge of the code distribution. If an assignment is valid, we encode it via an index over all valid assignments; otherwise, we use an index over all other assignments. Formally, we have
$$L_{AI}(D \mid M) = \sum_{i=1}^{|D|} \left( -\log \frac{\mathrm{usg}^i_{[f_a^i \in \mathcal{F}_M]} + \epsilon}{\mathrm{usg}^i_0 + \mathrm{usg}^i_1 + 2\epsilon} + \begin{cases} \log|\mathcal{F}_M| & \text{if } f_a^i \in \mathcal{F}_M\\ \log\bigl(|X| - |\mathcal{F}_M|\bigr) & \text{otherwise} \end{cases} \right),$$
where $\mathrm{usg}^i_x$ is how often code x has been used up to the i-th assignment, and ϵ, with standard choice 0.5, is for smoothing. This gives us an efficient encoding for AI planning data. We also incorporate into our candidate generation that $f_a(x) = 1$ for exactly one x. A single one in f_a means that we need neither constraints on pairwise relationships of f_a nor count constraints. Instead, we search for constraints telling us when we are not allowed to execute an action. This means we create candidates $\forall x \in X \mid C_F : \neg f_a(x)$, where C_F compares boolean and numerical relations. We give pseudocode in the supplementary materials.
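A sketch of the prequential encoding, following our reading of the $L_{AI}$ formula above; it assumes 1 ≤ n_valid < n_assignments whenever both index codes are used, and the names are ours:

import math

def l_ai(validity_stream, n_assignments, n_valid):
    """Prequential code length for AI-planning data: per assignment, encode
    its validity with a prequential code over the stream seen so far, then
    an index over the valid (or the remaining invalid) assignments."""
    eps, usage, total = 0.5, {True: 0, False: 0}, 0.0
    for is_valid in validity_stream:
        p = (usage[is_valid] + eps) / (usage[True] + usage[False] + 2 * eps)
        total -= math.log2(p)
        total += math.log2(n_valid if is_valid else n_assignments - n_valid)
        usage[is_valid] += 1
    return total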
Related Work
Learning constraints for constraint programming is a widely studied problem. Active learning approaches (Bessiere et al. 2013; Tsouros and Stergiou 2020; Belaid et al. 2022) derive constraints by asking queries in the form of partial or complete solutions and non-solutions. Even for simple problems, these approaches may require thousands of queries, which limits their applicability if a human must label the queries. Therefore, researchers proposed to learn constraints from a static set of both solutions and non-solutions (Pawlak and Krawiec 2017; Kumar et al. 2020; Prestwich et al. 2021). While handcrafted solutions are usually recorded in real-world applications like scheduling and rostering, non-solutions representing forbidden behavior often are not collected. Thus, data and label acquisition remain a bottleneck that can prevent the application of such methods in practice. Recent work finds constraints from solutions only. This often results in methods working in narrow contexts, such as integer linear programming (Meng and Chang 2021), scheduling sequences (Picard-Cantin et al. 2016), or tabular spreadsheets (Kolb et al. 2017). COUNTOR (Kumar et al. 2019) infers count constraints; it, however, cannot handle noise, can only create simple expressions, and does not consider redundancy between constraints. COUNTCP (Kumar, Kolb, and Guns 2022) extends COUNTOR by a richer modeling language and reduced redundancy, but still does not handle noise. MINEACQ (Prestwich 2021) selects constraints by permutation testing. In contrast to COUNTOR and COUNTCP, MINEACQ does not find quantified constraints, which can lead to a large result set. CABSC (Coulombe and Quimper 2022) also selects constraints by counting valid assignments, but needs user-provided knowledge of constraints and does not handle noise. In AI planning, many approaches try to infer domain models from exemplary execution plans (Arora et al. 2018). Strictly assuming no noise, FAMA (Aineto, Celorrio, and Onaindia 2019) formalizes the problem as a planning problem itself. PLANMINER (Segura-Muros, Pérez, and Fernández-Olivares 2021) translates the problem to a rule-based classification task, and PLANMINER-N (Segura-Muros, Fernández-Olivares, and Pérez 2021) improves PLANMINER's noise handling. AI planning domain acquisition methods tackle multiple tasks, treating the learning of action constraints as an unfocused subproblem. In contrast to all the methods above, URPILS discovers a succinct set of constraints with low sample complexity, is robust to noise, and can be applied to a broad domain of optimization, satisfiability, and planning problems.

Experiments
Now, we evaluate URPILS on constraint programming and AI planning datasets. Since both domains have specialized state-of-the-art methods that are not applicable to both problems, we split the experiments into two. We conducted all our experiments on a PC with Windows 10, an Intel i7-6700 CPU, and 32 GB of memory. To ensure reproducibility, we make code and data publicly available in the extra materials (https://eda.rg.cispa.io/prj/urpils).

Figure 1: [URPILS discovers high-quality constraints] Average F1 score on the test set for ten independent runs on training sets with 1000 randomly drawn examples, for constraints discovered by URPILS, MINEACQ, COUNTOR, and COUNTCP. Error bars show standard deviation.

Experiments on Constraint Programming Datasets
We start by comparing URPILS with the state of the art from related work.
While COUNTOR and COUNTCP have no hyperparameters, for MINEACQ we must generate candidate constraints and set the parameters τ and ρ that control the acceptance threshold of its permutation test for candidate selection. To ensure that MINEACQ can find all constraints necessary to model the datasets, without providing too much knowledge about the ground-truth constraints, we generate all pairwise implications f_a(x) → f_a(y) and f_a(x) → ¬f_a(y). By a manual hyperparameter search, we find that τ = 10 and ρ = 0.001 lead to the best results.

We experiment on datasets with different characteristics. To test whether the constraint learners find spurious results, we create a synthetic dataset Random, where we uniformly and randomly sample values for f_a. We also uniformly and randomly sample values for the boolean and numerical relations in the dataset, i.e., there is no dependency on f_a, and the ground truth is an empty model without any constraints. Besides, we evaluate on datasets with non-empty ground-truth constraints. 8-Queens contains examples for positioning eight queens on a chessboard such that no two queens attack each other. Since modelers may include knowledge about the problem in the modeled relations, we create two versions of a 9×9 Sudoku dataset: in 9-Sudoku-easy, we specify for each cell its row, column, and block number; in 9-Sudoku-hard, we only specify row and column. For 8-Teams-DRR, we generate data of eight teams in a double round-robin competition, i.e., f_a(x, y, z) = 1 if on match day x team y plays against team z, each team plays twice against each other over 14 match days, and we require symmetry between the first and second half of the matches. In GraphColor, we generate a random undirected graph with ten nodes and twenty edges; valid assignments are node colorings where two neighbors have different colors. For MultipleKnapsack, we assign twenty items of different weight and value to three knapsacks of limited size. Our last dataset, Rostering, contains an instance of a nurse rostering problem (http://www.schedulingbenchmarks.org/nrp/), where boolean relations differentiate shift types and numerical relations model the start times, end times, and durations of shifts. We provide details about all datasets and their ground-truth constraints in the supplementary.

Figure 2: [URPILS is noise-robust with low sample complexity] Mean test F1 score over ten independent runs on the GraphColor problem, for URPILS, MINEACQ, COUNTOR, and COUNTCP, dependent on the proportion of noisy examples in the training set (left). Mean F1 score on the 5-Queens test set with 10% noise for a varying number of training examples (right). Error bars show standard error.

Quality of Discovered Constraints
To see how well the discovered constraints match the ground truth, we generate valid assignments for all datasets and split them into a training and a test set. For the test set, we additionally generate examples violating the ground-truth constraints. First, we run all methods on the training data. Then, we classify test examples as positive if they satisfy all found constraints and as negative otherwise. We report the F1 score with 1000 training examples in Figure 1. We see that URPILS, in contrast to its competitors, achieves an almost perfect F1 score on all datasets.
On 9-Sudoku-hard, URPILS does not find the block constraint in all runs, but on average it still performs best.

Noise Robustness
To evaluate noise robustness, we inject noise into the training data by adding invalid assignments. We report the test F1 score on GraphColor dependent on the noise proportion in Figure 2 (left). We see that URPILS recovers the ground truth for up to 60% noise and is on par for higher noise levels. The F1 score of COUNTOR and COUNTCP drops for significantly less noise to 2/3, i.e., to a model that accepts all test examples, with recall 1 and precision 0.5. We also test noise robustness on the queens problem. MINEACQ shows a much better F1 score for the same training set size on lower-dimensional problems. Furthermore, the runtime of all methods is lower for smaller problems. To enable many runs, we evaluate on 5-Queens, reducing the problem to five queens on a 5×5 chessboard. We report the F1 score on the test set with 10% noise in the training set, dependent on the number of training examples, in Figure 2 (right). We see that COUNTCP and especially COUNTOR pick up noise and discover badly generalizing constraints. MINEACQ performs better, but needs 800 examples to achieve a 100% F1 score. URPILS is not only robust to noise; it also achieves a 100% F1 score with just ten training examples.

Figure 3: [AI Planning Results] Average F1 score with standard error on the test sets of AI planning datasets for ten independent runs for URPILS, FAMA, and PLANMINER.

Figure 4: [URPILS is noise-robust on AI planning data] Average test F1 score over ten independent runs for URPILS, FAMA, and PLANMINER under varying noise proportion in the training data (left). Discovered model size ||M|| under increasing noise (right). Error bars show standard error.

Model Size
Across all datasets, URPILS finds a compact set of constraints with a total of only 33 to 90 literals of our constraint grammar. COUNTOR and COUNTCP show similar results, but tend to need more constraint terms for an equal F1 score. Since MINEACQ does not look for quantified expressions, it produces sets with 1280 literals for 4x4-Sudoku and $10^6$ literals for 8-Teams-DRR. For reference, we show exemplary discovered constraints for MINEACQ, COUNTOR, and COUNTCP, as well as complete results for model complexity, in the supplementary materials.

Experiments on AI Planning Datasets
Finally, we evaluate URPILS on AI planning benchmark datasets (Aineto, Celorrio, and Onaindia 2019) and compare against the state-of-the-art methods FAMA and PLANMINER from related work. Unfortunately, the authors of PLANMINER-N have not published code for their method and did not respond to our emails. As before, we generate a test set for each dataset with valid and invalid executions of an action in the corresponding planning domain. We report the classification F1 score for each method in Figure 3. We see that URPILS beats the state of the art by a wide margin. In our last experiment, we evaluate noise robustness on the Hanoi dataset. We report the F1 score and the number of relations in the discovered constraints for varying noise proportions in Figure 4.
If the data contains noise, FAMA does not find any constraints. PLANMINER seems to pick up noise and finds constraints with a poor F1 score on the test set. In contrast, URPILS is very robust to sensible amounts of noise. If the noise level increases, URPILS finds fewer constraints, i.e., it does not find spurious constraints.

Discussion
In our experiments, we empirically show that URPILS not only finds more accurate constraints, but also finds more succinct constraints, is more robust to noise, and has lower sample complexity than the state of the art. Nonetheless, URPILS has its limitations, and we see interesting research directions for overcoming them. First, despite using a rich modeling language, we cannot model everything. As we see in Figure 1, URPILS does not achieve a 100% F1 score on MultipleKnapsack because, with our current constraint language, we cannot model that the sum of the item weights in a knapsack must not exceed its capacity. We would need a new type of constraint to model bounds on sums of numerical relation values. However, computing the number of valid assignments for such models is even harder than for count constraints and is thus a challenging problem. Ideally, we would extend our constraint language to the global constraint catalog (Beldiceanu, Carlsson, and Rampon 2012), which lists a large set of reusable constraints for constraint programming. Second, the size of a satisfiability problem massively impacts the runtime and sample complexity of URPILS. While URPILS finds all constraints in the majority of runs from 40 examples of 4-Sudoku-hard, it finds all constraints only one out of ten times from 1000 examples of 9-Sudoku-hard. However, the rules of 4×4 and 9×9 Sudoku are basically the same, and many problems have constraints that are independent of the problem size. We therefore think it is promising to study how to reduce the size of a given problem as a preprocessing step. Other ways to improve performance on high-dimensional problems may include expert knowledge to restrict the large search space of constraints, e.g., by symmetries in the assignments, or active learning.

Conclusion
To close the gap between domain experts and mathematical modeling experts in constraint programming and AI planning, we studied the problem of discovering constraints from exemplary solutions. We formalized the problem in terms of the Minimum Description Length (MDL) principle, by which we select the model with the best lossless compression of the data. Since solving the problem involves #P-hard model counting, we proposed the greedy URPILS algorithm to find high-quality constraints in practice. Through extensive experiments on both constraint programming and AI planning benchmark datasets, we empirically showed that URPILS not only discovers more accurate constraints, but also finds more succinct constraints, is more robust to noise, and has lower sample complexity than the state of the art. To apply URPILS to more complex problems, potential future work involves extending its modeling language and improving its efficiency on high-dimensional problems.

References
Aineto, D.; Celorrio, S. J.; and Onaindia, E. 2019. Learning action models with minimal observability. Journal of Artificial Intelligence (AIJ), 275: 104–137.
Arora, A.; Fiorino, H.; Pellier, D.; Métivier, M.; and Pesty, S. 2018. A review of learning planning action models. The Knowledge Engineering Review, 33: e20.
Barták, R. 1999.
Constraint programming: In pursuit of the holy grail. In Proceedings of the 8th Annual Conference of Doctoral Students (WDS), Prague, Czech Republic, 555–564.
Belaid, M.-B.; Belmecheri, N.; Gotlieb, A.; Lazaar, N.; and Spieker, H. 2022. GEQCA: Generic Qualitative Constraint Acquisition. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), Virtual Event, 3690–3697.
Beldiceanu, N.; Carlsson, M.; and Rampon, J.-X. 2012. Global constraint catalog, 2nd edition (revision a). Technical report, Swedish Institute of Computer Science.
Bessiere, C.; Coletta, R.; Hebrard, E.; Katsirelos, G.; Lazaar, N.; Narodytska, N.; Quimper, C.-G.; and Walsh, T. 2013. Constraint acquisition via partial queries. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, 475–481.
Coulombe, C.; and Quimper, C.-G. 2022. Constraint Acquisition Based on Solution Counting. In 28th International Conference on Principles and Practice of Constraint Programming (CP), Haifa, Israel.
Crescenzi, P.; and Rossi, G. 2002. On the Hamming distance of constraint satisfaction problems. Theoretical Computer Science, 288(1): 85–100.
Dahllöf, V.; Jonsson, P.; and Wahlström, M. 2005. Counting models for 2SAT and 3SAT formulae. Theoretical Computer Science, 332(1-3): 265–291.
Feller, W. 1968. Introduction to Probability Theory and Its Applications, volume 1. Wiley, 3rd edition.
Fichte, J. K.; Hecher, M.; and Hamiti, F. 2021. The Model Counting Competition 2020. ACM Journal of Experimental Algorithmics (JEA), 26: 1–26.
Grünwald, P. 2007. The Minimum Description Length Principle. MIT Press.
Kolb, S.; Paramonov, S.; Guns, T.; and De Raedt, L. 2017. Learning constraints in spreadsheets and tabular data. Machine Learning, 106: 1441–1468.
Korhonen, T.; and Järvisalo, M. 2021. Integrating Tree Decompositions into Decision Heuristics of Propositional Model Counters. In 27th International Conference on Principles and Practice of Constraint Programming (CP), Virtual Event, 8:1–8:11.
Kumar, M.; Kolb, S.; and Guns, T. 2022. Learning Constraint Programming Models from Data Using Generate-And-Aggregate. In 28th International Conference on Principles and Practice of Constraint Programming (CP), Haifa, Israel.
Kumar, M.; Kolb, S.; Teso, S.; and De Raedt, L. 2020. Learning MAX-SAT from Contextual Examples for Combinatorial Optimisation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI), New York, NY, 4493–4500.
Kumar, M.; Teso, S.; De Causmaecker, P.; and De Raedt, L. 2019. Automating personnel rostering by learning constraints using tensors. In Proceedings of the 31st International Conference on Tools with Artificial Intelligence (ICTAI), Portland, OR, 697–704.
Li, M.; and Vitányi, P. 1993. An Introduction to Kolmogorov Complexity and its Applications. Springer.
Marx, A.; and Vreeken, J. 2019. Telling cause from effect by local and global regression. Knowledge and Information Systems, 60(3): 1277–1305.
Meng, T.; and Chang, K.-W. 2021. An Integer Linear Programming Framework for Mining Constraints from Data. In Proceedings of the 38th International Conference on Machine Learning (ICML), Virtual Event, 7619–7631.
O'Sullivan, B. 2010. Automated modeling and solving in constraint programming. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI), Atlanta, GA, 1493–1497.
Pawlak, T. P.; and Krawiec, K. 2017. Automatic synthesis of constraints from examples using mixed integer linear programming.
European Journal of Operational Research, 261(3): 1141–1157.
Picard-Cantin, É.; Bouchard, M.; Quimper, C.-G.; and Sweeney, J. 2016. Learning parameters for the sequence constraint from solutions. In 22nd International Conference on Principles and Practice of Constraint Programming (CP), Toulouse, France, 405–420.
Pólya, G. 1930. Eine Wahrscheinlichkeitsaufgabe in der Kundenwerbung. Zeitschrift für Angewandte Mathematik und Mechanik, 10(1): 96–97.
Prestwich, S. 2021. Unsupervised Constraint Acquisition. In Proceedings of the 33rd International Conference on Tools with Artificial Intelligence (ICTAI), Virtual Event, 256–262.
Prestwich, S. D.; Freuder, E. C.; O'Sullivan, B.; and Browne, D. 2021. Classifier-based constraint acquisition. Annals of Mathematics and Artificial Intelligence, 89: 655–674.
Rissanen, J. 1978. Modeling by shortest data description. Automatica, 14(1): 465–471.
Rissanen, J. 1983. A Universal Prior for Integers and Estimation by Minimum Description Length. The Annals of Statistics, 11(2): 416–431.
Sah, A.; Sawhney, M.; Stoner, D.; and Zhao, Y. 2019. The number of independent sets in an irregular graph. Journal of Combinatorial Theory, Series B, 138: 172–195.
Segura-Muros, J. Á.; Fernández-Olivares, J.; and Pérez, R. 2021. Learning Numerical Action Models from Noisy Input Data. arXiv:2111.04997.
Segura-Muros, J. Á.; Pérez, R.; and Fernández-Olivares, J. 2021. Discovering relational and numerical expressions from plan traces for learning action models. Applied Intelligence, 51(11): 7973–7989.
Sharma, S.; Roy, S.; Soos, M.; and Meel, K. S. 2019. GANAK: A Scalable Probabilistic Exact Model Counter. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), Macao, China, 1169–1176.
Soos, M.; and Meel, K. S. 2019. BIRD: engineering an efficient CNF-XOR SAT solver and its applications to approximate model counting. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), Honolulu, HI, 1592–1599.
Tsouros, D. C.; and Stergiou, K. 2020. Efficient multiple constraint acquisition. Constraints, 25(3-4): 180–225.
Valiant, L. G. 1979. The Complexity of Enumeration and Reliability Problems. SIAM Journal on Computing, 8(3): 410–421.
Zhou, J.; Yin, M.; and Zhou, C. 2010. New worst-case upper bound for #2-SAT and #3-SAT with the number of clauses as the parameter. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI), Atlanta, GA, 217–222.
SAT-Based Tree Decomposition with Iterative Cascading Policy Selection

Hai Xia, Stefan Szeider
Algorithms and Complexity Group, TU Wien, Austria
{hxia,sz}@ac.tuwien.ac.at

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Solvers for propositional satisfiability (SAT) effectively tackle hard optimization problems. However, translating to SAT can cause a significant size increase, restricting its use to smaller instances. To mitigate this, frameworks using multiple local SAT calls for gradually improving a heuristic solution have been proposed. The performance of such algorithmic frameworks heavily relies on critical parameters, including the size of selected local instances and the time allocated per SAT call. This paper examines the automated configuration of the treewidth SAT-based local improvement method (TW-SLIM) framework, which uses multiple SAT calls for computing tree decompositions of small width, a fundamental problem in combinatorial optimization. We explore various TW-SLIM configuration methods, including offline learning and real-time adjustments, significantly outperforming default settings in multi-SAT scenarios with changing problems. Building upon insights gained from offline training and real-time configurations for TW-SLIM, we propose the iterative cascading policy, a novel hybrid technique that uniquely combines both. The iterative cascading policy employs a pool of 30 configurations obtained through clustering-based offline methods, deploying them in dynamic cascades across multiple rounds. In each round, the 30 configurations are tested according to the cascading ordering, and the best tree decomposition is retained for further improvement, with the option to adjust the following ordering of cascades. This iterative approach significantly enhances the performance of TW-SLIM beyond baseline results, even within varying global timeouts. This highlights the effectiveness of the proposed iterative cascading policy in enhancing the efficiency and efficacy of complex algorithmic frameworks like TW-SLIM.

Introduction
Over the last two decades, SAT-solver technology has made tremendous progress; SAT instances with up to over a million variables and clauses can be solved routinely (Fichte et al. 2023). However, for many combinatorial optimization problems, the encoding to SAT entails a significant blow-up in size (cubic or worse), significantly limiting the feasible instance size of the combinatorial problem. To make SAT still applicable to large combinatorial problem instances, researchers have developed new algorithmic frameworks where SAT solvers are called multiple times and each call deals only with a small part of the combinatorial instance. SAT-based Local Improvement (SLIM), a particular structure-driven form of Large Neighborhood Search (LNS) (Pisinger and Ropke 2010), is such an algorithmic framework, where multiple local SAT calls improve a global heuristic solution (Lodha, Ordyniak, and Szeider 2016; Fichte, Lodha, and Szeider 2017; Lodha, Ordyniak, and Szeider 2017; Peruvemba Ramaswamy and Szeider 2021a,b; Schidler and Szeider 2021; Ramaswamy and Szeider 2022; Schidler 2022; Kulikov, Pechenev, and Slezkin 2022; Reichl, Slivovsky, and Szeider 2023).
However, the performance of algorithmic frameworks like SLIM that rely on multiple SAT calls heavily depends on several critical parameters: how large the selected local part should be and how much time each individual SAT call should have at its disposal. Moreover, since the instance evolves and changes during the solving time, dynamic aspects must be considered, such as when to switch from one configuration to another and in what order. Although there is a large bulk of work on automated algorithm configuration and selection (see, e.g., Schede et al.'s (2022) survey), there has been no rigorous study of techniques for configuring a complex algorithmic framework like SLIM that involves multiple SAT calls and requires the consideration of dynamic aspects. In this paper, we provide such a study, showcasing the widely studied problem of finding a small-width tree decomposition of a graph (MINTW). We used the TW-SLIM approach for this problem by Fichte, Lodha, and Szeider (2017), who followed the SLIM paradigm to compute small-width tree decompositions for large graphs based on repeated SAT calls. We developed and compared a wide range of approaches to configure TW-SLIM. As baselines, we develop approaches based purely on offline learning on training data and purely on real-time configuration, respectively. These relatively standard approaches already significantly improve performance over the hand-tuned default configuration of TW-SLIM. Based on these encouraging preliminary results, we propose the iterative cascading policy (ICP), a hybrid approach that combines offline and real-time methods in a new way. The iterative cascading policy uses a pool of 30 configurations obtained by a constrained clustering-based offline approach and deploys them along a dynamic cascade. This goes beyond static cascading portfolios (Eiben et al. 2019; Roussel 2012; Streeter 2018) that run individual configurations along one static cascade. Iterative cascading is arranged in several rounds, where a cascade of configurations is tried in each round. The best-so-far tree decomposition is only replaced at the end of the round, when the best tree decomposition of the current round has been found. Each round can change the linear ordering of its cascade by considering the updated features of the instance: according to the performance of the configurations among the different clusters of instances, the algorithm switches to the most suitable cascading ordering and updates the tree decomposition for further improvements. With iterative cascading, we can boost the performance of TW-SLIM significantly beyond the baseline results. We conducted our experiments over a comprehensive set of over 3000 benchmark graphs from various real-world applications. We randomly split these graphs 80 : 20 into a training set, on which we performed the offline training, and a test set, on which we report the observed performance. The primary objective function counts the total sum of improvements (TΣ) of treewidth, a good proxy for the number of instances whose treewidth could be reduced (T#). The standard hand-tuned configuration used by Fichte, Lodha, and Szeider (2017) gives TΣ = 398 in the default 7800-second timeout they used for their experiments. With the clustering-based offline configuration, this value increases to 456 and 500, depending on whether the configuration is selected with AutoFolio (Lindauer et al. 2015) or with the constrained-clustering model integrated by us.
The latter, when applied with our devised dynamic adaptation (i.e., a new configuration is chosen according to the dynamic changes of the instance), gets up to TΣ = 619, already a significant gain over the original hand-tuned configuration. Finally, our new iterative cascading policy boosts the value to 728. We provide empirical data for shorter global timeouts, from 100 to 7800 seconds. For shorter timeouts, the ordering within a cascade becomes even more critical for good performance, and it makes sense to reduce the number of configurations within a cascade to cover the space of configurations faster. With these experiments, we can differentiate between variants of the iterative cascading policy and see that those that include offline training and work with smaller cascades have an advantage if less time is available for real-time improvement. Still, the iterative cascading policy has a clear lead over the other, more conventional approaches, even for the shorter timeouts.

Related Work
Many algorithm configuration tools have been proposed to tune hyperparameters. The overview table in Schede et al.'s comprehensive survey (2022) shows that only 2 of the 42 considered configuration tools adjust the configuration dynamically during the algorithm's run; the others are static. From the aspect of the training setting, there are also only 2 with real-time training. In general, most algorithms regard the configuration problem as a black-box optimization problem and use the offline paradigm (Ansótegui, Sellmann, and Tierney 2009; Hutter, Hoos, and Leyton-Brown 2011; Lindauer et al. 2021; López-Ibáñez et al. 2016), where the configurator receives all training instances as the tuning begins and then searches for a suitable configuration. Only a few dynamic methods have been proposed that adjust configurations in real time for better performance (Fitzgerald, Malitsky, and O'Sullivan 2015; Fitzgerald et al. 2014). These configurators receive a stream of changing problems and are required to solve dynamic problem instances with suitable configurations. To configure algorithms for instances with different features, some researchers have also proposed instance-specific algorithm selection methods (Kadioglu et al. 2010; Lindauer et al. 2015; Xu et al. 2008), where a (portfolio) classifier learns to predict the best algorithms (configurations) from features. Cascading portfolio scheduling arranges diverse policies or configurations in a linear order that optimizes a sequential run on training instances (Eiben et al. 2019; Roussel 2012; Streeter 2018). Our setting is special, given the nature of our instances, which consist of the input graph and a heuristically computed initial tree decomposition. The configuration needs to consider features of both parts, where one part (the graph) remains steady throughout the solving time and the other part (the tree decomposition) is subject to change. Most of the studies in the literature focus on static problems with significantly shorter timeouts, whereas we work with a global timeout (7800 seconds) that is typical for SLIM algorithms (Kulikov, Pechenev, and Slezkin 2022; Lodha, Ordyniak, and Szeider 2016, 2017; Peruvemba Ramaswamy and Szeider 2021a,b; Ramaswamy and Szeider 2022, 2020; Reichl, Slivovsky, and Szeider 2023; Schidler 2022; Schidler and Szeider 2021).

SLIM for Tree Decompositions
In this section, we introduce some basic relevant concepts on graphs and tree decompositions and outline the SLIM approach to treewidth computations.
All graphs considered are finite and simple. We define a graph G in terms of its set V(G) of vertices and its set E(G) of edges. We denote an edge between vertices u, v ∈ V(G) by uv or, equivalently, vu. A subgraph H of G induced by a set X ⊆ V(G) has V(H) = X and E(H) = { uv ∈ E(G) | u, v ∈ X }. A tree decomposition (TD) of a graph G is a pair T = (T, χ), where T is a tree and χ is a mapping that assigns each tree node t ∈ V(T) a subset χ(t) ⊆ V(G) such that (i) for all uv ∈ E(G) there is some t ∈ V(T) with u, v ∈ χ(t), and (ii) for each v ∈ V(G) the set { t ∈ V(T) | v ∈ χ(t) } induces a connected subtree of T. The sets χ(t), t ∈ V(T), are called bags. The width w(T) of T is $\max_{t \in V(T)} |\chi(t)| - 1$, and the treewidth of G is the smallest width over all its tree decompositions (Bodlaender 1993; Kloks 1996). We consider the optimization problem MINTW, which takes as input a graph G and asks for a tree decomposition of G of the smallest width. MINTW is NP-hard (Arnborg, Corneil, and Proskurowski 1987). Because of its intractability, exact algorithms apply only to small graphs (Bannach, Berndt, and Ehlers 2017; Samer and Veith 2009; Tamaki 2022), whereas for large graphs, heuristics are used (Abseher, Musliu, and Woltran 2017; Bodlaender and Koster 2010) that compute a possibly suboptimal upper bound for the treewidth.
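The definitions above translate directly into a small validity and width check. The following Python sketch (function names ours) verifies conditions (i) and (ii) for a tree decomposition given as a dict from tree nodes to bags, with both graphs given as edge lists, and computes the width.

from collections import defaultdict

def td_width(bags):
    """Width of a tree decomposition: largest bag size minus one."""
    return max(len(b) for b in bags.values()) - 1

def is_tree_decomposition(graph_edges, tree_edges, bags):
    # (i) every graph edge must be contained in some bag
    if any(not any({u, v} <= b for b in bags.values()) for u, v in graph_edges):
        return False
    # (ii) for every vertex, its occurrence nodes must induce a connected subtree
    adj = defaultdict(set)
    for s, t in tree_edges:
        adj[s].add(t)
        adj[t].add(s)
    for v in set().union(*bags.values()):
        occ = {t for t, b in bags.items() if v in b}
        start = next(iter(occ))
        seen, stack = {start}, [start]
        while stack:
            for n in adj[stack.pop()] & occ:
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        if seen != occ:
            return False
    return True

For example, a triangle graph with one bag containing all three vertices, is_tree_decomposition([(1, 2), (2, 3), (1, 3)], [], {0: {1, 2, 3}}), is valid and has width 2.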
Fichte, Lodha, and Szeider (2017) proposed an algorithm for MINTW using the SAT-based local improvement (SLIM) metaheuristic. We refer to this algorithm as TW-SLIM. Subsequently, we outline its workflow, thereby introducing the relevant parameters. Let G be the input graph to MINTW. First, an initial tree decomposition T = (T, χ) is computed with a heuristic method. Next, the following improvement step is repeatedly performed: a subtree S of T is selected such that the size of $X = \bigcup_{t \in V(S)} \chi(t)$ does not exceed a local budget (parameter: lb). Let S = (S, χ_S) be the local tree decomposition, with χ_S being the restriction of χ to S. Now S is a tree decomposition of the subgraph G_S of G induced by X, with w(S) ≤ w(T). To ensure replacement consistency (i.e., that we can substitute S in T with a tree decomposition of smaller width), we add certain edges to G_S: we obtain the augmented local graph G*_S from G_S by adding, for each st ∈ E(T) with s ∈ V(S) and t ∉ V(S), all the edges uv with u, v ∈ χ(s) ∩ χ(t). As shown by Fichte, Lodha, and Szeider (2017), S is still a tree decomposition of G*_S, and, more importantly, we can replace the local tree decomposition S in T with any new tree decomposition S*, resulting in a tree decomposition T* (we explain below how S* is obtained). For $w = \max_{t \in V(T) \setminus V(S)} |\chi(t)| - 1$, we have $w(T^*) \leq \max(w, w(S^*))$ (Fichte, Lodha, and Szeider 2017, Observation 3); thus, by reducing the width of local tree decompositions, we can eventually reduce the width of the global tree decomposition. Since G*_S is sufficiently small (its number of vertices is at most lb), we can compute its treewidth exactly using a SAT encoding. That is, we set k = w(S) and generate a propositional formula F(G*_S, k−1), which is satisfiable if and only if the treewidth of G*_S is at most k−1, and feed this to a SAT solver with a specified SAT timeout (parameter: st). The limit st is important, as we want to run the solver long enough to return a satisfying assignment, but stop the solver before it determines unsatisfiability, which takes an order of magnitude longer. This way, we save precious time that is better spent trying different local instances. If the SAT solver determines that F(G*_S, k−1) is satisfiable, we try a further reduction F(G*_S, k−2), and so forth, until we reach a local timeout (parameter: lt). From the last satisfiable SAT call, we can read off the new tree decomposition S* that we insert in T instead of S, giving us the tree decomposition T* of G. If the SAT solver does not determine that F(G*_S, k−1) is satisfiable (either by determining its unsatisfiability or by reaching the st limit), we can run the solver on F(G*_S, k), which is guaranteed to be satisfiable since the treewidth of G*_S is at most k. However, the satisfying assignment found will most likely give rise to a tree decomposition S* that is different from S. Hence, replacing S with S* in T will not reduce the treewidth of T but will shuffle it (whether a shuffle takes place is controlled by a flag parameter: sf), so that future attempts for improvement will have a better chance of escaping a local optimum. Figure 1 illustrates one improvement step of TW-SLIM.

Figure 1: One single SAT-based improvement step within TW-SLIM. From A to B: a subtree that induces a local instance whose size does not exceed the local budget is selected; from B to C: an improved tree decomposition for the local instance is computed with a SAT call; from C to D: the original subtree is replaced by the improved subtree according to the replacement consistency properties.

TW-SLIM terminates if either a global timeout (parameter: gt) is reached or there have been a certain number of non-improvement steps (parameter: ni). Table 1 shows the key parameters that we configure for TW-SLIM.

Table 1: Key parameters and their meaning
  Parameter name     Parameter meaning
  local budget (lb)  budget of vertices for SLIM
  local time (lt)    timeout for the local improvement
  SAT time (st)      call timeout for the SAT solver
  switch flag (sf)   replace the local TD even if its width is not reduced (shuffle)
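The width-reduction loop of one improvement step can be sketched as follows. Here sat_td is a hypothetical hook wrapping the SAT encoding and solver (Jdrasil/Glucose in the actual pipeline) that returns an improved local TD or None on timeout or unsatisfiability; this is a sketch under those assumptions, not TW-SLIM's actual code.

import time

def improve_local(local_graph, k, sat_td, st, lt):
    """One TW-SLIM local step (sketch): starting from local width k, keep
    asking whether width k-1 is achievable until a SAT call fails or the
    local timeout lt runs out; st is the per-call timeout."""
    best = None
    deadline = time.monotonic() + lt
    while k > 0 and time.monotonic() < deadline:
        td = sat_td(local_graph, k - 1, timeout=st)  # solve F(G*_S, k-1)
        if td is None:           # st reached or formula unsatisfiable
            break
        best, k = td, k - 1      # strictly smaller width found
    return best                  # None: no improvement; caller may shuffle (sf)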
Experimental Setup
In this section, we present the setup of the experiments from different aspects. We provide an external link to the source code and detailed results (Xia and Szeider 2023).

Instances
To obtain a comprehensive evaluation, we collected a large set of benchmark graphs from various collections (see Table 2). Since the proposed algorithms are aimed at TDs of large graphs that exact methods cannot solve, graphs with fewer than 100 vertices are not considered. Graphs with over $10^6$ vertices are also filtered out because even the heuristic methods cannot handle them due to memory overflows. This restriction results in a total of 3331 instances. After the whole data set is split randomly into a training set (80%) and a test set (20%), the training set and test set have 2664 and 667 instances, respectively.

Table 2: Data sources of all considered instances
  Instances  URLs (https://)
  functions  sdcc.sourceforge.net
  PACE16     pacechallenge.org/2016/treewidth
  PACE17     pacechallenge.org/2017/treewidth
  UAI        www.ics.uci.edu/~dechter/software.html
  Roadnet    www.diag.uniroma1.it/challenge9
  TWlib      webspace.science.uu.nl/~bodla101/treewidthlib

Setup
All experiments are carried out on a Linux (Ubuntu 18.04.6 LTS) Sun Grid Engine cluster with 3 nodes, where each node has two AMD EPYC 7402 CPUs (each with 24 cores at a frequency of 2.80 GHz). Due to compatibility limitations, we use different Python versions for the different components: components related to TW-SLIM and AutoFolio run on Python 2.7.5 and Python 3.5, respectively; all other components run on Python 3.9.12. For a fair comparison, we set the same global search timeout of 7800 seconds as recommended by the original paper of TW-SLIM (Fichte, Lodha, and Szeider 2017), the heuristic for computing the initial TD to HTD (version v0.9.5beta) (Abseher, Musliu, and Woltran 2017), the local solver to Jdrasil (Bannach, Berndt, and Ehlers 2017), and the SAT solver to Glucose (Audemard and Simon 2018).

Optimization Objectives
For an instance (G, T) consisting of the input graph G and the heuristically computed initial tree decomposition T, let T* be the tree decomposition at the end of a TW-SLIM run. If w(T*) < w(T), then the run was successful, as the width of the initial tree decomposition was improved. One measure for the performance of a configuration is to count the total number of improved instances (T#). A more fine-grained measurement takes the total sum of improvements (TΣ). Clearly TΣ ≥ T#, as an improved instance contributes exactly 1 to T# but at least 1 to TΣ. It turned out that taking TΣ as the objective also yields excellent results for T#, even better than with T# as the objective (this is plausible since TΣ gives more detailed feedback to the configurator than T#). Therefore, we take TΣ as the objective throughout the experiments, but we also report T#. We would like to note that we always first run the heuristic and measure the improvement SLIM gains over the initial heuristic solution. Thus, for an instance that we consider easy, the heuristic may already provide a good upper bound for the treewidth, which makes the instance more challenging for TW-SLIM to improve. For many applications of tree decompositions, like exact probabilistic reasoning, the worst-case time complexity is exponential in the treewidth, which means that even tiny reductions in the treewidth yield significant performance improvements.
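The two objectives are straightforward to compute from (initial, final) width pairs; a minimal sketch:

def objectives(width_pairs):
    """TΣ: total sum of width improvements; T#: number of improved instances."""
    t_sum = sum(w0 - w1 for w0, w1 in width_pairs if w1 < w0)
    t_num = sum(1 for w0, w1 in width_pairs if w1 < w0)
    return t_sum, t_num

# objectives([(12, 9), (7, 7), (30, 29)]) == (4, 2)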
Offline Configuration
In this section, we introduce a series of offline configurators as baselines for further improvements.

Optimizing for One Configuration over the Entire Data Set (SB-all)
Given the set of training instances, we use the state-of-the-art algorithm configuration tool SMAC (Hutter, Hoos, and Leyton-Brown 2011) to search for the most promising TW-SLIM configuration. Then, the performance of this configuration is examined on the test instances. The setting of SMAC throughout our experiments is listed in Table 3.

Table 3: The setting of SMAC
  Parameter name        Value
  configuration         lb ∈ [50, 300], 100 as default
                        lt ∈ [90, 5000], 1800 as default
                        st ∈ [90, 5000], 900 as default
                        sf ∈ {0, 1}, 1 as default
  wallclock-limit       48 hours
  cutoff-time           4 hours
  memory limit          300 GB
  objective             total improvements of widths
  limit resources       False
  model type            Gaussian process
  acquisition function  expected improvement
  random state          42

Table 4 shows the performance of the systematically hand-tuned TW-SLIM (the default configuration of TW-SLIM (Fichte, Lodha, and Szeider 2017)) and the automatically tuned TW-SLIM (SB-all).

Table 4: Performance comparison between the original hand-tuned TW-SLIM and SB-all on 2664 training instances and 667 test instances. All results have the format TΣ/T#.
  Algorithms          Training set  Test set
  hand-tuned TW-SLIM  1361/640      398/182
  SB-all              1674/683      457/175

On the training instances, the automatically tuned TW-SLIM exhibits significant improvements on both measurements: TΣ increased by 313 and T# increased by 43. On the test instances, we observe that TΣ increased by 59 while T# decreased by 7.

Clustering-Based Offline Configuration (CC)
For the instance-specific configuration of TW-SLIM, we consider the nine features listed in Table 5, where seven features describe properties of the TD (and may change during the optimization process), and the other two describe properties of the input graph (which remains constant during the optimization process).

Table 5: Selected features of problem instances
  Feature             Aspect  Range
  Number of bags      TDs     [2, 418434]
  Largest bag size    TDs     [2, 3082]
  Number of vertices  Graphs  [100, 468913]
  Sum of bag sizes    TDs     [300, 2404836]
  Smallest bag size   TDs     [2, 400]
  Sum of out degrees  TDs     [1, 418433]
  Number of leaves    TDs     [1, 159582]
  Depth               TDs     [1, 306]
  Number of edges     Graphs  [100, 863026]

We utilize constrained K-means (Bradley, Bennett, and Demiriz 2000) to cluster the training instances because it can control the number of instances within each cluster. We set the number of clusters to 30 and add the minimum (50) and maximum (200) number of instances per cluster as clustering constraints. Thereby, we can balance the distribution of training instances, an issue also reported for previous algorithms (Kadioglu et al. 2010). SMAC can then search for promising configurations within the different clusters of instances separately. We can use the same clustering model to predict the best configurations for test instances by classifying their features into the most similar clusters. We refer to this method as CC-sta (clustering-based, static). To examine the effectiveness of our constrained clustering with the selected nine features, we apply the high-dimensional visualization tool t-distributed stochastic neighbor embedding (T-SNE) (Hinton and Roweis 2002) to visualize the clusters of training instances learned by constrained K-means.

Figure 2: The distribution of training instances is visualized by T-SNE. Different colors of dots are instances within different clusters; the numbers (0 to 29) are the 30 clustering labels.

From the distribution of the different clusters in Figure 2, most instances with similar features are clustered into the same cluster, which indicates the effectiveness of the constrained K-means clustering.
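A minimal clustering sketch with scikit-learn follows. Note it uses plain K-means: the paper's constrained K-means additionally enforces cluster sizes in [50, 200], which requires a dedicated constrained solver (e.g., along the lines of Bradley, Bennett, and Demiriz 2000) and is not reproduced here.

from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_instance_clusters(feature_rows, n_clusters=30, seed=42):
    """Cluster instances by the nine features of Table 5 (unconstrained
    approximation; cluster-size bounds of the paper are NOT enforced)."""
    model = make_pipeline(
        StandardScaler(),
        KMeans(n_clusters=n_clusters, random_state=seed, n_init=10),
    )
    return model.fit(feature_rows)

# model = fit_instance_clusters(train_features)
# model.predict(test_features) assigns test instances to the nearest cluster.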
As a comparison, we utilize the state-of-the-art portfolio algorithm selector AutoFolio (Lindauer et al. 2015), which tries to predict the best configuration for new instances according to their features. We refer to this method as Cluster-AutoFolio (CA). We use the same training instances to obtain the feature table and the same configurations (one from each cluster) for the performance table. Different training times are examined, and the performance of CA with 1, 2, and 6 hours of training time is given in Table 6, where the theoretical performances of both the single best (SB) and the virtual best (VB) are calculated according to the survey (Schede et al. 2022).

Table 6: Performance comparison between SB-all, CC-sta, and CA on 2664 training instances and 667 test instances. All results have the format TΣ/T#.
  Algorithms               Training set  Test set
  CA with 1-hour training  ✗             456/185
  CA with 2-hour training  ✗             456/185
  CA with 6-hour training  ✗             408/169
  SB-all                   1674/683      457/175
  CC-sta                   2143/822      500/190
  SB                       1789/718      468/185
  VB                       2621/826      738/222

From the results in Table 6, CA with 1 or 2 hours of training exhibits the best performance among the different training times. Somewhat surprisingly, the performance of CA is worse than that of CC-sta and even worse than the performance of the theoretical SB. CC-sta, in contrast, outperforms SB-all significantly with respect to TΣ, bridging 43% and 11% of the gap between the SB and the VB on the training set and test set, respectively. With respect to T#, CC-sta bridges the gap between the SB and the VB by 96% and 14% on the training set and test set, respectively. Table 6 also shows the performance of SB-all and CC-sta on the training set and test set. The settings of SMAC for the whole set of training instances and for a single cluster of training instances are the same; for the former task, SMAC had access to 80 cores, and for the latter task, SMAC was given 8 cores. For both measurements, TΣ and T#, CC-sta outperforms SB-all. Compared with the performance of the hand-tuned TW-SLIM, CC-sta yields increases in TΣ (102) and T# (8), and there is also an increase compared with SB-all: TΣ (43) and T# (15).

Dynamic Configuration Selection
Offline methods must always obtain some training instances first and then apply the policy learned from the training set to a similar test set. However, in the SLIM context, the optimized subgraphs result in changing instances. Hence, applying the same configuration throughout the whole optimization will be ineffective. Accordingly, we propose the dynamic variant CC-dyn of CC-sta to adapt to the dynamics of the problem during the run. In CC-dyn, if the configuration predicted by the clustering model can improve a TD, the next configuration will again be selected by the same model according to the updated features. Otherwise, if a configuration fails to improve a TD further, a different configuration will be selected randomly from the pool of the 30 configurations, until the global timeout is reached.
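The CC-dyn selection rule is simple enough to state as code; classifier and pool are stand-ins for the fitted clustering model and the 30 per-cluster configurations (names ours, a sketch rather than the actual implementation):

import random

def next_configuration(improved, features, classifier, pool, rng=random):
    """CC-dyn (sketch): keep following the clustering model while it pays
    off; fall back to random exploration over the pool once it stalls."""
    if improved:
        return pool[classifier(features)]  # configuration of the predicted cluster
    return rng.choice(pool)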
The contrast between CC-dyn and SB-one is similar, but SB-one outperforms more clearly for timeouts below 500 seconds: with less than 100 seconds, SB-one obtains around 40% more TΣ than both CC-sta and CC-dyn. This is because we set a good default configuration (lb = 50, lt = 1800, st = 900, sf = 1) for SB-one according to the original hand-tuned TW-SLIM, so SB-one can use this default configuration right from the beginning, whereas CC-sta and CC-dyn arrive at a better configuration only after trying different configurations. Besides, the improvements used to recommend configurations within clusters are calculated with a 7800-second timeout, so within only 100 seconds it is possible that the recommended configuration is not as good as expected.

In general, CC-dyn improves on the performance of CC-sta by applying the adaptive selection policy within the default timeout (7800 seconds). With shorter optimization timeouts, the performance of CC-dyn may deteriorate due to the exploration of promising configurations. Therefore, building upon the insights gained so far through our systematic study, we propose in the next section the iterative cascading policy for applying the configurations more efficiently, to boost the performance of TW-SLIM at various timeout scales.

Iterative Cascading Policy
In this section, we introduce our iterative cascading policy, which combines offline and dynamic methods in a new way, boosting the overall performance across all timeouts.

Algorithm Framework
The starting point for this policy is cascading portfolio scheduling (Eiben et al. 2019; Streeter 2018), where one linear ordering of m configurations (the cascade) is computed offline; the configurations are then run sequentially following this order until a timeout is reached. Differently, we can use the m = 30 configurations obtained with the clustering method (see Section 5) to form such a cascade, and we can adaptively assign different cascading orders to different instances based on their updated features, according to the average performance on the training data from the different clusters. Meanwhile, we sort the configurations by calculating their improvements over the training instances, and we can use two metrics of improvement (TΣ and T#) for sorting. Sometimes the global timeout is reached before all m configurations of the cascade have been run, so running the most promising configurations first is essential. If the first cascade is completed, we take the tree decomposition of the lowest width obtained across all m configurations and start a new round with it as the initial tree decomposition.

Algorithm 1: Dynamic ICP
Input: Initial graph G, heuristic tree decomposition T, a set of cascading orderings of configurations S = {C1, C2, ..., Cn} where Ci = [Ci,1, Ci,2, ..., Ci,m], algorithm selector Classifier(), feature extractor Extractor()
Output: Final tree decomposition T*
1: i ← 0, Tb ← T, Tc ← T // initialization
2: F ← Extractor(G, Tc) // get the updated features
3: l ← Classifier(F) // get the clustering label
4: while not timeout do
5:   if i = m then
6:     i ← 0
7:     Tb ← Tc // update the problem instance
8:     F ← Extractor(G, Tc)
9:     l ← Classifier(F)
10:  end
11:  else
12:    i ← i + 1
13:    Tt ← TW-SLIM_{Cl,i}(G, Tb) // apply Cl,i
14:    if w(Tt) < w(Tc) then
15:      Tc ← Tt // update the best TD of this round
16:    end
17:  end
18: end
19: T* ← Tc
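Read as plain code, Algorithm 1 amounts to the short loop below. This is an illustrative Python transcription, not the authors' implementation; tw_slim_pass, classifier, extractor, and width are hypothetical stand-ins.

```python
# Python transcription of Algorithm 1 (illustrative only); tw_slim_pass,
# classifier, extractor, and width are hypothetical stand-ins.
import time

def icp_dyn(G, T, cascades, classifier, extractor, timeout):
    deadline = time.time() + timeout
    Tb = Tc = T
    label = classifier(extractor(G, Tc))
    i, m = 0, len(cascades[label])
    while time.time() < deadline:
        if i == m:                     # full pass over the cascade: new round
            i = 0
            Tb = Tc                    # restart from the best TD found so far
            label = classifier(extractor(G, Tc))  # re-select the cascade
            m = len(cascades[label])
        else:
            i += 1
            Tt = tw_slim_pass(G, Tb, cascades[label][i - 1], deadline)
            if width(Tt) < width(Tc):
                Tc = Tt                # keep the best TD of this round
    return Tc
```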
The static version of ICP (ICP-sta) then uses the same cascade in the next round and repeats this process until the global timeout is reached. The dynamic version of ICP (ICP-dyn) also proceeds in rounds. However, we formulate 30 different cascades according to the performance of the configurations on the 30 clusters of training instances (the same clusters we used in CC). Then, every time a new round begins, the clustering model (the same as used in CC) is applied to select one of the 30 cascades based on the updated instance features. This way, the cascade is always up-to-date and adjusted in real-time to the current tree decomposition. A pseudocode of ICP-dyn is shown in Algorithm 1 (ICP-sta follows as a more straightforward special case).

We also consider a variant of ICP-sta, labeled (ub), which operates with cascades formed by a subset of the 30 configurations. If we can cover the configuration space with fewer best-performing configurations, we can pass through each cascade more quickly and consequently carry out more rounds and update the ordering more often. Accordingly, we remove all configurations with "unique best number" ub = 0, i.e., those configurations that do not achieve the sole-best performance on any training instance. After filtering, 20 configurations are left to form a cascade.

Algorithm | TS | CA | 100 seconds | 500 seconds | 1000 seconds | 3000 seconds | 7800 seconds
Hand-tuned | O | S | ✗ | ✗ | ✗ | ✗ | 398/182
CA | O | S | ✗ | ✗ | ✗ | ✗ | 456/185
SB-all | O | S | ✗ | ✗ | ✗ | ✗ | 457/175
CC-sta | O | S | 234/132 | 395/165 | 418/172 | 445/179 | 500/190
CC-dyn | O | D | 223/128 | 386/163 | 493/179 | 571/197 | 619/200
SB-one | R | S | 319/160 | 457/177 | 490/184 | 534/190 | 587/194
ICP-sta (TΣ) | O | D | 361/168 | 504/194 | 562/203 | 704/220 | 728/219
ICP-sta (T#) | O | D | 334/161 | 503/190 | 541/196 | 672/212 | 691/213
ICP-dyn (TΣ) | O | D | 296/155 | 471/182 | 541/194 | 658/209 | 712/215
ICP-dyn (T#) | O | D | 338/163 | 506/188 | 574/200 | 682/212 | 710/213
ICP-sta (TΣ) (ub) | O | D | 364/181 | 494/200 | 543/206 | 672/217 | 724/222
Table 7: Performance comparison between different algorithms with timeouts from 100 to 7800 seconds. All results have the format TΣ/T#. (TΣ) and (T#) indicate that the cascading priority is sorted according to TΣ and T#, respectively. (ub) means the configuration set is filtered. TS and CA label the methods according to the criteria proposed by Schede et al. (2022): TS: training setting (O: offline, R: real-time); CA: configuration adjustment (D: dynamic, S: static).

Experimental Analysis
With different settings, we have a series of variants of ICP. We examine these variants across 5 global timeouts for a fair comparison, and we compare ICP with the algorithms proposed above (CC-sta, CC-dyn, SB-one, SB-all, and CA). Table 7 presents the overall results of the different algorithms and the key properties of their techniques. Both static and dynamic ICP outperform the other algorithms significantly at all given timeouts: ICP-sta (TΣ) obtains 228 and 141 more total improvements than the variants of SMAC (SB-all and SB-one, respectively, within 7800 seconds). Moreover, ICP surpasses the portfolio-based algorithm selector AutoFolio (CA) by 40% in TΣ. When comparing ICP-sta and ICP-dyn, we observe that within the timeout range (100, 1000), ICP-dyn (T#) is slightly better than ICP-sta (TΣ), but for the other timeouts, the static ICP is better. For ICP-sta, sorting the cascading priority queue by TΣ is better, as it results in a higher TΣ. In contrast, ICP-dyn prefers T# as the metric for formulating the priority queues.
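The (ub) filter can be computed directly from a per-instance performance table. The sketch below shows one way to do it; the matrix perf is a hypothetical stand-in for the measured improvement of each configuration on each training instance.

```python
# Computing the "unique best number" (ub) filter; `perf` is a
# hypothetical (n_instances x n_configs) matrix of improvements,
# higher is better.
import numpy as np

def filter_unique_best(perf: np.ndarray, configs: list) -> list:
    row_max = perf.max(axis=1, keepdims=True)
    is_best = perf == row_max
    # strict sole best: attains the row maximum and no other config does
    sole_best = is_best & (is_best.sum(axis=1, keepdims=True) == 1)
    ub = sole_best.sum(axis=0)          # per-configuration unique-best count
    return [c for c, u in zip(configs, ub) if u > 0]
```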
As for the effect of the configuration filter based on ub, the filter can even improve the performance further within the 100-second timeout. ICP with the filter still achieves a similar performance level with long timeouts: at 7800 seconds, the performance of ICP-sta (TΣ) and ICP-sta (TΣ) (ub) is similar, even though the latter has fewer configurations in the cascading priority queue.

In general, even though offline algorithm configurators dominate the research domain, we find that offline configurators are not well suited to configuring a process in which the instance is constantly changing, as in TW-SLIM. In our dynamic context, we can boost the performance further by incorporating knowledge learned during offline configuration (the 30 configurations in our CC), adaptive selection methods (CC-dyn), and cascading methods (ICP).

Conclusions and Future Work
We have investigated automatically configuring the complex algorithmic framework TW-SLIM for treewidth minimization, which, at its core, uses a SAT solver locally for a large-scale optimization problem. We adapted clustering-based automated algorithm configuration to our highly dynamic setting, which allowed us to improve significantly over the original hand-tuned configuration. Here, we observed that selecting the configuration through clustering performed better than doing so through an algorithm selector. This finding is interesting as it contrasts with the results obtained in a different setting on the configuration of MaxSAT solvers (Kadioglu et al. 2010). Our new iterative cascading policy (ICP) provides a significant additional boost in performance. This remarkable performance is due to ICP's ability to self-refine in real-time, learning from one round to the next and simultaneously adapting to the dynamic change of the instance. Our results give a good picture of the potential of automated algorithm configuration for a complex algorithmic framework that includes multiple calls to a SAT solver. Although we consider a particular optimization problem as our concrete target (MINTW), we are confident that our findings are relevant to many other problems that can be tackled with SLIM, LNS, or other frameworks that utilize multiple SAT calls.

Acknowledgments
The project leading to this publication has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101034440, the Austrian Science Fund (projects P36420 and P36688), and the Vienna Science and Technology Fund (project ICT19-065).

References
Abseher, M.; Musliu, N.; and Woltran, S. 2017. HTD - A Free, Open-Source Framework for (Customized) Tree Decompositions and Beyond. In Salvagnin, D.; and Lombardi, M., eds., Integration of AI and OR Techniques in Constraint Programming - 14th International Conference, CPAIOR 2017, Padua, Italy, June 5-8, 2017, Proceedings, volume 10335 of Lecture Notes in Computer Science, 376–386. Springer Verlag.
Ansótegui, C.; Sellmann, M.; and Tierney, K. 2009. A Gender-Based Genetic Algorithm for the Automatic Configuration of Algorithms. In Gent, I. P., ed., Principles and Practice of Constraint Programming - CP 2009, 142–157. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-04244-7.
Arnborg, S.; Corneil, D. G.; and Proskurowski, A. 1987. Complexity of Finding Embeddings in a k-Tree. SIAM J. Algebraic Discrete Methods, 8(2): 277–284.
Audemard, G.; and Simon, L. 2018.
On the Glucose SAT Solver. International Journal on Artificial Intelligence Tools, 27(1): 1840001:1–1840001:25.
Bannach, M.; Berndt, S.; and Ehlers, T. 2017. Jdrasil: A Modular Library for Computing Tree Decompositions. In Iliopoulos, C. S.; Pissis, S. P.; Puglisi, S. J.; and Raman, R., eds., 16th International Symposium on Experimental Algorithms, SEA 2017, June 21-23, 2017, London, UK, volume 75 of LIPIcs, 28:1–28:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Bodlaender, H. L. 1993. A Tourist Guide through Treewidth. Acta Cybernetica, 11: 1–21.
Bodlaender, H. L.; and Koster, A. M. C. A. 2010. Treewidth computations. I. Upper bounds. Information and Computation, 208(3): 259–275.
Bradley, P. S.; Bennett, K. P.; and Demiriz, A. 2000. Constrained K-means Clustering. Microsoft Research, Redmond, 20(0): 0.
Eiben, E.; Ganian, R.; Kanj, I.; and Szeider, S. 2019. The Parameterized Complexity of Cascading Portfolio Scheduling. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., Proceedings of NeurIPS 2019, the Thirty-third Conference on Neural Information Processing Systems, 7666–7676.
Fichte, J. K.; Berre, D. L.; Hecher, M.; and Szeider, S. 2023. The Silent (R)evolution of SAT. Communications of the ACM, 66(6): 64–72.
Fichte, J. K.; Lodha, N.; and Szeider, S. 2017. SAT-Based Local Improvement for Finding Tree Decompositions of Small Width. In Gaspers, S.; and Walsh, T., eds., Theory and Applications of Satisfiability Testing - SAT 2017 - 20th International Conference, Melbourne, VIC, Australia, August 28 - September 1, 2017, Proceedings, volume 10491 of Lecture Notes in Computer Science, 401–411. Springer Verlag.
Fitzgerald, T.; Malitsky, Y.; and O'Sullivan, B. 2015. ReACTR: Realtime Algorithm Configuration through Tournament Rankings. In Twenty-Fourth International Joint Conference on Artificial Intelligence. AAAI.
Fitzgerald, T.; Malitsky, Y.; O'Sullivan, B.; and Tierney, K. 2014. ReACT: Real-time Algorithm Configuration through Tournaments. In Proceedings of the International Symposium on Combinatorial Search, volume 5(1), 62–70.
Hinton, G. E.; and Roweis, S. 2002. Stochastic Neighbor Embedding. Advances in Neural Information Processing Systems, 15.
Hutter, F.; Hoos, H. H.; and Leyton-Brown, K. 2011. Sequential Model-based Optimization for General Algorithm Configuration. In Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. Selected Papers 5, 507–523. Springer.
Kadioglu, S.; Malitsky, Y.; Sellmann, M.; and Tierney, K. 2010. ISAC - Instance-Specific Algorithm Configuration. In Coelho, H.; Studer, R.; and Wooldridge, M. J., eds., ECAI 2010 - 19th European Conference on Artificial Intelligence, Lisbon, Portugal, August 16-20, 2010, Proceedings, volume 215 of Frontiers in Artificial Intelligence and Applications, 751–756. IOS Press.
Kloks, T. 1996. Treewidth of circle graphs. International Journal of Foundations of Computer Science, 7: 111–120.
Kulikov, A. S.; Pechenev, D.; and Slezkin, N. 2022. SAT-Based Circuit Local Improvement. In Szeider, S.; Ganian, R.; and Silva, A., eds., 47th International Symposium on Mathematical Foundations of Computer Science, MFCS 2022, August 22-26, 2022, Vienna, Austria, volume 241 of LIPIcs, 67:1–67:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Lindauer, M.; Eggensperger, K.; Feurer, M.; Biedenkapp, A.; Deng, D.; Benjamins, C.; Ruhkopf, T.; Sass, R.; and Hutter, F. 2021.
SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. arXiv:2109.09831.
Lindauer, M.; Hoos, H. H.; Hutter, F.; and Schaub, T. 2015. AutoFolio: An Automatically Configured Algorithm Selector. Journal of Artificial Intelligence Research, 53: 745–778.
Lodha, N.; Ordyniak, S.; and Szeider, S. 2016. A SAT Approach to Branchwidth. In Creignou, N.; and Berre, D. L., eds., Theory and Applications of Satisfiability Testing - SAT 2016 - 19th International Conference, Bordeaux, France, July 5-8, 2016, Proceedings, volume 9710 of Lecture Notes in Computer Science, 179–195. Springer Verlag.
Lodha, N.; Ordyniak, S.; and Szeider, S. 2017. SAT-Encodings for Special Treewidth and Pathwidth. In Gaspers, S.; and Walsh, T., eds., Theory and Applications of Satisfiability Testing - SAT 2017 - 20th International Conference, Melbourne, VIC, Australia, August 28 - September 1, 2017, Proceedings, volume 10491 of Lecture Notes in Computer Science, 429–445. Springer Verlag.
López-Ibáñez, M.; Dubois-Lacoste, J.; Pérez Cáceres, L.; Birattari, M.; and Stützle, T. 2016. The Irace Package: Iterated Racing for Automatic Algorithm Configuration. Operations Research Perspectives, 3: 43–58.
Peruvemba Ramaswamy, V.; and Szeider, S. 2021a. Learning Fast-Inference Bayesian Networks. Advances in Neural Information Processing Systems, 34.
Peruvemba Ramaswamy, V.; and Szeider, S. 2021b. Turbocharging Treewidth-Bounded Bayesian Network Structure Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5): 3895–3903.
Pisinger, D.; and Ropke, S. 2010. Large neighborhood search. In Handbook of Metaheuristics, 399–419. Springer Verlag.
Ramaswamy, V. P.; and Szeider, S. 2020. MaxSAT-Based Postprocessing for Treedepth. In Simonis, H., ed., Principles and Practice of Constraint Programming - 26th International Conference, CP 2020, Louvain-la-Neuve, Belgium, September 7-11, 2020, Proceedings, volume 12333 of Lecture Notes in Computer Science, 478–495. Springer.
Ramaswamy, V. P.; and Szeider, S. 2022. Learning Large Bayesian Networks with Expert Constraints. In Cussens, J.; and Zhang, K., eds., Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, volume 180 of Proceedings of Machine Learning Research, 1592–1601. PMLR.
Reichl, F.; Slivovsky, F.; and Szeider, S. 2023. Circuit Minimization with QBF-Based Exact Synthesis. In Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, 4087–4094. AAAI Press.
Roussel, O. 2012. Description of ppfolio 2012. In Balint, A.; et al., eds., Proceedings of SAT Challenge 2012, 47. University of Helsinki.
Samer, M.; and Veith, H. 2009. Encoding Treewidth into SAT. In Theory and Applications of Satisfiability Testing - SAT 2009, 12th International Conference, SAT 2009, Swansea, UK, June 30 - July 3, 2009. Proceedings, volume 5584 of Lecture Notes in Computer Science, 45–50. Springer Verlag.
Schede, E.; Brandt, J.; Tornede, A.; Wever, M.; Bengs, V.; Hüllermeier, E.; and Tierney, K. 2022. A Survey of Methods for Automated Algorithm Configuration. Journal of Artificial Intelligence Research, 75: 425–487.
Schidler, A. 2022. SAT-Based Local Search for Plane Subgraph Partitions (CG Challenge). In Goaoc, X.; and Kerber, M., eds., 38th International Symposium on Computational Geometry, SoCG 2022, June 7-10, 2022, Berlin, Germany, volume 224 of LIPIcs, 74:1–74:8. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Schidler, A.; and Szeider, S. 2021.
SAT-based Decision Tree Learning for Large Data Sets. In Proceedings of AAAI'21, the Thirty-Fifth AAAI Conference on Artificial Intelligence. AAAI Press.
Streeter, M. 2018. Approximation Algorithms for Cascading Prediction Models. In Dy, J. G.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of JMLR Workshop and Conference Proceedings, 4759–4767. JMLR.org.
Tamaki, H. 2022. Heuristic Computation of Exact Treewidth. In Schulz, C.; and Uçar, B., eds., 20th International Symposium on Experimental Algorithms, SEA 2022, July 25-27, 2022, Heidelberg, Germany, volume 233 of LIPIcs, 17:1–17:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Xia, H.; and Szeider, S. 2023. SAT-Based Tree Decomposition with Iterative Cascading Policy Selection. Zenodo.org Online Repository. DOI 10.5281/zenodo.10407175, https://zenodo.org/records/10407175.
Xu, L.; Hutter, F.; Hoos, H. H.; and Leyton-Brown, K. 2008. SATzilla: Portfolio-based Algorithm Selection for SAT. Journal of Artificial Intelligence Research, 32: 565–606.
2024
911
18,753
Engineering an Exact Pseudo-Boolean Model Counter*
Suwei Yang1,2,3, Kuldeep S. Meel3,4
1GrabTaxi Holdings 2Grab-NUS AI Lab 3National University of Singapore 4University of Toronto

Abstract
Model counting, a fundamental task in computer science, involves determining the number of satisfying assignments to a Boolean formula, typically represented in conjunctive normal form (CNF). While model counting for CNF formulas has received extensive attention with a broad range of applications, the study of model counting for Pseudo-Boolean (PB) formulas has been relatively overlooked. Pseudo-Boolean formulas, being more succinct than propositional Boolean formulas, offer greater flexibility in representing real-world problems. Consequently, there is a crucial need to investigate efficient techniques for model counting for PB formulas. In this work, we propose the first exact Pseudo-Boolean model counter, PBCount, which relies on a knowledge compilation approach via algebraic decision diagrams. Our extensive empirical evaluation shows that PBCount can compute counts for 1513 instances while the current state-of-the-art approach could only handle 1013 instances. Our work opens up several avenues for future work in the context of model counting for PB formulas, such as the development of preprocessing techniques and the exploration of approaches other than knowledge compilation.

*The full version of the paper is available at https://arxiv.org/abs/2312.12341 and code is available at https://github.com/grab/pbcount. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

1 Introduction
Propositional model counting involves computing the number of satisfying assignments to a Boolean formula. Model counting is closely related to the Boolean satisfiability problem, where the task is to determine whether there exists an assignment of variables such that the Boolean formula evaluates to true. Boolean satisfiability and model counting have been extensively studied in the past decades and are the cornerstone of an extensive range of real-life applications such as software design, explainable machine learning, planning, and probabilistic reasoning (Bacchus, Dalmao, and Pitassi 2003; Narodytska et al. 2019; Jackson 2019; Fan, Miller, and Mitra 2020). Owing to decades of research, there are numerous tools and techniques developed for various aspects of Boolean satisfiability and model counting, from Boolean formula preprocessors to SAT solvers and model counters.

The dominant representation format of Boolean formulas is Conjunctive Normal Form (CNF), and accordingly, the tools in the early days focused on CNF as the input format. Over the past decade and a half, there has been considerable effort in exploring other representation formats: one such format that has gained significant interest from the community is Pseudo-Boolean (PB) formulas, which are expressed as a conjunction of linear inequalities. PB formulas are shown to be more succinct than CNF formulas and natural for problems such as knapsack, sensor placement, binarized neural networks, and the like. Furthermore, PB formulas are able to express constraints more succinctly compared to Boolean formulas in CNF (Berre et al. 2018). As an example, a single PB constraint is sufficient to express at-most-k and at-least-k types of cardinality constraints, whereas the equivalent in CNF would require a polynomial number of clauses (Sinz 2005).
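The at-least-k case makes the gap tangible: with auxiliary variables, encodings such as the sequential counter of Sinz (2005) need only polynomially many clauses, while a direct auxiliary-free CNF encoding blows up combinatorially. The toy snippet below (ours, not from the paper) materializes that direct encoding.

```python
# "At least k of x1..xm" is one PB constraint (x1 + ... + xm >= k), but
# a direct CNF encoding without auxiliary variables needs one clause per
# (m-k+1)-subset of the literals: every such subset must hold a true one.
from itertools import combinations
from math import comb

def at_least_k_direct_cnf(m: int, k: int):
    return [list(clause) for clause in combinations(range(1, m + 1), m - k + 1)]

m, k = 20, 10
print(comb(m, m - k + 1))                          # 167960 clauses for m=20, k=10
assert len(at_least_k_direct_cnf(m, k)) == comb(m, m - k + 1)
```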
On a higher level, an arbitrary CNF clause can be expressed with a single PB constraint, but the converse is not true (Berre et al. 2018). The past decade has witnessed the development of satisfiability solving techniques based on the underlying proof systems naturally suited to PB constraints, and accordingly, state-of-the-art PB solvers, such as RoundingSat, significantly outperform CNF solvers on problems that are naturally encoded in PB (Elffers and Nordström 2018; Devriendt 2020; Devriendt et al. 2021). In contrast to satisfiability, almost all the work in the context of model counting has focused on the representation of Boolean formulas in Conjunctive Normal Form (CNF), with the sole exception of the development of an approximate model counter for PB formulas (Yang and Meel 2021).

The primary contribution of this work is to address the aforementioned gap through the development of a native scalable exact model counter, called PBCount, for PB formulas. PBCount is based on the knowledge compilation paradigm and, in particular, compiles a given PB formula into algebraic decision diagrams (ADDs) (Bahar et al. 1993), which allows us to perform model counting. We perform extensive empirical evaluations on benchmark instances arising from different applications, such as sensor placement, multi-dimension knapsack, and combinatorial auction benchmarks (Gens and Levner 1980; Blumrosen and Nisan 2007; Latour, Sen, and Meel 2023). Our evaluations highlight the efficacy of PBCount against existing state-of-the-art CNF model counters. In particular, PBCount is able to successfully count 1513 instances while the prior state of the art could only count 1013 instances, thereby demonstrating significant runtime improvements. It is worth remarking that PBCount achieves superior performance with substantially weaker preprocessing techniques in comparison to the techniques employed in CNF model counters, making a strong case for the advantages of native PB model counting and reasoning. Furthermore, given the crucial importance of preprocessing techniques for CNF counting, we hope our work will motivate the development of preprocessing techniques for PB model counting.

The rest of the paper is organized as follows: We discuss the preliminaries and the existing counting algorithm in Section 2. In Section 3, we discuss existing works and how they relate to our approach, which we detail in Section 4. Following that, we analyze the empirical results of PBCount against existing tools in Section 5 and conclude in Section 6.

2 Preliminaries
Boolean Formula
A Boolean variable can take values true or false. A literal is either a Boolean variable or its negation. Let F be a Boolean formula. F is in conjunctive normal form (CNF) if F is a conjunction of clauses, where each clause is a disjunction of literals. F is satisfiable if there exists an assignment τ of the variables of F such that F evaluates to true. We refer to τ as a satisfying assignment of F and denote the set of all τ as Sol(F). Model counting for a Boolean formula F refers to the task of determining |Sol(F)|.

Pseudo-Boolean Formula
A PB constraint is either an equality or inequality of the form $\sum_{i=1}^{n} a_i x_i \;\square\; k$, where $x_1, ..., x_n$ are Boolean literals, $a_1, ..., a_n$ and k are integers, and $\square$ is one of $\{\ge, =, \le\}$. We refer to $a_1, ..., a_n$ as term coefficients in the PB constraint, where each term is of the form $a_i x_i$. A PB formula, G, consists of a set of PB constraints.
G is satisfiable if there exists an assignment τ of all variables of G such that all its PB constraints hold. PB model counting refers to the computation of |Sol(G)|, where Sol(G) is the set of all satisfying assignments of G.

Projected Model Counting
Let G be a formula defined over the set of variables X. Let $V_i, V_j$ be subsets of X such that $V_i \cap V_j = \emptyset$ and $V_i \cup V_j = X$. Projected model counting of G on $V_i$ refers to the number of assignments of all variables in $V_i$ such that there exists an assignment of the variables in $V_j$ that makes G evaluate to true (Aziz et al. 2015). In the evaluations, the CNF model counter baselines perform projected model counting on the original variables of the PB formula, to avoid additional counts due to auxiliary variables introduced in the PB to CNF conversion process.

Algebraic Decision Diagram
An algebraic decision diagram (ADD) is a directed acyclic graph representation of a function $f : 2^X \to S$, where X is the set of Boolean variables that f is defined over, and S is an arbitrary set known as the carrier set. We denote the function represented by an ADD ψ as Func(ψ). The internal nodes of an ADD represent decisions on variables $x \in X$, and the leaf nodes represent values $s \in S$. In this work, we focus on the setting where $S \subset \mathbb{Z}$. As an example, an ADD representing $3x_1 + 4x_2$ is shown in Figure 1. In the figure, a dotted arrow from an internal node represents the case where the corresponding variable is set to false, and a solid arrow represents the case where it is set to true.

Figure 1: An ADD representing $3x_1 + 4x_2$.

In addition, we make use of the Apply and ITE operations on ADDs (Bryant 1986; Bahar et al. 1993). The Apply operation takes as input a binary operator ▷◁ and two ADDs ψ1, ψ2, and outputs an ADD ψ3 such that Func(ψ3) = Func(ψ1) ▷◁ Func(ψ2). The ITE operation (if-then-else) involves 3 ADDs ψ1, ψ2, ψ3, where the carrier set of ψ1 is restricted to {0, 1}. ITE outputs an ADD that is equivalent to having the 1-valued leaf nodes in ψ1 replaced with ψ2 and the 0-valued leaf nodes with ψ3.

Relation of Pseudo-Boolean Constraint to CNF Clause
Given an arbitrary CNF clause D, one can always convert D to a PB constraint. Given that D is of the form $\bigvee_{i=1}^{m} l_i$, where $l_1, ..., l_m$ are Boolean literals, D can be represented by a single PB constraint $\sum_{i=1}^{m} a_i l_i \ge 1$ where all coefficients $a_1, ..., a_m$ are 1. However, there are PB constraints that require polynomially many CNF clauses to represent. An example would be $\sum_{i=1}^{m} l_i \ge k$, which requires at least k of the m literals to be true. We refer the reader to the Appendix for statistics on the number of variables and clauses before and after PB to CNF conversion for the benchmarks used.

2.1 Model Counting with ADDs
In this work, we adapt the existing dynamic programming counting algorithm of ADDMC (Dudek, Phan, and Vardi 2020a), shown in Algorithm 1, to perform PB model counting with ADDs. This includes using the default ADDMC configurations for the ADD variable ordering (MCS) and the cluster ordering ρ (BOUQUET TREE). The algorithm takes in a list φ of ADDs, representing all constraints, and an order ρ in which to process the ADDs. The ADD ψ is initialized with value 1. According to the cluster ordering ρ, cluster ADDs ψj are formed using the Apply operation with the × operator on each of the individual constraint ADDs of the constraints in the cluster. The cluster ADD ψj is combined with ψ using the same Apply operation.
If a variable x does not appear in later clusters in ρ, it is abstracted out from ψ (the early projection process in ADDMC) using $\psi \leftarrow W(\bar{x}) \times \psi[x \mapsto 0] + W(x) \times \psi[x \mapsto 1]$ in line 8, where W(·) is the user-provided literal weight function. In unweighted model counting, W(·) is 1 for all literals. Once all clusters have been processed, the remaining unprocessed variables x of the formula G are abstracted out using the same operation as before (line 10). After all variables are abstracted out, ψ is a constant ADD that represents the final count.

Algorithm 1: computeCount(φ, ρ)
Input: φ - list of ADDs, ρ - cluster merge ordering
Output: model count
1: ψ ← constantADD(1)
2: for cluster Aj ∈ ρ do
3:   ψj ← constantADD(1)
4:   for constraint Ci ∈ Aj do
5:     ψj ← ψj × φ[Ci]
6:   ψ ← ψ × ψj
7:   for each x ∈ ψ where x not in later clusters in ρ do
8:     ψ ← W(x̄) × ψ[x ↦ 0] + W(x) × ψ[x ↦ 1]
9: for all unprocessed variables x do
10:  ψ ← W(x̄) × ψ[x ↦ 0] + W(x) × ψ[x ↦ 1]
11: return getValue(ψ)

3 Related Work
Boolean Formula Preprocessing
Boolean formula preprocessing involves simplifying a given formula to reduce the runtimes of downstream tasks such as determining the satisfiability of the formula (SAT solving) and model counting. Preprocessing is crucial to the performance improvements of modern SAT solvers and model counters in recent decades. There are numerous preprocessing techniques introduced over the years by the research community, some of which are unit propagation, bounded variable elimination, failed literal probing, and vivification (Dowling and Gallier 1984; Berre 2001; Eén and Biere 2005; Piette, Hamadi, and Sais 2008). In this work, we adapt some of these SAT preprocessing techniques, namely unit propagation and a variant of failed literal probing, to simplify PB formulas.

Search-Based Model Counters
Among the numerous existing CNF model counters, we can classify them into two main categories – search-based model counters and decision diagram-based model counters. Notable existing search-based model counters include GPMC, Ganak, and Sharpsat-TD (Ryosuke Suzuki and Sakai 2017; Sharma et al. 2019; Korhonen and Järvisalo 2021). Search-based model counters work by setting values of variables in a given formula in an iterative manner, which is equivalent to implicitly exploring a search tree. In addition, search-based model counters adapt techniques such as sub-component caching from SAT solving for more efficient computation.

Decision Diagram-Based Model Counters
Decision diagram-based model counters employ knowledge compilation techniques to compile a given formula into directed acyclic graphs (DAGs) and perform model counting with these DAGs. Some of the recent decision diagram-based model counters are D4, ExactMC, ADDMC, and its related variant DPMC (Lagniez and Marquis 2017; Dudek, Phan, and Vardi 2020a,b; Lai, Meel, and Yap 2021). D4 and ExactMC compile the formula in a top-down manner into the respective decision diagram forms. In contrast, ADDMC and DPMC (in decision diagram mode) perform bottom-up compilation of algebraic decision diagrams (ADDs). In this work, we base PBCount on ADDMC, introduce techniques to compile a PB constraint directly into an ADD, and employ the same counting approach as ADDMC.

Pseudo-Boolean Conversion
One way to perform PB model counting is to convert the PB formula to a Boolean formula and use existing CNF model counters. A notable tool for the conversion of PB to CNF is PBLib (Philipp and Steinke 2015).
PBLib implements various encodings to convert PB formulas into CNF form, some of which include cardinality networks, sorting networks, and BDD-based encodings (Eén and Sörensson 2006; Abío et al. 2011, 2013). In this work, we use the default settings of the PBEncoder binary provided as part of PBLib to perform the required conversions. We subsequently compare PBCount against state-of-the-art CNF model counters. It is worth noting that the model counting task for a PB formula becomes a projected model counting task on the corresponding CNF formula, as previously mentioned in Section 2.

4 Approach
We show the overall flow of PBCount in Figure 2. We first preprocess the PB formula using propagation and assumption probing. Subsequently, we compile each of the PB constraints into an algebraic decision diagram (ADD). Next, we merge the constraint ADDs using the Apply operation and perform model counting by abstracting out variables (Section 2.1). The model count is the value remaining after all variables are abstracted out. Without loss of generality, the algorithms described in this work handle PB constraints involving '=' and '≥' operators, as '≤' type constraints can be manipulated into '≥' type constraints.

Figure 2: Overall flow of our PB model counter PBCount. Shaded boxes indicate our contributions.

4.1 Preprocessing
Figure 3: Preprocessing of a PB formula.

The preprocessing phase of PBCount performs assumption probing and unit propagation (Biere, Järvisalo, and Kiesl 2021). PBCount repeatedly performs unit propagation and assumption probing until no change is detected, as shown in Algorithm 2.

Algorithm 2: Preprocess(G)
Input: G - PB formula
Output: G′ - preprocessed PB formula
1: mapping ← []; G′ ← G
2: repeat
3:   for all single-variable constraints C ∈ G′ do
4:     mapping ← mapping ∪ InferDecision(C)
5:   G′ ← propagate(G′, mapping)
6:   for all variables x ∈ G′ do
7:     mapping ← mapping ∪ AssumProbe(G′, x)
8:   G′ ← propagate(G′, mapping)
9: until G′ does not change
10: return G′

Sign Manipulation
Let C be the PB constraint $-3x_1 - 4x_2 \le -3$. One can multiply both sides of the constraint by −1 to form $3x_1 + 4x_2 \ge 3$. In addition, one can switch the sign of the coefficient of $x_2$ as follows:
$3x_1 + 4x_2 \ge 3$
$3x_1 + 4(1 - \bar{x}_2) \ge 3$
$3x_1 - 4\bar{x}_2 \ge -1$
In general, one is able to manipulate the sign of any term coefficient as shown in the example above. We use this technique to optimize the PB constraint compilation approaches, which we discuss in later sections.

Propagation
Propagation in the Pseudo-Boolean context refers to the simplification of the PB constraints when decisions on some PB variables can be inferred. In particular, one might be able to infer a decision on PB variable $x_i$ from PB constraint $C_j$ when the constraint is of either the 1) $a_i x_i \ge k$ or 2) $a_i x_i = k$ form. We defer the details of the InferDecision algorithm to the Appendix.
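Since InferDecision is only given in the paper's appendix, the sketch below is our own reading of what inference on a single-variable constraint a·x ≥ k or a·x = k can look like; the return convention (None for "no forced value") is an assumption on our part.

```python
# Our reading of InferDecision for a single-variable constraint
# a*x >= k or a*x = k with x in {0, 1}; treat this as a sketch,
# the authors' exact version is in their appendix.
def infer_decision(a: int, k: int, is_eq: bool):
    vals = {0: 0, 1: a}                       # value of a*x for x = 0, 1
    ok = [x for x, v in vals.items()
          if (v == k if is_eq else v >= k)]
    if len(ok) == 1:
        return ok[0]                          # x is forced to this value
    return None                               # both work (no inference) or neither (unsat)

assert infer_decision(3, 3, True) == 1        # 3x = 3 forces x = 1
assert infer_decision(3, 1, False) == 1       # 3x >= 1 forces x = 1
assert infer_decision(3, 0, False) is None    # 3x >= 0 holds either way
```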
Algorithm 3: AssumProbe(G, xi)
Input: G - PB formula, xi - assumption variable
Output: mapping of variable values
1: temp, mapping ← []
2: for all constraints C ∈ G[xi ↦ 1] do
3:   temp ← temp ∪ InferDecision(C)
4: for all constraints C ∈ G[xi ↦ 0] do
5:   temp ← temp ∪ InferDecision(C)
6: for all variables xj, where j ≠ i do
7:   if exactly one literal of xj is in temp then
8:     mapping ← mapping ∪ temp[xj]
9: return mapping

Assumption Probing
Assumption probing can be viewed as a weaker form of failed literal probing (Biere, Järvisalo, and Kiesl 2021), as well as a single-step look-ahead propagation process. For an arbitrary variable $x_i \in G$, where G is the PB formula, assumption probing involves performing propagation and decision inference independently for $x_i = 0$ and $x_i = 1$. If another variable $x_j$ is inferred to have the same value assignment $\tau[x_j]$ in both cases, then it can be inferred that $x_j$ should be set to $\tau[x_j]$ in all satisfying assignments of G. Algorithm 3 illustrates the process for a single variable $x_i$; in the preprocessing stage, we perform assumption probing on all variables in G.

4.2 Pseudo-Boolean Constraint Compilation
In this work, we introduce two approaches, namely top-down and bottom-up, to compile each constraint of a PB formula into an ADD. We use T, k, and eq in place of the PB constraint C when describing the compilation algorithms: T refers to the term list, which is the list of $a_i x_i$ terms of C; k is the constraint constant; and eq indicates whether C is an '=' constraint.

Figure 4: An ADD ψ1 representing $3x_1 + 4x_2 \ge 3$.

Bottom-up ADD Constraint Compilation
In order to compile an ADD that represents a PB constraint of the form $\sum_{i=1}^{n} a_i x_i \,[\ge, =, \le]\, k$, we first compile the expression $\sum_{i=1}^{n} a_i x_i$ from literal and constant ADDs, as shown in line 3 of Algorithm 4. A constant ADD representing an integer $a_i$ is a single leaf node that has value $a_i$. A literal ADD comprises an internal node, which represents the variable x, and true and false leaf nodes, which represent the evaluated values of the literal when x is set to true and false. With the literal and constant ADDs, we use Apply with the × operator to form an ADD for each term $a_i x_i$. We use Apply with the + operator on the term ADDs to form the ADD representing the expression $\sum_{i=1}^{n} a_i x_i$. As an example, the ADD ψ for the expression $3x_1 + 4x_2$ is shown in Figure 1. To account for the inequality or equality, we look at the value of the leaf nodes in the expression ADD ψ and determine whether they satisfy the constraint (lines 4 to 10). We replace a leaf node with the 1 node if the constraint is satisfied and with the 0 node otherwise; the resultant ADD is illustrated in Figure 4.

Top-down ADD Constraint Compilation
In contrast to the bottom-up ADD compilation approach, the top-down ADD compilation for a given PB constraint involves the if-then-else (ITE) operation for decision diagrams. We only consider PB constraints that involve = or ≥, as mentioned previously. The top-down compilation algorithm (Algorithm 5) makes use of recursive calls of Algorithm 6 to construct an ADD that represents a given PB constraint. In particular, Algorithms 5 and 6 work by iterating through the terms of the PB constraint using idx. The algorithms build the sub-ADDs for the case where the literal at position idx evaluates to true (the if-then case) and otherwise (the else case) of the ITE operation, while updating the constraint constant k (lines 2-3 of Algorithm 5 and lines 9-10 of Algorithm 6).
Notice that the top-down compilation approach allows for early termination when the current k value is negative in the ≥ k case. However, early termination is possible only if all unprocessed coefficients are positive, implying that k in subsequent recursive calls cannot increase. One way to achieve this is to sort the term list T in ascending order of term coefficients, processing terms with negative coefficients before positive coefficients.

Algorithm 4: compileConstraintBottomUp(T, k, eq)
Input: T - term list, k - constraint value, eq - indicator if constraint is '=' type
Output: ψ - constraint ADD
1: ψ ← constantADD(0)
2: for term t in T do
3:   ψ += constantADD(t.coeff) × literalADD(t.literal)
4: for node n in LeafNode(ψ) do
5:   if eq is true & n.value = k then
6:     n.value ← 1
7:   else if eq is false & n.value ≥ k then
8:     n.value ← 1
9:   else
10:    n.value ← 0
11: return ψ

Algorithm 5: compileConstraintTopDown(T, k, eq)
Assumption: T is in ascending order of term coefficients, or all coefficients are non-negative
Input: T - term list, k - constraint value, eq - indicator if constraint is '=' type
Output: ψ - constraint ADD
1: ψ ← literalADD(T[0].literal)
2: ψlo ← compileTDRecur(T, k, eq, 1)
3: ψhi ← compileTDRecur(T, k − T[0].coeff, eq, 1)
4: ψ.ITE(ψhi, ψlo)
5: return ψ

Optimizations for Bottom-up Compilation
In the bottom-up compilation approach, an ADD is built from the individual literal and constant ADDs to represent the expression, before subsequently having its leaf node values converted to 1 and 0 depending on whether the PB constraint is satisfied. In the process, an ADD can grow exponentially in size with respect to the number of variables processed. In order to keep the intermediate ADD small during the compilation process, we introduce an optimization for bottom-up compilation. The key idea is to increase the number of shared sub-components of the intermediate ADD, and this amounts to processing the PB constraint terms in a manner that results in fewer distinct subset sums of term coefficients, as every distinct subset sum requires a separate leaf node. To this end, we optimize the compilation process by sorting the terms according to the absolute values of their coefficients in ascending order. Subsequently, we manipulate the coefficients, using $x = (1 - \bar{x})$, such that alternating terms have coefficients of different signs. We defer the pseudo code to the Appendix.

Algorithm 6: compileTDRecur(T, k, eq, idx)
Input: T - term list, k - current constraint value, eq - input constraint equality, idx - index of current term in T
Output: ψ - constraint ADD from idx to the end of T
1: if T[idx].coeff ≥ 0 then
2:   isPos ← true
3: if eq & isPos & k < 0 then
4:   return constantADD(0)
5: else if !eq & isPos & k ≤ 0 then
6:   return constantADD(1)
7: else if idx < T.length then
8:   ψ ← literalADD(T[idx].literal)
9:   ψlo ← compileTDRecur(T, k, eq, idx + 1)
10:  ψhi ← compileTDRecur(T, k − T[idx].coeff, eq, idx + 1)
11:  return ψ.ITE(ψhi, ψlo)
12: else
13:   if eq & k = 0 then
14:     return constantADD(1)
15:   else
16:     return constantADD(0)

Optimizations for Top-down Compilation
Similarly, we also introduce optimizations for the top-down compilation approach. Recall that one is only able to perform early termination for PB constraints of the form $\sum a_i x_i \ge k$ after all negative-coefficient terms have been processed. To this end, we manipulate all coefficients to be positive and adjust k accordingly, so that early termination is possible.
Furthermore, we sort the terms in descending order of the term coefficients, as larger coefficients are more likely to satisfy the constraint. We defer the pseudo code to the Appendix.

Dynamic Compilation
A PB formula can include more than one PB constraint. As we will show in a case study in the experiments section, the choice of compilation approach has a substantial impact on the overall runtime. To this end, we introduce a dynamic heuristic (Algorithm 7) to select the appropriate compilation approach and perform the optimization of the compilation process as previously discussed. In Algorithm 7, we choose top-down compilation if either condition 1 or condition 2 is met. Conditions 1 and 2 are designed to be in favor of the bottom-up compilation approach; we provide a performance analysis in the experiments section.

Algorithm 7: compileConstraintDynamic(T, k, eq)
Input: T - term list, k - constraint value, eq - input constraint equality
Output: ψ - constraint ADD
Cond 1: T.length ≤ 25 and k < 25th percentile of T.coeff
Cond 2: k < 25th percentile of T.coeff and unique coefficient rate ≥ 0.9 and unique adjacent difference rate ≥ 0.85
1: if cond 1 or cond 2 then
2:   bottomUp ← false
3: else
4:   bottomUp ← true
5: if bottomUp then
6:   return optimizeCompileBottomUp(T, k, eq)
7: else
8:   return optimizeCompileTopDown(T, k, eq)

5 Experiments
We performed extensive empirical evaluations to compare the runtime performance of PBCount with state-of-the-art exact model counters. Our empirical evaluation focuses on benchmarks arising from three application domains: sensor placement, auctions, and multi-dimensional knapsack. Through our evaluations and analysis, we sought to answer the following research questions:
RQ 1 How does the runtime performance of PBCount compare to that of the state-of-the-art approaches?
RQ 2 How does the dynamic compilation approach impact the runtime performance of PBCount?

Setup
We performed our evaluations on machines with AMD EPYC 7713 processors. Each benchmark instance is provided with 1 core, 16 GB memory, and a timeout of 3600 seconds. Since all the state-of-the-art exact model counters take CNF as input, we employed the CNF model counters with the help of the PB to CNF conversion tool PBLib1 (Philipp and Steinke 2015). We evaluated PBCount against state-of-the-art projected counters: DPMC, D42, and GPMC; D4 and GPMC are among the winners of the projected counting track at the Model Counting Competition 2022 and 2023.

Benchmarks
We generated 3473 benchmarks from the following application areas: sensor placement, auctions, and multi-dimensional knapsack. We detail the benchmark statistics (number of variables and constraints) in the Appendix.
• The sensor placement benchmark setting (1473 instances after removal of 0 counts) is adapted from prior work on identifying code sets (Latour, Sen, and Meel 2023). Given a network graph and a maximum number of sensors allowed, count the number of ways to place sensors such that failures in the network are uniquely identifiable.
• For the auction benchmark setting (1000 instances), we adapt the combinatorial auction setting (Blumrosen and Nisan 2007) to a counting variant. There are m participants and n items, each of which can be shared by one or more participants. Given that each participant has a minimum utility threshold, we count the number of ways the n items can be shared such that all participants achieve their minimum threshold. The utilities are additive and can be negative.
• For the multi-dimension knapsack benchmark setting (Gens and Levner 1980) (1000 instances), there are n items and constraints on m different features or dimensions of the items, in the form that the sum over each dimension must not exceed a given constant. Given such a setting, the goal is to count the number of subsets of items that satisfy the constraints.

1 We used the provided PBEncoder for conversion.
2 Binary from the Model Counting Competition 2022.

5.1 RQ1: Runtime Comparison
Figure 5: Cactus plot of the number of benchmark instances completed by the different counters (axes: instances completed vs. time elapsed in seconds; lines: PBCount, GPMC, DPMC, D4). A point (x, y) on each line indicates that the corresponding counter completes x benchmarks after y seconds have elapsed.

We show the cactus plot of the number of instances completed by each counter out of the 3473 benchmarks in Figure 5. The exact number of instances completed by each counter for each benchmark set is shown in Table 1. Additionally, we provide individual cactus plots for each set of benchmarks in the Appendix.

Benchmarks | DPMC | D4 | GPMC | PBCount
Sensor placement | 625 | 566 | 575 | 638
M-dim knapsack | 81 | 281 | 279 | 503
Auction | 76 | 116 | 159 | 372
Total | 782 | 963 | 1013 | 1513
Table 1: Number of benchmark instances completed by each counter in 3600s; higher is better.

In the sensor placement benchmarks, PBCount completed 638 instances, narrowly ahead of DPMC (625 instances), and more than D4 (566 instances) and GPMC (575 instances). In the multi-dimension knapsack (M-dim knapsack) and auction benchmarks, PBCount significantly outperforms the competing counters. PBCount completed 503 M-dim knapsack instances, around 1.8× that of GPMC (279 instances) and D4 (281 instances), and 6.2× that of DPMC (81 instances). In the auction benchmarks, PBCount completed 372 instances, around 2.3× that of GPMC (159 instances), 3.2× that of D4 (116 instances), and 4.9× that of DPMC (76 instances). Overall, PBCount completed 1513 instances out of 3473 total instances, around 1.5× that of GPMC, 1.6× that of D4, and 1.9× that of DPMC. Note that PBCount achieved this superior performance with minimal preprocessing compared to GPMC, which has advanced preprocessing capabilities. Our results demonstrate the significant performance advantages of counting natively for PB formulas and provide an affirmative answer to RQ1.

5.2 RQ2: Analysis of Compilation Approaches
We now focus on the analysis of the different compilation approaches: top-down (Algorithm 5), bottom-up (Algorithm 4), and dynamic (Algorithm 7). The results in Table 2 show that, on our benchmarks, bottom-up PB constraint compilation outperforms the top-down approach significantly on auction and multi-dimension knapsack instances and, to a lesser degree, on sensor placement instances. In addition, the evaluation also highlights that our dynamic compilation heuristic with constraint term optimization closely matches the bottom-up approach, with the exception of completing 3 fewer instances in the auction benchmarks. However, on the 372 auction instances completed by both the bottom-up and dynamic approaches, the dynamic approach with term coefficient optimization completes the counting task faster on 257 instances. We show the scatter plot comparison in Figure 6.
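The statistics that drive Algorithm 7's choice can be computed cheaply per constraint. The snippet below is our interpretation of those statistics, reading "unique coefficient rate" as the fraction of distinct coefficients and "unique adjacent difference rate" as the fraction of distinct gaps between sorted coefficients; the paper does not spell these definitions out, so treat them as assumptions.

```python
# Our interpretation of the Algorithm 7 statistics (assumptions, see text).
import numpy as np

def choose_bottom_up(coeffs: np.ndarray, k: int) -> bool:
    p25 = np.percentile(coeffs, 25)
    uniq_rate = len(np.unique(coeffs)) / len(coeffs)
    diffs = np.diff(np.sort(coeffs))
    adj_rate = len(np.unique(diffs)) / max(len(diffs), 1)
    cond1 = len(coeffs) <= 25 and k < p25
    cond2 = k < p25 and uniq_rate >= 0.9 and adj_rate >= 0.85
    return not (cond1 or cond2)   # conditions 1/2 select top-down
```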
Benchmarks | Top-down | Bottom-up | Dynamic
Sensor placement | 580 | 638 | 638
M-dim knapsack | 109 | 503 | 503
Auction | 158 | 375 | 372
Table 2: Number of benchmarks completed by PBCount when employing different compilation strategies; a higher number indicates better performance.

Figure 6: Dynamic vs. bottom-up runtime (log10) for the auction benchmarks. Points beneath the red diagonal line indicate that dynamic compilation is faster (257 points), points above otherwise (115 points).

Compilation Approach Performance Case Study
We provide an example to highlight the performance impact of the choice of compilation approach. The example involves the PB formula in Equation 1, with a single constraint that has unique term coefficients:

$\sum_{i=0}^{12} 2^i x_{i+1} + \sum_{i=1}^{10} 3^i x_{i+13} + \sum_{i=1}^{7} 7^i x_{i+23} \ge k \quad (1)$

We vary the value of k in the above PB constraint from $10^1$ to $10^5$ and compare the runtime between the top-down and bottom-up compilation approaches in Table 3. Note that bottom-up compilation takes around the same time irrespective of k, as there is no early termination. On the other hand, for top-down compilation, the PB constraint is easily satisfied when k is small, which allows for early termination and leads to significant time savings compared to when k is large. Notice that when top-down compilation is unable to terminate early, it is much slower than bottom-up compilation, even when all term coefficients are unique.

Approach | k = 10^1 | k = 10^2 | k = 10^3 | k = 10^4 | k = 10^5
Top-down | 0.005 | 0.009 | 0.228 | 8.586 | 46.071
Bottom-up | 6.927 | 7.202 | 7.198 | 7.434 | 6.732
Table 3: Runtime (seconds) to complete model counting for the formula in Equation 1. Lower is better.

Approach | k = 10^1 | k = 10^2 | k = 10^3 | k = 10^4 | k = 10^5
Top-down | 3.325 | 61.753 | 60.530 | 60.881 | 64.097
Bottom-up | 0.005 | 0.004 | 0.004 | 0.004 | 0.004
Table 4: Runtime (seconds) to complete model counting for the formula in Equation 1 with all coefficients set to 1.

As mentioned previously, bottom-up compilation benefits from having large numbers of identical term coefficients, i.e., collisions in the subset sums of the coefficients. To this end, we changed all term coefficients of the PB constraint in Equation 1 to 1 and compared the runtimes in Table 4. We observed around three orders of magnitude reduction in the runtime of the bottom-up compilation approach. In contrast, the top-down approach terminates early only in the k = $10^1$ case and requires full enumeration in the other cases. In the absence of early termination, the top-down compilation approach is much slower than the bottom-up compilation approach, and this is reflected in our dynamic compilation heuristic.

6 Conclusion
In this work, we introduce the first exact PB model counter, PBCount. PBCount directly compiles PB formulas into ADDs, enabling us to reuse the ADD counting framework of ADDMC. In the design of PBCount, we introduce both top-down and bottom-up PB constraint compilation techniques and highlight the performance differences between them. While we introduced dynamic compilation heuristics to determine the per-constraint compilation method and preliminary preprocessing techniques for PB formulas, it would be of interest to develop more advanced heuristics and preprocessing techniques in future work. A strong motivation is PBCount's performance lead over existing CNF model counters. We hope this work will gather more interest in PB formulas and PB model counting.
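As a closing illustration of the case study above: the number of distinct subset sums of the term coefficients bounds how many distinct leaf values the intermediate expression ADD needs, which is our gloss on why Table 4 looks the way it does. A tiny computation over the first block of Equation 1 makes the contrast visible.

```python
# Distinct subset sums for the power-of-two block of Equation 1 versus
# all-ones coefficients; fewer distinct sums means more leaf sharing in
# the bottom-up intermediate ADD (our gloss, see lead-in).
coeffs_unique = [2**i for i in range(13)]   # the first block of Equation 1
coeffs_ones = [1] * 13

def distinct_subset_sums(coeffs):
    sums = {0}
    for c in coeffs:
        sums |= {s + c for s in sums}
    return len(sums)

print(distinct_subset_sums(coeffs_unique))  # 8192: every sum 0..8191 occurs
print(distinct_subset_sums(coeffs_ones))    # 14: only the sums 0..13
```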
Acknowledgments
The authors thank Anna L.D. Latour for helping during benchmark generation. The authors thank Arijit Shaw and Jiong Yang for constructive discussions. The authors thank the reviewers for providing feedback. This work was supported in part by the Grab-NUS AI Lab, a joint collaboration between GrabTaxi Holdings Pte. Ltd. and National University of Singapore, and the Industrial Postgraduate Program (Grant: S18-1198-IPP-II), funded by the Economic Development Board of Singapore. This work was supported in part by the National Research Foundation Singapore under its NRF Fellowship Programme [NRF-NRFFAI1-2019-0004], Ministry of Education Singapore Tier 2 grant [MOE-T2EP20121-0011], and Ministry of Education Singapore Tier 1 grant [R-252-000-B59-114]. The computational work for this article was performed on resources of the National Supercomputing Centre, Singapore.

References
Abío, I.; Nieuwenhuis, R.; Oliveras, A.; and Rodríguez-Carbonell, E. 2011. BDDs for Pseudo-Boolean Constraints - Revisited. In International Conference on Theory and Applications of Satisfiability Testing.
Abío, I.; Nieuwenhuis, R.; Oliveras, A.; and Rodríguez-Carbonell, E. 2013. A Parametric Approach for Smaller and Better Encodings of Cardinality Constraints. In International Conference on Principles and Practice of Constraint Programming.
Aziz, R. A.; Chu, G.; Muise, C.; and Stuckey, P. J. 2015. #∃SAT: Projected Model Counting. In International Conference on Theory and Applications of Satisfiability Testing.
Bacchus, F.; Dalmao, S.; and Pitassi, T. 2003. Algorithms and complexity results for #SAT and Bayesian inference. 44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings.
Bahar, R. I.; Frohm, E. A.; Gaona, C. M.; Hachtel, G. D.; Macii, E.; Pardo, A.; and Somenzi, F. 1993. Algebraic decision diagrams and their applications. In International Conference on Computer Aided Design.
Berre, D. L. 2001. Exploiting the real power of unit propagation lookahead. Electron. Notes Discret. Math., 9: 59–80.
Berre, D. L.; Marquis, P.; Mengel, S.; and Wallon, R. 2018. Pseudo-Boolean Constraints from a Knowledge Representation Perspective. In International Joint Conference on Artificial Intelligence.
Biere, A.; Järvisalo, M.; and Kiesl, B. 2021. Preprocessing in SAT Solving. In Handbook of Satisfiability.
Blumrosen, L.; and Nisan, N. 2007. Algorithmic Game Theory, chapter 11, 267–300. Cambridge University Press.
Bryant, R. E. 1986. Graph-Based Algorithms for Boolean Function Manipulation. IEEE Transactions on Computers, C-35(8): 677–691.
Devriendt, J. 2020. Watched Propagation of 0-1 Integer Linear Constraints. In International Conference on Principles and Practice of Constraint Programming.
Devriendt, J.; Gocht, S.; Demirovic, E.; Nordström, J.; and Stuckey, P. J. 2021. Cutting to the Core of Pseudo-Boolean Optimization: Combining Core-Guided Search with Cutting Planes Reasoning. In AAAI Conference on Artificial Intelligence.
Dowling, W. F.; and Gallier, J. H. 1984. Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae. J. Log. Program., 1: 267–284.
Dudek, J. M.; Phan, V. H. N.; and Vardi, M. Y. 2020a. ADDMC: Weighted Model Counting with Algebraic Decision Diagrams. In AAAI Conference on Artificial Intelligence.
Dudek, J. M.; Phan, V. H. N.; and Vardi, M. Y. 2020b. DPMC: Weighted Model Counting by Dynamic Programming on Project-Join Trees.
In International Conference on Principles and Practice of Constraint Programming.
Eén, N.; and Biere, A. 2005. Effective Preprocessing in SAT Through Variable and Clause Elimination. In International Conference on Theory and Applications of Satisfiability Testing.
Eén, N.; and Sörensson, N. 2006. Translating Pseudo-Boolean Constraints into SAT. J. Satisf. Boolean Model. Comput., 2: 1–26.
Elffers, J.; and Nordström, J. 2018. Divide and Conquer: Towards Faster Pseudo-Boolean Solving. In International Joint Conference on Artificial Intelligence.
Fan, C.; Miller, K.; and Mitra, S. 2020. Fast and Guaranteed Safe Controller Synthesis for Nonlinear Vehicle Models. Computer Aided Verification, 12224: 629–652.
Gens, G.; and Levner, E. 1980. Complexity of approximation algorithms for combinatorial problems: a survey. SIGACT News, 12: 52–65.
Jackson, D. 2019. Alloy: a language and tool for exploring software designs. Commun. ACM, 62.
Korhonen, T.; and Järvisalo, M. 2021. Integrating Tree Decompositions into Decision Heuristics of Propositional Model Counters. In International Conference on Principles and Practice of Constraint Programming.
Lagniez, J.-M.; and Marquis, P. 2017. An Improved Decision-DNNF Compiler. In International Joint Conference on Artificial Intelligence.
Lai, Y.; Meel, K. S.; and Yap, R. H. C. 2021. The Power of Literal Equivalence in Model Counting. In AAAI Conference on Artificial Intelligence.
Latour, A.; Sen, A.; and Meel, K. 2023. Solving the Identifying Code Set Problem with Grouped Independent Support. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence.
Narodytska, N.; Shrotri, A. A.; Meel, K. S.; Ignatiev, A.; and Marques-Silva, J. 2019. Assessing Heuristic Machine Learning Explanations with Model Counting. In International Conference on Theory and Applications of Satisfiability Testing.
Philipp, T.; and Steinke, P. 2015. PBLib - A Library for Encoding Pseudo-Boolean Constraints into CNF. In International Conference on Theory and Applications of Satisfiability Testing.
Piette, C.; Hamadi, Y.; and Sais, L. 2008. Vivifying Propositional Clausal Formulae. In European Conference on Artificial Intelligence.
Ryosuke Suzuki, K. H.; and Sakai, M. 2017. Improvement of Projected Model-Counting Solver with Component Decomposition Using SAT Solving in Components. In JSAI Technical Report.
Sharma, S.; Roy, S.; Soos, M.; and Meel, K. S. 2019. GANAK: A Scalable Probabilistic Exact Model Counter. In Proceedings of International Joint Conference on Artificial Intelligence.
Sinz, C. 2005. Towards an Optimal CNF Encoding of Boolean Cardinality Constraints. In International Conference on Principles and Practice of Constraint Programming.
Yang, J.; and Meel, K. S. 2021. Engineering an Efficient PB-XOR Solver. In International Conference on Principles and Practice of Constraint Programming.
A Reinforcement-Learning-Based Multiple-Column Selection Strategy for Column Generation
Haofeng Yuan, Lichang Fang, Shiji Song*
Department of Automation, BNRist, Tsinghua University
{yhf22, fanglc22}@mails.tsinghua.edu.cn, [email protected]
*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Column generation (CG) is one of the most successful approaches for solving large-scale linear programming (LP) problems. Given an LP with a prohibitively large number of variables (i.e., columns), the idea of CG is to explicitly consider only a subset of columns and iteratively add potential columns to improve the objective value. While adding the column with the most negative reduced cost can guarantee the convergence of CG, it has been shown that adding multiple columns per iteration rather than a single column can lead to faster convergence. However, it remains a challenge to design a multiple-column selection strategy to select the most promising columns from a large number of candidate columns. In this paper, we propose a novel reinforcement-learning-based (RL) multiple-column selection strategy. To the best of our knowledge, it is the first RL-based multiple-column selection strategy for CG. The effectiveness of our approach is evaluated on two sets of problems: the cutting stock problem and the graph coloring problem. Compared to several widely used single-column and multiple-column selection strategies, our RL-based multiple-column selection strategy leads to faster convergence and achieves remarkable reductions in the number of CG iterations and runtime.

Introduction
Column generation (CG) is a widely used approach for solving the linear programming (LP) relaxations of large-scale optimization problems that have a prohibitively large number of variables to deal with. It exploits the fact that the majority of feasible variables (i.e., columns) will not be part of an optimal solution. Therefore, CG starts with a subset of columns and gradually adds new columns that have the potential to improve the current solution, e.g., columns with a negative reduced cost (assuming a minimization problem), until no such columns exist and the current solution is proven optimal (Desaulniers, Desrosiers, and Solomon 2006). CG is often combined with the branch-and-bound method to solve large-scale integer programming problems, which is called branch-and-price (Barnhart et al. 1998). Specifically, CG follows an iterative process, as shown in Figure 1. The original large-scale problem is decomposed into the restricted master problem (RMP) and the pricing problem (PP). CG starts by solving the RMP with a small subset of columns from the original problem. At each iteration, the RMP is solved using an LP solver (e.g., the simplex algorithm), and the dual solution is used to formulate the PP. The PP is a "column generator" that generates new columns (typically with negative reduced costs) to improve the current RMP solution. If such columns are found, they are added to the RMP to start a new iteration. Otherwise, it certifies the optimality of the current RMP solution for the original problem, and CG terminates.

Figure 1: The iterative process of CG.
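To fix ideas, the iterative process of Figure 1 can be written as a short loop. The sketch below is our own illustration, not code from the paper; `rmp`, `pricing`, and `select_columns` are hypothetical objects, with the column selection step left pluggable since it is the subject of this work:

```python
def column_generation(rmp, pricing, select_columns, max_iters=1000):
    """Skeleton of the CG loop in Figure 1. `rmp` solves the restricted
    master LP, `pricing` generates candidate columns from the dual solution,
    and `select_columns` is the (pluggable) column selection strategy."""
    objective = float("inf")
    for iteration in range(max_iters):
        objective, duals = rmp.solve()           # solve the current RMP
        candidates = pricing.generate(duals)     # columns with negative reduced cost
        if not candidates:                       # no improving column: RMP solution optimal
            return objective, iteration
        rmp.add_columns(select_columns(candidates))  # e.g., greedy top-k, Diverse-M, or RL
    return objective, max_iters
```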
Typically, the column with the most negative reduced cost is selected to add to RMP at each iteration, which guarantees the convergence of CG and the optimality of the final solution (Lübbecke and Desrosiers 2005). However, it often suffers from slow convergence, which limits its efficiency and usability. Previous research has shown that selecting multiple columns per iteration, including sub-optimal solutions of PP (even columns with non-negative reduced costs), can lead to faster convergence (Moungla, Létocart, and Nagih 2010). This allows the RMP approximation to be improved and the optimal basis to be characterized faster, and hence reduces the number of iterations. However, selecting multiple columns per iteration may result in a large fraction of useless columns that do not belong to the final optimal basis and increase the computation cost of RMP. In general, the PP can generate a pool of feasible candidate columns, which are sorted according to the reduced cost, and the top-k of them are greedily selected. In order to improve the selection, Goffin and Vial (2000) suggested that the RMP description can be improved by selecting non-correlated columns. Several diversification-based multiple-column selection strategies have been developed and shown to be effective in practice (Vanderbeck 1994; Moungla, Létocart, and Nagih 2010). However, despite the practical effectiveness of the diversification-based selection strategies, there is still no perfect column selection strategy proven to outperform or dominate the others. Recently, reinforcement learning (RL) has shown impressive success in optimization tasks (Mazyavkina et al. 2021; Yang, Jiang, and Song 2023; Zhang et al. 2020), which removes the need for substantial expert knowledge and pre-solved instances. In this paper, we propose a novel RL-based multiple-column selection strategy for CG. Specifically, we treat the iterative column selection in CG as a sequential decision task, and introduce an actor-critic style neural network that takes into account the column-constraint structure of RMP, the interrelations of candidate columns, and global properties of the problem instance. We use proximal policy optimization (PPO) (Schulman et al. 2017) to train the strategy to minimize the total number of iterations. Our RL-based multiple-column selection strategy is evaluated on two sets of problems: the cutting stock problem (CSP) (Gilmore and Gomory 1961) and the graph coloring problem (GCP) (Mehrotra and Trick 1996). Experimental results demonstrate that our RL-based strategy outperforms several widely used single-column and multiple-column selection strategies in terms of the number of iterations and runtime. The main contributions of this paper can be summarized as follows:
• We exploit RL to learn an effective multiple-column selection strategy for CG. To the best of our knowledge, it is the first RL-based multiple-column selection strategy.
• We design an actor-critic style neural network that considers the column-constraint structure of RMP, the interrelations of candidate columns, and global properties of the problem instance, which allows learning a column-relation-aware multiple-column selection strategy.
• We apply our approach to CSP and GCP, and experimental results show that it outperforms all baseline column selection strategies on various sizes of problems. Moreover, our RL-based framework can be easily applied to other problems solved based on CG.
Related Work
In this section, we review the acceleration methods for CG in the literature, with a focus on recent advances in machine learning (ML) techniques for column selection.

Acceleration Methods for Column Generation. Various techniques have been proposed in the literature to accelerate CG (Desaulniers, Desrosiers, and Solomon 2002). One approach is to select "better" columns to add to RMP at each iteration. A classic approach is to add multiple columns rather than a single column with the most negative reduced cost. Goffin and Vial (2000) showed that the performance of CG is mathematically related to the variance-covariance matrix of selected columns: the convergence is accelerated with the selection of non-correlated columns. Moungla, Létocart, and Nagih (2010) proposed two practical multiple-column selection strategies, which enhance the diversification of selected columns. Nevertheless, the effect of column selection is still not fully understood theoretically, and there is still no perfect column selection strategy proven to achieve the minimum number of iterations for CG. For faster convergence than existing hand-crafted column selection strategies, we apply RL to learn a multiple-column selection strategy that aims at minimizing the number of iterations through the interaction with the CG solution process. Another approach is dual stabilization, which aims to form a "better" PP. For example, du Merle et al. (1999) introduced a penalty function to reduce the oscillation of dual values. For a discussion on stabilization-based acceleration methods, please see (Pessoa et al. 2018) and the references therein. More recently, Babaki, Jena, and Charlin (2021) proposed a learning-based method for predicting the optimal stabilization center of dual values in vehicle routing problems. We remark that our column selection strategy does not conflict with dual stabilization techniques and can be used synergistically for further improvement.

Machine-Learning-Based Column Selection Strategy. Over the last few years, researchers have become increasingly interested in ML to accelerate optimization tasks (Bengio, Lodi, and Prouvost 2021), and several learning-based methods have been proposed for specific problems solved by CG (Tahir et al. 2021; Yuan, Jiang, and Song 2022; Shen et al. 2022). The closest works to ours are (Morabit, Desaulniers, and Lodi 2021) and (Chi et al. 2022), both leveraging ML for a better column selection strategy. Morabit, Desaulniers, and Lodi (2021) proposed a one-step lookahead "expert" to identify the columns that maximize the improvement of RMP in the next iteration, which is achieved by solving an extremely time-consuming mixed-integer linear programming (MILP). Then, they trained an ML model to cheaply imitate the decisions of the expensive MILP expert. They formulated the column selection procedure at each iteration as a classification task and trained the ML model in a supervised manner. The drawback of their approach is that the one-step lookahead expert is shortsighted, because it only focuses on the very next iteration but disregards the interdependencies across iterations. Besides, it requires expensive pre-solved instances from previous executions of the MILP expert as training data, which may be unaffordable for large-scale applications. Moreover, the ML model only imitates the decision from the MILP expert, and thus it can never surpass the decisions used for demonstration. Chi et al.
(2022) proposed a DQN-based single-column selection strategy that applies Q-learning to identify the "best" column at each iteration. They utilize the graph neural network (GNN) as a Q-function approximator to maximize the total expected future reward. While showing improved performance compared to the greedy single-column selection strategy, their framework can hardly be extended to multiple-column selection due to the exponential growth of action space and complex interdependencies in the column combinations, which limits its practical application (we implement a multiple-column variant of their DQN-based approach in our experiments). In contrast, our proposed neural network and learning scheme overcome these challenges and can derive an effective multiple-column selection strategy. Experiment results show that our RL-based multiple-column selection strategy outperforms the MILP expert used in (Morabit, Desaulniers, and Lodi 2021) and the DQN-based strategy proposed in (Chi et al. 2022).

Basis of Column Generation
In this section, we use CSP as an example to introduce the mathematical formulation of CG. The CSP aims to determine the smallest number of rolls of length $L$ that have to be cut to satisfy the demands of $m$ customers, where customer $i$ demands $d_i$ pieces of orders of length $\ell_i$, $i = 1, 2, \ldots, m$. Gilmore and Gomory (1961) proposed the CG formulation, in which the set $P$ of all feasible cutting patterns is:
$$P = \left\{ (a_1, \ldots, a_m)^\mathsf{T} \in \mathbb{N}^m \;\middle|\; \sum_{i=1}^{m} \ell_i a_i \le L \right\}.$$
Each pattern $p \in P$ is denoted by a vector $(a_{1p}, \ldots, a_{mp})^\mathsf{T} \in \mathbb{N}^m$, where $a_{ip}$ represents the number of pieces of length $\ell_i$ obtained in cutting pattern $p$. Let $\lambda_p$ be a decision variable that denotes the number of rolls cut using pattern $p \in P$. The CSP is formulated as follows:
$$\min \sum_{p \in P} \lambda_p \quad \text{s.t.} \quad \sum_{p \in P} a_{ip}\lambda_p \ge d_i, \; i \in \{1, 2, \ldots, m\}, \qquad \lambda_p \in \mathbb{N}, \; p \in P.$$
The objective function minimizes the total number of patterns used, equivalent to minimizing the number of rolls used. The $m$ constraints ensure all demands are satisfied. This formulation usually has an extremely large number of decision variables as $P$ is exponentially large. Therefore, the RMP is proposed for the linear relaxation with an initial set $\tilde{P} \subset P$. The RMP is defined as follows:
$$\min \sum_{p \in \tilde{P}} \lambda_p \quad \text{s.t.} \quad \sum_{p \in \tilde{P}} a_{ip}\lambda_p \ge d_i, \; i \in \{1, 2, \ldots, m\}, \qquad \lambda_p \ge 0, \; p \in \tilde{P}.$$
Let $u = (u_1, \ldots, u_m)^\mathsf{T}$ be the dual solution of the RMP. The columns that can potentially improve the solution of RMP are given by the solution to the following knapsack problem, which is referred to as the PP:
$$\max \sum_{i=1}^{m} u_i a_i \quad \text{s.t.} \quad \sum_{i=1}^{m} \ell_i a_i \le L, \qquad a_i \in \mathbb{N}, \; i \in \{1, 2, \ldots, m\}.$$
The PP generates feasible patterns (columns), represented as vectors $(a_{1p}, \ldots, a_{mp})^\mathsf{T}$, to be added to $\tilde{P}$ for the next iteration. In general, several sub-optimal solutions of PP, which form a candidate column pool, can be obtained through dynamic programming methods or commercial solvers such as Gurobi (Gurobi Optimization, LLC 2023). We can select one or multiple columns from the candidate column pool to add to RMP for the next iteration (see Figure 1).
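Both subproblems above are small enough to sketch directly. The following is our own illustrative Python code, not the authors' implementation: the RMP is the LP relaxation solved with SciPy's HiGHS backend, and the PP is the unbounded knapsack solved by dynamic programming. The sign convention assumed for `res.ineqlin.marginals` is worth verifying against your SciPy version.

```python
import numpy as np
from scipy.optimize import linprog

def solve_rmp(patterns, demands):
    """LP relaxation of the RMP: min 1'lam  s.t.  A lam >= d, lam >= 0.
    Returns the objective value and the dual prices u (assumed sign: for
    -A lam <= -d constraints, u = -marginals >= 0 under HiGHS)."""
    A = np.array(patterns, dtype=float).T                 # m x |P~|, one column per pattern
    res = linprog(c=np.ones(A.shape[1]),
                  A_ub=-A, b_ub=-np.asarray(demands, dtype=float),
                  bounds=(0, None), method="highs")
    return res.fun, -res.ineqlin.marginals

def solve_pp(u, lengths, L):
    """Pricing problem: unbounded knapsack max u'a s.t. l'a <= L, a in N^m,
    solved by dynamic programming over the residual roll capacity."""
    best = [0.0] * (L + 1)
    take = [-1] * (L + 1)                                 # item taken at this capacity (-1: waste a unit)
    for cap in range(1, L + 1):
        best[cap], take[cap] = best[cap - 1], -1
        for i, (ui, li) in enumerate(zip(u, lengths)):
            if li <= cap and best[cap - li] + ui > best[cap]:
                best[cap], take[cap] = best[cap - li] + ui, i
    pattern, cap = [0] * len(lengths), L
    while cap > 0:                                        # backtrack the optimal pattern
        if take[cap] == -1:
            cap -= 1
        else:
            pattern[take[cap]] += 1
            cap -= lengths[take[cap]]
    return 1.0 - best[L], pattern                         # reduced cost, pattern
```

Starting from the homogeneous patterns a = floor(L / l_i) * e_i and looping until the reduced cost is non-negative reproduces the plain single-column CG described above.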
Methodology
In this section, we present the details of our RL-based multiple-column selection strategy.

MDP Formulations
We treat the CG solution process as the environment for the RL agent. We formulate the multiple-column selection task for CG as a Markov decision process (MDP):

State S. The state describes the information about the current CG status, which is provided for the RL agent. As illustrated in Figure 2, the state is defined to include 1) a bipartite graph representation of the current RMP and candidate columns, and 2) global properties of the problem instance. As introduced in (Gasse et al. 2019), an LP can be represented as a bipartite graph with constraint nodes $\mathcal{C}$ and column nodes $\mathcal{V}$. We further incorporate candidate columns into the bipartite graph representation, where column nodes are divided into existing columns in the current RMP and candidate columns to be selected (blue nodes and red nodes in Figure 2). An edge $(v, c)$ exists between a node $v \in \mathcal{V}$ and a node $c \in \mathcal{C}$ if column $v$ contributes to constraint $c$. State information corresponding to the columns (e.g., solution value, reduced cost) and constraints (e.g., slack, dual value) are represented as node features. In addition to the bipartite graph representation, we represent the properties associated with the problem instance (e.g., the number of constraints, maximum constraint coefficient) as an additional global feature vector (the green node in Figure 2).

Action A. At each iteration, we select $k$ columns from the pool of $n$ candidate columns generated by the PP, and add them to RMP for the next iteration. The action space $\mathcal{A}$ contains all possible $k$-combinations of the $n$ candidate columns, i.e., $|\mathcal{A}| = \binom{n}{k}$. The RL agent returns a probability distribution over the action space, and we sample an action from that distribution, which can be seen as a multiple-column selection strategy.

Transition T. The transition rule is deterministic. Once an action is selected, the corresponding $k$ columns are added to the RMP to start a new iteration.

Figure 2: A toy example of state. The left part illustrates the bipartite graph representation of the current RMP, including 4 constraint nodes, 3 existing column nodes, and 4 candidate column nodes. The right part denotes the global feature vector for the problem instance.

Reward R. The goal of the RL agent is to minimize the total number of iterations. We design a reward function consisting of a unit penalty for each additional iteration and two auxiliary components: 1) the decrease in the objective value of RMP, and 2) the sum of cosine distances of selected columns, which is inspired by the observation in (Vanderbeck 1994). The immediate reward at time step $t$ is:
$$r_t = -1 + \alpha \cdot \left( \frac{\mathrm{obj}_{t-1} - \mathrm{obj}_t}{\mathrm{obj}_0} \right) + \beta \cdot \sum_{u_i, u_j \in \mathcal{C}_s} \left( 1 - \frac{\langle u_i, u_j \rangle}{\| u_i \| \cdot \| u_j \|} \right),$$
where $(\mathrm{obj}_{t-1} - \mathrm{obj}_t)$ is the decrease in the objective value, normalized by the objective value $\mathrm{obj}_0$ of the initial RMP; $\mathcal{C}_s$ is the set of selected columns, and $1 - \frac{\langle u_i, u_j \rangle}{\| u_i \| \cdot \| u_j \|}$ is the cosine distance between column vectors $u_i$ and $u_j$; $\alpha$ and $\beta$ are non-negative weight hyperparameters.
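The reward above is straightforward to compute per step. The sketch below is our own code; we read the sum as running over unordered pairs of selected columns (the paper's summation notation leaves this implicit), and the default weights follow the hyperparameter configuration reported later in the paper:

```python
import numpy as np

def step_reward(obj_prev, obj_curr, obj_init, selected_cols, alpha=300.0, beta=0.02):
    """Immediate reward r_t: -1 per extra iteration, plus the normalized
    objective decrease and the pairwise cosine-distance diversity term."""
    r = -1.0 + alpha * (obj_prev - obj_curr) / obj_init
    for i in range(len(selected_cols)):
        for j in range(i + 1, len(selected_cols)):
            u = np.asarray(selected_cols[i], dtype=float)
            v = np.asarray(selected_cols[j], dtype=float)
            r += beta * (1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return r
```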
Model
We use PPO (Schulman et al. 2017) as the training algorithm for our multiple-column selection strategy. PPO is a deep reinforcement learning algorithm based on the actor-critic architecture. Given a state $s$, the critic estimates the value function $V(s)$, and the actor gives a probability distribution $\pi = \big( \pi(a_1 \mid s), \pi(a_2 \mid s), \ldots, \pi(a_{|\mathcal{A}|} \mid s) \big)$ over the action space $\mathcal{A}$. An action is sampled from the probability distribution, and the corresponding $k$ columns are selected and added to the RMP to start a new iteration. We propose an actor-critic style neural network for the RL-based multiple-column selection strategy. The network consists of three components: an encoder, a critic decoder, and an actor decoder. The details of the neural network architecture are described below:

Encoder. As introduced above, the state is represented by a bipartite graph and a global feature vector. The encoder takes the bipartite graph and the global feature vector as input to produce the embeddings for the current state. The architecture of the encoder is shown in Figure 3. Specifically, the bipartite graph and the global feature vector are embedded separately. For the bipartite graph, the encoder first computes the initial node embeddings from raw node features through a learned linear projection. Then, the node embeddings of the bipartite graph are updated through $N_1$ graph convolutional layers. Each layer proceeds in two phases: the first phase is performed to update the constraint node embeddings, followed by the second phase that updates the column node embeddings. Both phases are implemented using the graph isomorphism network (GIN) (Xu et al. 2019) with residual connections. Let $x_c^{(\ell)}$ and $x_v^{(\ell)}$ denote the embeddings for constraint node $c \in \mathcal{C}$ and column node $v \in \mathcal{V}$ at layer $\ell$. The node embeddings are updated as follows:
$$x_c^{(\ell)} = \mathrm{MLP}_\mathcal{C}^{(\ell)}\Big( (1+\epsilon) \cdot x_c^{(\ell-1)} + \sum_{v_i \in \mathcal{N}(c)} x_{v_i}^{(\ell-1)} \Big) + x_c^{(\ell-1)},$$
$$x_v^{(\ell)} = \mathrm{MLP}_\mathcal{V}^{(\ell)}\Big( (1+\epsilon) \cdot x_v^{(\ell-1)} + \sum_{c_i \in \mathcal{N}(v)} x_{c_i}^{(\ell)} \Big) + x_v^{(\ell-1)},$$
where $\mathrm{MLP}_\mathcal{C}^{(\ell)}$ and $\mathrm{MLP}_\mathcal{V}^{(\ell)}$ are multi-layer perceptrons (MLPs) for updating the constraint node embeddings and column node embeddings, respectively, and $\mathcal{N}(v)$ denotes the neighborhood set of node $v$. The global feature vector is embedded through $N_2$ linear layers, each followed by a LeakyReLU activation function.

Figure 3: The architecture of the encoder. Colored nodes denote feature vectors or embeddings.
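A minimal PyTorch sketch of one such encoder layer follows. This is our own code, not the authors' release; a dense 0/1 incidence matrix, a fixed (non-learnable) epsilon, and equal embedding sizes are simplifying assumptions:

```python
import torch
import torch.nn as nn

class BipartiteGINLayer(nn.Module):
    """One encoder layer: update constraint embeddings from neighboring
    column embeddings, then update column embeddings from the *updated*
    constraint embeddings, each with a residual connection."""
    def __init__(self, dim, eps=0.0):
        super().__init__()
        self.eps = eps
        self.mlp_c = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mlp_v = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x_c, x_v, adj):
        # x_c: (n_constraints, dim), x_v: (n_columns, dim)
        # adj: (n_constraints, n_columns) incidence matrix of the bipartite graph
        h_c = self.mlp_c((1 + self.eps) * x_c + adj @ x_v) + x_c      # phase 1
        h_v = self.mlp_v((1 + self.eps) * x_v + adj.T @ h_c) + x_v    # phase 2
        return h_c, h_v
```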
Critic Decoder. The critic decoder maps the latent embeddings of state $s$ into the estimated value function $V(s)$. The architecture of the critic decoder is illustrated in Figure 4(a). In the critic decoder, the node embedding vectors associated with the existing columns, candidate columns, and constraints are respectively pooled by average, and then concatenated together with the embedding vector of global features, which contains information of the current RMP as well as the global properties of the problem instance. Then, the concatenated vector is passed through an $N_3$-layer MLP to estimate the value function $V(s)$.

Actor Decoder. Based on the embeddings produced by the encoder, the actor decoder outputs a probability distribution $\pi = \big( \pi(a_1 \mid s), \ldots, \pi(a_{|\mathcal{A}|} \mid s) \big)$ over the action space. An action is sampled from the probability distribution, determining which $k$ columns are selected by the RL-based multiple-column selection strategy. The interrelations, especially the similarity between candidate columns, are crucial to the multiple-column selection. Therefore, we explicitly model the message-passing between candidate columns. We first create a complete graph, with each node corresponding to a candidate column. The node embeddings of the complete graph are initialized as the final embeddings of candidate column nodes from the bipartite graph. We associate the distance (e.g., Jaccard distance, cosine distance) between candidate column vectors as the initial edge features. As shown in Figure 4(b), the candidate column node embeddings of the bipartite graph are used to create the complete graph. We apply the graph attention network (GAT) with edge features (Veličković et al. 2018; Kamiński et al. 2021) to update the embeddings of the complete graph through message-passing between candidate columns. Then, for each candidate column, we concatenate its node embedding from the bipartite graph, its node embedding from the complete graph, and the global embedding together. The concatenated embedding for each candidate column is processed through an $N_4$-layer MLP to obtain the final embedding vector ($h_A$ to $h_D$ in Figure 4(b)). For each action $a_i$, we define its representation vector $h_{a_i} = \sum_{v_j \in \mathcal{C}_s(a_i)} h_{v_j}$, where $\mathcal{C}_s(a_i)$ denotes the set of columns selected in action $a_i$, and $h_{v_j}$ is the final embedding vector of candidate column $v_j$. The probability to select $a_i$ is computed through a learnable nonlinear readout function:
$$\pi(a_i \mid s) = \mathrm{softmax}\Big( C \cdot \tanh\big( w_o^\mathsf{T} \cdot \mathrm{ReLU}(W_o \cdot h_{a_i}) \big) \Big),$$
where $w_o$ and $W_o$ are a learnable vector and matrix, respectively, and $C$ is the clipping coefficient ($C = 10$). The actor acts as a multiple-column selection strategy, which takes the current state as input and outputs a probability distribution over the action space. Note that we are using a learnable nonlinear mapping to derive the probability distribution for each $k$-combination of columns. This is by no means a simple addition of individual scores for the corresponding $k$ columns. An action is sampled from the probability distribution, and the $k$ candidate columns in the corresponding combination are added to the RMP for the next iteration.

Figure 4: The architecture of the critic decoder (a) and the actor decoder (b), shown on a toy example of selecting 2 from 4 candidates.
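Before turning to the experiments, the combination readout above can be made concrete with a short sketch. This is our own illustrative code, not the authors' implementation; with $n = 10$ and $k = 5$ (the setting used below), enumerating all $\binom{10}{5} = 252$ actions is cheap:

```python
import itertools
import torch
import torch.nn as nn

class CombinationReadout(nn.Module):
    """Scores every k-combination of candidate columns via the readout
    pi(a_i | s) = softmax(C * tanh(w_o^T ReLU(W_o h_{a_i})))."""
    def __init__(self, dim, clip=10.0):
        super().__init__()
        self.W_o = nn.Linear(dim, dim, bias=False)
        self.w_o = nn.Linear(dim, 1, bias=False)
        self.clip = clip

    def forward(self, h, k):
        # h: (n_candidates, dim) final candidate embeddings; an action is a k-subset
        actions = list(itertools.combinations(range(h.size(0)), k))
        h_a = torch.stack([h[list(a)].sum(dim=0) for a in actions])  # h_{a_i}: sum over the subset
        logits = self.clip * torch.tanh(self.w_o(torch.relu(self.W_o(h_a)))).squeeze(-1)
        return actions, torch.softmax(logits, dim=0)
```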
Evaluation
We evaluate our proposed RL-based multiple-column selection strategy on two sets of problems: the CSP and the GCP. Both problems are well-known for the linear relaxation effectively solved using CG. Experimental results demonstrate that our RL-based strategy outperforms several widely used single-column selection strategies and multiple-column selection strategies.

Experiment Task
Cutting Stock Problem. The CG formulation of CSP has been introduced in the previous section. The problem instances are generated according to the rules for random instances in BPPLIB (Delorme, Iori, and Martello 2016, 2018), a widely used benchmark for bin packing and cutting stock problems. We divide the CSP instances into three categories: easy, normal, and hard, corresponding to the roll length L = 50, 100, and 200. We generated 1000, 200, and 100 instances of the three instance categories for evaluation, respectively. Instances for training are randomly generated and solved using CG as the environment for the RL agent.

Graph Coloring Problem. The GCP aims to assign a minimum number of colors to the nodes on a graph, such that every pair of adjacent nodes does not share the same color (Malaguti and Toth 2010). In the CG formulation for GCP, the RMP can be expressed as using a minimum number of maximal independent sets (MISs) to cover all the nodes, while the PP is modeled as the maximum weight independent set problem (MWISP) to search for new MISs with the most negative reduced cost. The details are described in (Mehrotra and Trick 1996). Similar to the CSP, the GCP instances are divided into three categories, corresponding to the number of nodes N = 30, 40, and 50, respectively. The GCP instances are generated according to the rules for random graphs in (Mehrotra and Trick 1996).

Hyperparameter Configuration
We implement our model to learn the RL-based multiple-column strategy. We select 5 out of 10 candidate columns at each iteration, which strikes a balance between the number of iterations and the cost per iteration in our task. To guarantee convergence, we force the optimal solution of the PP to always be selected. We set the number of layers N1 = N2 = N3 = N4 = 3 for the MLPs in the encoder and decoder. The weights in the reward function are set to α = 300 and β = 0.02 to balance the reward scales, and the discount factor γ is set to 0.9. We use PPO with a clipping threshold ε = 0.2, and the Adam optimizer with a learning rate 1 × 10−3, to train the RL model. The hyperparameter configuration is fixed across all instance categories of CSP and GCP.

Comparison Evaluation
We compare our RL-based multiple-column strategy with several well-established single-column and multiple-column selection strategies, as well as the multiple-column selection strategy using the MILP expert proposed in (Morabit, Desaulniers, and Lodi 2021) and the DQN-based approach proposed in (Chi et al. 2022). The details of baseline strategies for comparison are as follows:

Single-column selection strategy:
• Greedy single-column selection (Greedy-S): Always select the column with the most negative reduced cost.
• Random single-column selection (Random-S): Randomly select a column from the candidate column pool.
• DQN-based single-column selection (DQN-S): Selection strategy based on DQN in (Chi et al. 2022).

Multiple-column selection strategy:
• Greedy multiple-column selection (Greedy-M): Always select the top-k columns according to the reduced cost.
• Random multiple-column selection (Random-M): Randomly select k columns from the candidate column pool.
• MILP expert (MILP-M): Selection strategy using the MILP expert in (Morabit, Desaulniers, and Lodi 2021).
• Diversification-based column selection (Diverse-M): A modified strategy of CGDS (Moungla, Létocart, and Nagih 2010) to fit our task: we first sort the candidate columns by their reduced costs, and prioritize the candidate columns that are disjoint from the already selected columns, if there exists one in the remaining pool (a sketch of this rule is given below).

             CSP (Easy)         CSP (Normal)       CSP (Hard)
Strategy     # Itr    Time      # Itr    Time      # Itr    Time
Greedy-S     37.68    228.92    89.20    186.78    171.44   301.07
Random-S     62.63    374.80    116.82   257.04    205.31   376.16
DQN-S        35.54    215.65    88.95    178.52    /        /
Greedy-M     12.03    75.34     27.01    62.60     52.13    96.42
Random-M     13.97    84.83     28.26    63.78     51.43    96.51
MILP-M       10.65    96.23     23.46    81.17     44.99    147.19
Diverse-M    11.45    74.59     25.03    59.32     47.24    93.60
Ours         10.33    67.84     22.85    55.05     43.95    87.95

Table 1: Comparison results on the CSP, in terms of the average number of iterations per instance and total runtime (in seconds) over the evaluation instance set.

For a fair comparison, we set the same number of candidate columns and the same number of columns to select in all multiple-column selection strategies. The candidate columns are generated as the 10 columns with the most negative reduced cost from PP. We report the evaluation metrics of 1) the average number of iterations per instance and 2) the total runtime over the evaluation set. Generally, all heuristic strategies and the RL-based strategies (on GPU), except MILP-M, require negligible time for a selection decision, so the comparison of the average number of iterations is approximately equivalent to the comparison of runtime for these strategies.
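For reference, the Diverse-M rule described above can be sketched as follows. This is our own reading of the modified CGDS rule, not the authors' code:

```python
def diverse_m(columns, reduced_costs, k):
    """Repeatedly take the candidate with the most negative reduced cost that
    is disjoint from the already selected columns; if no disjoint candidate
    remains, fall back to the best remaining candidate."""
    supports = [{j for j, a in enumerate(col) if a != 0} for col in columns]
    remaining = sorted(range(len(columns)), key=lambda i: reduced_costs[i])
    selected, covered = [], set()
    while remaining and len(selected) < k:
        pick = next((i for i in remaining if not (supports[i] & covered)),
                    remaining[0])
        selected.append(pick)
        covered |= supports[pick]
        remaining.remove(pick)
    return selected
```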
We remark that MILP-M is practically intractable because the time taken by the MILP expert in MILP-M is even larger than the time for RMP and PP (Morabit, Desaulniers, and Lodi 2021). As shown in the experimental results, even if MILP-M requires fewer iterations for convergence compared to other heuristic column selection strategies, its runtime is significantly larger due to the expensive decisions from the MILP expert.

Results on CSP. The comparison results on CSP are reported in Table 1. All the multiple-column selection strategies achieve significantly faster convergence than the single-column selection strategies, especially on large-scale problem instances. Diverse-M requires the least runtime among the baseline strategies. While MILP-M requires fewer iterations than Diverse-M, it takes a much larger runtime due to the extremely expensive column selection decisions.

             GCP (Easy)        GCP (Normal)      GCP (Hard)
Strategy     # Itr    Time     # Itr    Time     # Itr    Time
Greedy-M     18.30    75.16    29.00    99.54    39.04    123.23
Random-M     19.04    78.81    30.91    104.79   41.01    125.26
MILP-M       17.07    93.78    25.86    128.71   34.19    188.44
Diverse-M    18.07    74.36    28.17    96.14    37.78    119.90
Ours         15.19    62.06    24.93    87.31    34.11    111.56

Table 2: Comparison results on the GCP.

The experimental results show that our RL approach can learn a stronger strategy for column selection, which is implicitly represented by the neural network. The RL-based multiple-column selection strategy outperforms all baseline column selection strategies in the three instance categories of various sizes. Compared to the best baseline strategy (Diverse-M), our RL-based multiple-column selection strategy yields a total runtime reduction of 9.05%, 7.20%, and 6.04% in the three instance categories, respectively. In addition, it is worth mentioning that our RL-based multiple-column selection strategy requires even fewer iterations than MILP-M. This is mainly because the MILP expert focuses only on the current step and its goal is to minimize the objective value for the very next iteration, whereas our RL agent aims to minimize the total number of iterations and takes into consideration the long-term effect of currently selected columns on future iterations.

Results on GCP. Since the single-column selection strategies have been demonstrated to be ineffective on CSP, we conduct experiments on GCP using only multiple-column selection strategies. As reported in Table 2, it shows similar results to the experiments on CSP. Compared to Diverse-M, our RL-based strategy reduces the total runtime by 16.54%, 9.18%, and 6.96% in the three instance categories, respectively. The evaluation results on CSP and GCP demonstrate the effectiveness of our RL-based multiple-column selection strategy on different types of problems.

Generalization Evaluation
Generalization across different instance sizes is a highly desirable property for learning-based models. The ability to generalize across sizes would allow the RL-based strategy to scale up to larger instances while training more efficiently on smaller instances. Table 3 presents the generalization performance of our RL-based multiple-column selection strategy, which is trained on instances from the hard category but evaluated on much larger instances. For the CSP, the model is trained on instances with L = 200 and evaluated on instances with L = 500 and L = 1000; for the GCP, the model is trained on instances with N = 50 and evaluated on instances with N = 75 and N = 100.
Our RL-based multiple-column selection strategy still shows advantages over most baseline column selection strategies on the evaluation instances, which are several times larger than the training instances. It is demonstrated that our RL-based multiple-column selection strategy has learned useful and effective selection principles that are invariant to the size of problem instances.

             CSP (L=500)       CSP (L=1000)      GCP (N=75)        GCP (N=100)
Strategy     # Itr    Time     # Itr    Time     # Itr    Time     # Itr    Time
Greedy-M     78.80    143.25   98.82    221.75   60.88    287.52   83.60    449.95
Random-M     74.96    136.66   85.74    206.81   65.50    301.77   87.73    469.44
MILP-M       67.26    247.00   81.92    482.13   53.40    438.38   75.22    788.65
Diverse-M    70.44    130.84   83.74    196.92   57.80    259.46   83.07    449.74
Ours         66.94    118.89   82.04    183.76   52.96    246.63   74.80    398.37

Table 3: Generalization performance of our RL-based multiple-column selection strategy to larger problem sizes.

Ablation Study
We have demonstrated the effectiveness of our proposed RL-based multiple-column selection strategy on CSP and GCP. To show the effect of different components of the neural network, we consider two variants of the complete model: 1) removing the embeddings of the complete graph and 2) removing the embedding of input global features. Other components remain unchanged. We conduct the ablation evaluation on CSP and the results are reported in Table 4.

                 CSP (Easy)        CSP (Normal)      CSP (Hard)
Strategy         # Itr    Time     # Itr    Time     # Itr    Time
Greedy-M         12.03    75.34    27.01    62.60    52.13    96.42
Random-M         13.97    84.83    28.26    63.78    51.43    96.51
Diverse-M        11.45    74.59    25.03    59.32    47.24    93.60
DQN-M            11.50    74.71    25.94    60.36    48.59    94.51
Variant 1 (i)    10.91    71.48    24.38    58.40    45.75    92.16
Variant 2 (ii)   10.40    68.21    23.03    56.87    44.92    89.30
Complete Model   10.33    67.84    22.85    55.05    43.95    87.95
(i) with the embeddings of the complete graph removed.
(ii) with the embedding of the input global features removed.

Table 4: Performance of the RL-based multiple-column selection strategies using different neural network architectures.

Compared to the complete model, the performance of both variants is degraded on all three instance categories. The results show that the embeddings of the complete graph and the embeddings of explicit global features both provide benefits for the learned multiple-column selection strategy. Notably, the performance of the learned multiple-column selection strategy decreases obviously when the embeddings of the complete graph are removed, which further highlights the importance of the candidate column interrelations in column selection.

To show the advantage of our approach over the framework of (Chi et al. 2022), we conduct an extended experiment using a modified implementation of their DQN-based approach for multiple-column selection (DQN-M in Table 4), where we select the top-k columns based on their Q-values. DQN-M outperforms the greedy and random baselines but underperforms Diverse-M. This is because DQN-M independently selects k columns with higher Q-values: while each of them can individually lead to good convergence, the combination of them is not necessarily the best, just as 5 Michael Jordans in a basketball team may not be better than a reasonable lineup. In other words, DQN-M is selecting "the top-k columns", while our RL-based multiple-column selection strategy is devoted to selecting "the best k-combination".

Conclusion
In this paper, we propose an RL-based multiple-column selection strategy for CG.
We formulate the multiple-column selection task as an MDP, and introduce an actor-critic style neural network that takes into account the column-constraint structure of RMP, the interrelations of candidate columns, as well as global properties of the problem instance. We evaluate our proposed RL-based multiple-column selection strategy on two sets of problems: the CSP and the GCP. Experimental results show that our RL-based multiple-column selection strategy outperforms all baseline single-column and multiple-column selection strategies in the three instance categories of various sizes. Extensive experiments also demonstrate the ability of our RL-based model to generalize to larger-scale problem instances. Despite the significant performance of the RL-based multiple-column selection strategy, more progress can be made in exploring column selection strategies to select different numbers of columns adaptively based on the problem properties and solution stages. This is challenging for hand-crafted rules due to the various characteristics of different types and sizes of problems, but may be learned using an RL agent through the interaction with the CG solution environment. In addition, how to incorporate the learning-based column selection strategy with other acceleration methods for CG, such as dual stabilization and PP reduction, is also a possible direction for future efforts.

Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 61936009; and in part by the National Science and Technology Innovation 2030 Major Project of the Ministry of Science and Technology of China under Grant 2018AAA0101604. We also thank Wanlu Yang and Peng Jiang for their valuable comments and corrections.

References
Babaki, B.; Jena, S. D.; and Charlin, L. 2021. Neural column generation for capacitated vehicle routing. In AAAI-22 Workshop on Machine Learning for Operations Research.
Barnhart, C.; Johnson, E. L.; Nemhauser, G. L.; Savelsbergh, M. W. P.; and Vance, P. H. 1998. Branch-and-price: Column generation for solving huge integer programs. Operations Research, 46(3): 316–329.
Bengio, Y.; Lodi, A.; and Prouvost, A. 2021. Machine learning for combinatorial optimization: A methodological tour d'horizon. European Journal of Operational Research, 290(2): 405–421.
Chi, C.; Aboussalah, A. M.; Khalil, E. B.; Wang, J.; and Sherkat-Masoumi, Z. 2022. A deep reinforcement learning framework for column generation. In Advances in Neural Information Processing Systems.
Delorme, M.; Iori, M.; and Martello, S. 2016. Bin packing and cutting stock problems: Mathematical models and exact algorithms. European Journal of Operational Research, 255(1): 1–20.
Delorme, M.; Iori, M.; and Martello, S. 2018. BPPLIB: A library for bin packing and cutting stock problems. Optimization Letters, 12(2): 235–250.
Desaulniers, G.; Desrosiers, J.; and Solomon, M. M. 2002. Accelerating strategies in column generation methods for vehicle routing and crew scheduling problems. Essays and Surveys in Metaheuristics, 309–324.
Desaulniers, G.; Desrosiers, J.; and Solomon, M. M. 2006. Column Generation. Springer Science & Business Media.
du Merle, O.; Villeneuve, D.; Desrosiers, J.; and Hansen, P. 1999. Stabilized column generation. Discrete Mathematics, 194(1-3): 229–237.
Gasse, M.; Chételat, D.; Ferroni, N.; Charlin, L.; and Lodi, A. 2019. Exact combinatorial optimization with graph convolutional neural networks.
In Advances in Neural Information Processing Systems.
Gilmore, P. C.; and Gomory, R. E. 1961. A linear programming approach to the cutting-stock problem. Operations Research, 9(6): 849–859.
Goffin, J.-L.; and Vial, J.-P. 2000. Multiple cuts in the analytic center cutting plane method. SIAM Journal on Optimization, 11(1): 266–288.
Gurobi Optimization, LLC. 2023. Gurobi optimizer reference manual.
Kamiński, K.; Ludwiczak, J.; Jasiński, M.; Bukala, A.; Madaj, R.; Szczepaniak, K.; and Dunin-Horkawicz, S. 2021. Rossmann-toolbox: A deep learning-based protocol for the prediction and design of cofactor specificity in Rossmann fold proteins. Briefings in Bioinformatics, 23(1): bbab371.
Lübbecke, M. E.; and Desrosiers, J. 2005. Selected topics in column generation. Operations Research, 53(6): 1007–1023.
Malaguti, E.; and Toth, P. 2010. A survey on vertex coloring problems. International Transactions in Operational Research, 17(1): 1–34.
Mazyavkina, N.; Sviridov, S.; Ivanov, S.; and Burnaev, E. 2021. Reinforcement learning for combinatorial optimization: A survey. Computers & Operations Research, 134: 105400.
Mehrotra, A.; and Trick, M. A. 1996. A column generation approach for graph coloring. INFORMS Journal on Computing, 8(4): 344–354.
Morabit, M.; Desaulniers, G.; and Lodi, A. 2021. Machine-learning-based column selection for column generation. Transportation Science, 55(4): 815–831.
Moungla, N. T.; Létocart, L.; and Nagih, A. 2010. Solutions diversification in a column generation algorithm. Algorithmic Operations Research, 5(2): 86–95.
Pessoa, A.; Sadykov, R.; Uchoa, E.; and Vanderbeck, F. 2018. Automation and combination of linear-programming based stabilization techniques in column generation. INFORMS Journal on Computing, 30(2): 339–360.
Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Shen, Y.; Sun, Y.; Li, X.; Eberhard, A.; and Ernst, A. 2022. Enhancing column generation by a machine-learning-based pricing heuristic for graph coloring. In AAAI Conference on Artificial Intelligence.
Tahir, A.; Quesnel, F.; Desaulniers, G.; Hallaoui, I. E.; and Yaakoubi, Y. 2021. An improved integral column generation algorithm using machine learning for aircrew pairing. Transportation Science, 55(6): 1411–1429.
Vanderbeck, F. 1994. Decomposition and column generation for integer programs. Ph.D. thesis, UCL-Université Catholique de Louvain.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph attention networks. In International Conference on Learning Representations.
Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2019. How powerful are graph neural networks? In International Conference on Learning Representations.
Yang, W.; Jiang, P.; and Song, S. 2023. High-speed train timetabling based on reinforcement learning. In IEEE Symposium Series on Computational Intelligence, 1187–1193.
Yuan, H.; Jiang, P.; and Song, S. 2022. The neural-prediction based acceleration algorithm of column generation for graph-based set covering problems. In IEEE International Conference on Systems, Man, and Cybernetics, 1115–1120.
Zhang, C.; Song, W.; Cao, Z.; Zhang, J.; Tan, P. S.; and Xu, C. 2020. Learning to dispatch for job shop scheduling via deep reinforcement learning. In Advances in Neural Information Processing Systems.
Large-Scale Non-convex Stochastic Constrained Distributionally Robust Optimization
Qi Zhang 1, Yi Zhou 2, Ashley Prater-Bennette 3, Lixin Shen 4, Shaofeng Zou 1
1University at Buffalo 2The University of Utah 3Air Force Research Laboratory 4Syracuse University
[email protected], [email protected], [email protected], [email protected], [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Distributionally robust optimization (DRO) is a powerful framework for training robust models against data distribution shifts. This paper focuses on constrained DRO, which has an explicit characterization of the robustness level. Existing studies on constrained DRO mostly focus on convex loss functions, and exclude the practical and challenging case with non-convex loss functions, e.g., neural networks. This paper develops a stochastic algorithm and its performance analysis for non-convex constrained DRO. The computational complexity of our stochastic algorithm at each iteration is independent of the overall dataset size, and thus it is suitable for large-scale applications. We focus on the uncertainty set defined by the general Cressie-Read family of divergences, which includes the χ²-divergence as a special case. We prove that our algorithm finds an ε-stationary point with a computational complexity improved over existing methods. Our method also applies to the smoothed conditional value at risk (CVaR) DRO.

1 Introduction
Machine learning algorithms typically employ the approach of Empirical Risk Minimization (ERM), which minimizes the expected loss under the empirical distribution P0 of the training dataset and assumes that test samples are generated from the same distribution. However, in practice, there usually exists a mismatch between the training and testing distributions due to various reasons: for example, in domain adaptation tasks, domains differ from training to testing (Blitzer, McDonald, and Pereira 2006; Daume III and Marcu 2006); test samples were selected from minority groups which are underrepresented in the training dataset (Grother et al. 2011; Hovy and Søgaard 2015); and there might exist potential adversarial attacks (Goodfellow, Shlens, and Szegedy 2014; Madry et al. 2017). Such a mismatch may lead to a significant performance degradation. This challenge spurred noteworthy efforts on developing a framework of Distributionally Robust Optimization (DRO), e.g., (Ben-Tal et al. 2013; Shapiro 2017; Rahimian and Mehrotra 2019). Rather than minimizing the expected loss under one fixed distribution, in DRO, one seeks to optimize the expected loss under the worst-case distribution in an uncertainty set of distributions. Specifically, DRO aims to solve the following problem:
$$\inf_x \sup_{Q \in \mathcal{U}(P_0)} \mathbb{E}_{S \sim Q}\, \ell(x; S), \qquad (1)$$
where $\mathcal{U}(P_0)$ is an uncertainty set of distributions centered at $P_0$, $P_0$ is the empirical distribution of the training dataset, $\ell$ is the loss function, and $x$ is the optimization variable. For example, the uncertainty set can be defined as
$$\mathcal{U}(P_0) := \{ Q : D(Q \,\|\, P_0) \le \rho \}, \qquad (2)$$
where $D$ is some distance-like metric, e.g., the Kullback-Leibler (KL) divergence or the χ²-divergence, and ρ is the uncertainty level. In practice, for ease of implementation and analysis, a relaxed formulation of eq. (1), which is referred to as the penalized DRO, is usually solved (Levy et al. 2020; Jin et al. 2021; Qi et al. 2021; Sinha et al. 2017):
$$\inf_x \sup_Q \mathbb{E}_{S \sim Q}\, \ell(x; S) - \lambda D(Q \,\|\, P_0), \qquad (3)$$
where λ > 0 is a fixed hyperparameter that needs to be chosen manually.
In contrast to constrained DRO in eq. (1), a regularization term is added to the objective function to keep the distribution Q and the distribution P0 close, and the hyperparameter λ is manually chosen beforehand to control the tradeoff with minimizing the loss. Compared with the penalized DRO setting, the constrained DRO problem in eq. (1) requires that the distribution Q be strictly in the uncertainty set, and searches for the optimal solution under the worst-case distribution in the uncertainty set. Therefore, the obtained solution from the constrained DRO is minimax optimal for distributions in the uncertainty set, whereas it is hard to get such a guarantee for the penalized DRO relaxation. In this paper, we focus on the challenging constrained DRO problem in eq. (1). Existing studies on constrained DRO are limited to convex loss functions or require some additional assumptions (Soma and Yoshida 2020; Hashimoto et al. 2018; Levy et al. 2020; Duchi and Namkoong 2018; Duchi, Glynn, and Namkoong 2021; Qi et al. 2022; Wang, Gao, and Xie 2021). Little is understood about the practical non-convex loss functions, e.g., neural networks. In this paper, we focus on the constrained DRO problem with non-convex loss. DRO problems under different uncertainty sets are fundamentally different. As will be discussed later in related works, there is a rich literature on DRO with various uncertainty sets. In this paper, we focus on the uncertainty set defined by the general Cressie-Read family of divergences (Duchi and Namkoong 2018; Jin et al. 2021), which includes, e.g., the χ²-divergence, as a special case (see Section 2 for more details). We also investigate the smoothed conditional value at risk (CVaR) DRO problem. More importantly, we focus on the practical yet challenging large-scale scenario, where P0 is the empirical distribution of N samples and N is very large. In classic stochastic optimization problems, e.g., ERM, it is easy to get an unbiased estimate of the gradient using only a few samples, and therefore the computational complexity at each iteration is independent of the training dataset size. However, in the DRO problems, due to taking the worst-case distributions in the objective, the problem becomes challenging. Many existing DRO algorithms incur a complexity that increases linearly (or even worse) in the training dataset size (Duchi and Namkoong 2018; Namkoong and Duchi 2016; Ghosh, Squillante, and Wollega 2018), which is not feasible for large-scale applications. In this paper, we will design a stochastic algorithm with computational complexity at each iteration being independent of the training dataset size (Qi et al. 2022; Wang, Gao, and Xie 2021; Levy et al. 2020).

1.1 Challenges and Contributions
The key challenges and main contributions in this paper are summarized as follows.
• For large-scale applications, the number of training samples is large, and therefore directly computing the full gradient is not practical. Nevertheless, as discussed above, it is challenging to obtain an unbiased estimate of the gradient for DRO problems using only a few samples. For the φ-divergence DRO problem, the distributions in the uncertainty set are absolutely continuous w.r.t. the training distribution. Thus the distributions in the uncertainty set can be parameterized by an N-dimensional vector (Namkoong and Duchi 2016). Then the DRO problem becomes a min-max problem and primal-dual algorithms (Rafique et al.
2022; Lin, Jin, and Jordan 2020; Xu et al. 2023) can be used directly. Subsampling methods in DRO were also studied in (Namkoong and Duchi 2016; Ghosh, Squillante, and Wollega 2018). However, all the above studies require a computational complexity linear or even worse in the training dataset size at each iteration, and thus are prohibitive in large-scale applications. In (Levy et al. 2020), an efficient subsampling method was proposed, where the batch size is independent of the training dataset size. However, they only showed the sampling bias for the χ² and CVaR DRO problems. In this paper, we generalize the analysis of the bias in (Levy et al. 2020) to the general Cressie-Read family. We further develop a Frank-Wolfe update on the dual variables in order to bound the gap between the objective and its optimal value given the optimization variable x and the biased estimate.
• The second challenge is due to the non-convex loss function. Existing studies for the Cressie-Read divergence family (Duchi and Namkoong 2018; Levy et al. 2020) are limited to the case with convex loss function, and their approach does not generalize to the non-convex case. The key difficulty lies in that the subgradient of the objective function cannot be obtained via the subdifferential for non-convex loss functions. Instead of explicitly calculating the worst-case distribution as in (Duchi and Namkoong 2018; Levy et al. 2020), we propose to design an algorithm for the dual problem, which optimizes the objective under a known distribution. Thus the gradient of the objective can be efficiently obtained.
• The third challenge is that the dual form of constrained DRO is neither smooth nor Lipschitz, making the convergence analysis difficult. Existing studies, e.g., (Wang, Gao, and Xie 2021), assume that the optimal dual variable is bounded away from zero, i.e., λ* > λ0 for some λ0 > 0, so that it is sufficient to consider λ ≥ λ0. However, this assumption may not necessarily be true, as shown in (Wang, Gao, and Xie 2021; Hu and Hong 2013). In this paper, we generalize the idea in (Qi et al. 2022; Levy et al. 2020) to the general Cressie-Read divergence family. We design an approximation of the original problem, and show that it is smooth and Lipschitz. The approximation error can be made arbitrarily small so that the solution to the approximation is still a good solution to the original problem. We prove the strong duality of the approximated problem. We add a regularizer to the objective, and at the same time we keep the hard constraint. In this way, we can guarantee that its dual variable λ has a positive lower bound. Moreover, our strong duality holds for any φ-divergence DRO problem.
• We design a novel algorithm to solve the approximated problem and prove that it converges to a stationary point of the constrained DRO problem. The general proximal gradient descent algorithm (Ghadimi, Lan, and Zhang 2016) can be used to solve this approximated problem directly. However, it assumes the objective is non-convex in all parameters and does not provide a tight bound on the bias due to subsampling. We take advantage of the fact that the objective function is convex in λ, and thus the bias due to subsampling can be bounded in a tighter way. Our proposed algorithm converges to a stationary point faster than existing methods.

1.2 Related Work
Various Uncertainty Sets. φ-divergence DRO problems (Ali and Silvey 1966; Csiszár 1967) were widely studied, for example, CVaR in (Rockafellar, Uryasev et al. 2000; Soma and Yoshida 2020; Curi et al.
2020; Tamar, Glassner, and Mannor 2015), the χ²-divergence in (Hashimoto et al. 2018; Ghosh, Squillante, and Wollega 2018; Levy et al. 2020), the KL-divergence in (Qi et al. 2021, 2022; Hu and Hong 2013), and the Sinkhorn distance (Wang, Gao, and Xie 2021), a variant of the Wasserstein distance based on entropic regularization. However, the above studies are for specific divergence functions and cannot be extended directly to the general Cressie-Read divergence family.

Penalized DRO. The general φ-divergence DRO problem was studied in (Jin et al. 2021), where the proposed algorithm works for any divergence function with a smooth conjugate. The authors also designed a smoothed version of the CVaR problem and showed their method converges to a stationary point. However, their method is for the penalized formulation and does not generalize to the constrained DRO. In this paper, we focus on the challenging constrained DRO, the solution of which is minimax optimal over the uncertainty set. Our proposed algorithm can also be applied to solve the smoothed CVaR problem in the constrained setting.

Constrained DRO With Convex Loss. The general φ-divergence constrained DRO problem was studied in (Namkoong and Duchi 2016). Instead of optimizing from the dual form, the authors treat the worst-case distribution as an N-dimensional vector and implement a stochastic primal-dual method to solve the min-max problem. However, the computational complexity at each iteration is linear in the number of the training samples, and so the method cannot be used in large-scale applications. The same problem was further studied in (Duchi, Glynn, and Namkoong 2021). The authors pointed out that minimizing constrained DRO with a φ-divergence is equivalent to adding variance regularization to the Empirical Risk Minimization (ERM) objective. The general Cressie-Read divergence family DRO problem was studied in (Duchi and Namkoong 2018), where the basic idea is to calculate the worst-case distribution for the constrained DRO first and then use the subdifferential to get the subgradient. Furthermore, the χ² and CVaR DRO problems were studied in (Levy et al. 2020). Compared with the method in (Duchi and Namkoong 2018), they calculate the worst-case distribution for the penalized DRO and then optimize both the Lagrange multiplier and the loss function. This approach converges to the optimal solution with a reduced complexity. Their method can be extended to the general Cressie-Read divergence family. However, all the above papers are limited to the case with convex loss function. To the best of our knowledge, our work is the first paper on large-scale non-convex constrained DRO with the general Cressie-Read divergence family. We note that the KL DRO was studied in (Qi et al. 2022), which however needs an exponential computational complexity. We achieve a polynomial computational complexity for the Cressie-Read divergence family.

2 Preliminaries and Problem Model
2.1 Notations
Let $s$ be a sample in $\mathcal{S}$ and $P_0$ be the distribution on the points $\{s_i\}_{i=1}^N$, where $N$ is the size of the support. Denote by $\Delta_n := \{p \in \mathbb{R}^n \mid \sum_{i=1}^n p_i = 1,\; p_i \ge 0\}$ the $n$-dimensional probability simplex. Denote by $x \in \mathbb{R}^d$ the optimization variable. We denote by $\mathbb{1}_X(x)$ the indicator function, where $\mathbb{1}_X(x) = 0$ if $x \in X$, and otherwise $\mathbb{1}_X(x) = 1$. Let $\ell : \mathbb{R}^d \times \mathcal{S} \to \mathbb{R}$ be a non-convex loss function. Let $\|\cdot\|$ be the Euclidean norm and $(t)_+ := \max\{t, 0\}$ be the positive part of $t \in \mathbb{R}$. Denote by $\nabla_x$ the gradient with respect to $x$.
For a function $f : \mathbb{R}^d \to \mathbb{R}$, a point $x \in \mathbb{R}^d$ is said to be an ε-optimal solution if $|f(x) - f(x^*)| \le \epsilon$, where $f(x^*)$ is the optimal value of $f$. If the function $f$ is differentiable, a point $x \in \mathbb{R}^d$ is said to be first-order ε-stationary if $\|\nabla f(x)\| \le \epsilon$.

2.2 Assumptions
In this paper, we take the following standard assumptions that are commonly used in the DRO literature (Duchi and Namkoong 2018; Levy et al. 2020; Qi et al. 2021, 2022; Wang, Gao, and Xie 2021; Soma and Yoshida 2020):
• The non-convex loss function is bounded: $0 \le \ell(x; s) \le B$ for some $B > 0$, $\forall x \in \mathbb{R}^d$, $s \in \mathcal{S}$.
• The non-convex loss function is $G$-Lipschitz, such that $|\ell(x_1; s) - \ell(x_2; s)| \le G \|x_1 - x_2\|$, and $L$-smooth, such that $\|\nabla_x \ell(x_1; s) - \nabla_x \ell(x_2; s)\| \le L \|x_1 - x_2\|$, for any $x_1, x_2 \in \mathbb{R}^d$ and $s \in \mathcal{S}$.

2.3 DRO Objective and Its Dual Form
In empirical risk minimization (ERM), the goal is to solve $\inf_x \mathbb{E}_{S \sim P_0}\, \ell(x; S)$, where the objective function is the expectation of the loss function with respect to the training distribution $P_0$. To address the distributional mismatch between training data and testing data, the formulation of Distributionally Robust Optimization (DRO) (Goodfellow, Shlens, and Szegedy 2014; Madry et al. 2017; Rahimian and Mehrotra 2019) was developed, where the goal is to minimize the expected loss with respect to the worst distribution in an uncertainty set $\mathcal{U}(P_0)$:
$$\inf_x \sup_{Q \in \mathcal{U}(P_0)} \mathbb{E}_{S \sim Q}\, \ell(x; S). \qquad (4)$$
DRO problems under different uncertainty sets are fundamentally different. Consider the uncertainty set defined by the φ-divergence $D_\varphi(Q \,\|\, P_0)$, which is one of the most common choices in the literature and can be written as $D_\varphi(Q \,\|\, P_0) := \int \varphi\big( \frac{dQ}{dP_0} \big)\, dP_0$, where $\varphi$ is a non-negative convex function such that $\varphi(1) = 0$ and $\varphi(t) = +\infty$ for any $t < 0$. Then let the uncertainty set be $\mathcal{U}(P_0) := \{Q : D_\varphi(Q \,\|\, P_0) \le \rho\}$, where $\rho > 0$ is the radius of the uncertainty set. In this paper, we study the general Cressie-Read family of φ-divergences (Cressie and Read 1984; Van Erven and Harremos 2014), where
$$\varphi_k(t) := \frac{t^k - kt + k - 1}{k(k-1)}, \qquad (5)$$
with $k \in (-\infty, +\infty) \setminus \{0, 1\}$. Let $k^* = \frac{k}{k-1}$. This family includes as special cases the χ²-divergence ($k = 2$) and the KL divergence ($k \to 1$). When $k > 2$, the conjugate function of $\varphi_k(t)$ (which will be introduced later) is not smooth; thus the problem becomes hard to solve even in the penalized formulation (Jin et al. 2021). In this paper, we focus on $k \in (1, 2]$ ($k^* \in [2, \infty)$). The objective is
$$\inf_x \sup_{Q : D_{\varphi_k}(Q \| P_0) \le \rho} \mathbb{E}_{S \sim Q}\, \ell(x; S). \qquad (6)$$
Solving (6) directly is challenging due to the sup over $Q$. In (Namkoong and Duchi 2016), a finite-dimensional vector $q$ was used to parameterize the distributions in the uncertainty set, since $Q \ll P_0$ for φ-divergences. Then the DRO problem becomes a convex-concave min-max problem. This
In this way, the optimization problem under an unknown distribution is rewritten into one under a known distribution. The subsampling method can then be used, which leads to a complexity independent of the sample size (as will be introduced later). For the Cressie-Read family in (5), the corresponding family of conjugate functions is
$$\varphi^*_k(t) = \frac{1}{k}\left[\big((k-1)t + 1\big)_+^{k^*} - 1\right].$$
Therefore, the objective can be written as
$$\inf_{x,\, \lambda \geq 0,\, \tilde\eta \in \mathbb{R}} \mathbb{E}_{S \sim P_0}\left[\frac{\lambda}{k}\left((k-1)\frac{\ell(x; S) - \tilde\eta}{\lambda} + 1\right)_+^{k^*}\right] + \lambda\left(\rho - \frac{1}{k}\right) + \tilde\eta.$$
Let $\eta = \tilde\eta - \frac{\lambda}{k-1}$; the corresponding objective is
$$\inf_{x,\, \lambda \geq 0,\, \eta \in \mathbb{R}} \mathbb{E}_{S \sim P_0}\left[\frac{(k-1)^{k^*}}{k}\big(\ell(x; S) - \eta\big)_+^{k^*} \lambda^{1-k^*}\right] + \lambda\left(\rho + \frac{1}{k(k-1)}\right) + \eta.$$
Define
$$f(x; \lambda; \eta; s) = \frac{(k-1)^{k^*}}{k}\big(\ell(x; s) - \eta\big)_+^{k^*} \lambda^{1-k^*} + \lambda\left(\rho + \frac{1}{k(k-1)}\right) + \eta. \quad (7)$$
Thus the goal is to solve
$$\inf_x \inf_{\lambda \geq 0,\, \eta \in \mathbb{R}} F(x; \lambda; \eta), \quad (8)$$
where $F(x; \lambda; \eta) = \mathbb{E}_{S \sim P_0}\big[f(x; \lambda; \eta; S)\big]$. Therefore, we reformulate the DRO problem as minimizing an objective function under a known distribution, where the subsampling method can be used to reduce the complexity.

3 Analysis of Constrained DRO

In this section, we analyze constrained DRO problems under Cressie-Read family divergence uncertainty sets with general smooth non-convex loss functions. We first discuss the challenges arising in constrained formulations, and then present how to construct the corresponding approximated problem to overcome these challenges.

3.1 Smooth and Lipschitz Approximation

For $\lambda \in [0, +\infty)$ and $\eta \in \mathbb{R}$, the objective function $F(x; \lambda; \eta)$ is neither smooth nor Lipschitz, so it is difficult to apply gradient-based algorithms. In the following, we construct an approximation of the original problem such that the objective function $F(x; \lambda; \eta)$ becomes smooth and Lipschitz, by constraining both $\lambda$ and $\eta$ to bounded intervals. Denote $\omega = (k(k-1)\rho + 1)^{\frac{1}{k^*}}$. Since the loss function is bounded, $0 \leq \ell \leq B$, we can show that there exists an upper bound
$$\bar\lambda = (k-1)\,\omega^{-1}\left(1 + \frac{(\frac{1}{\omega})^{\frac{1}{k^*-1}}}{1 - (\frac{1}{\omega})^{\frac{1}{k^*-1}}}\right) B,$$
which depends only on $k$, $\rho$ and $B$, such that the optimal value satisfies $\lambda^* \leq \bar\lambda$. In this paper, we do not assume $\lambda^* \geq \lambda_0 > 0$ as in (Wang, Gao, and Xie 2021). Instead, we consider an approximation with $\lambda \in [\lambda_0, \bar\lambda]$, and show that the difference between the original problem and the approximation can be bounded. We can also show that the corresponding optimal $\eta^* \in [-\bar\eta, B]$, where $\bar\eta = \bar\lambda \left(\frac{k}{(k-1)^{k^*} k^*}\right)^{\frac{1}{k^*-1}}$. The proof can be found in Appendix A. The challenge lies in that the value of $\eta$ can be negative; given such an $\eta$, the optimal value of $\lambda$ can be quite large, making it hard to upper bound $\lambda$. In our proof, we reduce the objective to a function that depends only on $\eta$ and find a lower bound on $\eta$; based on this lower bound, we resolve the challenge and further obtain the bound on $\lambda$. We show that the difference between the original problem and the approximation is bounded in the following lemma.

Lemma 1. For all $x \in \mathbb{R}^d$ and $0 \leq \lambda_0 \leq \bar\lambda$,
$$\left|\inf_{\lambda \in [\lambda_0, \bar\lambda],\, \eta \in [-\bar\eta, B]} F(x; \lambda; \eta) - \inf_{\lambda \geq 0,\, \eta \in \mathbb{R}} F(x; \lambda; \eta)\right| \leq 2\lambda_0\rho.$$

The proof can be found in Appendix B. Note that in the proof of this lemma, we establish the strong duality of
$$\sup_{D_{\varphi_k}(Q \| P_0) \leq \rho} \mathbb{E}_{S \sim Q}\,\ell(x; S) - \lambda_0 D_{\varphi_k}(Q \| P_0),$$
where both the hard constraint and the regularizer are kept. This differs from the approach in Section 3.2 of (Shapiro 2017), and this strong duality holds for any $\varphi$-divergence DRO problem. Lemma 1 demonstrates that the non-smooth objective function can be approximated by a smooth one: a smaller $\lambda_0$ makes the gap smaller but the function "less smooth".
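To make the reformulation concrete, the following is a minimal NumPy sketch (our own illustration, not the authors' code; all names are ours) of the quantity being optimized: the sample average of $f(x; \lambda; \eta; s)$ from Eq. (7), with $(\lambda, \eta)$ understood to lie in the box above.

```python
import numpy as np

def cressie_read_dual(losses, lam, eta, k, rho):
    """Batch average of the per-sample dual objective f(x; lambda; eta; s) in Eq. (7).

    losses: np.ndarray of shape (n,), precomputed values of ell(x; s_i) in [0, B]
    lam:    dual variable lambda (kept in [lambda_0, lambda_bar])
    eta:    dual variable eta   (kept in [-eta_bar, B])
    k:      Cressie-Read parameter, k in (1, 2]
    rho:    radius of the uncertainty set
    """
    k_star = k / (k - 1.0)
    plus = np.maximum(losses - eta, 0.0)                  # (ell - eta)_+
    coef = (k - 1.0) ** k_star / k                        # (k-1)^{k*} / k
    term = coef * plus ** k_star * lam ** (1.0 - k_star)  # first term of Eq. (7)
    return term.mean() + lam * (rho + 1.0 / (k * (k - 1.0))) + eta
```

Minimizing this quantity jointly over the model parameters (through the losses) and $(\lambda, \eta)$ is exactly problem (8), restricted to the smooth region analyzed next.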
3.2 Convexity and Smoothness on Parameters

The advantage of our approximated problem is that the objective function is smooth in all of $x$, $\lambda$, and $\eta$. Moreover, we find that the objective function is convex in $\lambda$ and $\eta$, even though the loss function is non-convex in $x$.

Lemma 2. Define $z = (\lambda, \eta) \in M$, where $M = \{(\lambda, \eta) : \lambda \in [\lambda_0, \bar\lambda],\ \eta \in [-\bar\eta, B]\}$. Then for all $x \in \mathbb{R}^d$ and $z \in M$, the objective function $F(x; z)$ is convex and $L_z$-smooth in $z$, where $L_z = \frac{1}{\lambda_0} + \frac{2(B + \bar\eta)}{\lambda_0^2} + \frac{(B + \bar\eta)^2}{2\lambda_0^3}$ if $k^* = 2$, and $L_z = \frac{(k-1)^{k^*}}{k} k^*(k^* - 1)\left(\frac{(B + \bar\eta)^{k^*}}{\lambda_0^{k^*+1}} + \frac{(B + \bar\eta)^{k^*-2}}{\lambda_0^{k^*-1}}\right)$ if $k^* > 2$. Moreover, the objective function $F(x; z)$ is $L_x$-smooth in $x$, where $L_x = \frac{(k-1)^{k^*}}{k} k^* \lambda_0^{1-k^*} (B + \bar\eta)^{k^*-2}\big((k^* - 1)G^2 + (B + \bar\eta)L\big)$.

The proof can be found in Appendix C. Note that the first-order gradient of the objective function is non-differentiable at some points when $k^* = 2$. Therefore, we discuss two cases: $k^* > 2$ and $k^* = 2$. In the first case, we can compute the Hessian matrix of the objective; in the second case, we show the smoothness and convexity directly.

4 Mini-Batch Algorithm

Existing constrained stochastic algorithms for general non-convex functions (Ghadimi, Lan, and Zhang 2016) can be used to solve the approximated problem directly. However, that method optimizes $y = (x; \lambda; \eta)$ as a whole. The objective function is non-convex in $y$, and the computational complexity to reach an $\epsilon$-stationary point is $O(\epsilon^{-3k^*-5})$. In the previous section, we showed that $F(x; z)$ is $L_z$-smooth in $z$ and $L_x$-smooth in $x$. Moreover, $L_z \sim O(\lambda_0^{-k^*-1})$, which is much larger than $L_x$ when $\lambda_0$ is small, since $L_x \sim O(\lambda_0^{-k^*+1})$. If we optimize all the parameters together, we need to run a non-convex algorithm on a smooth function with a large smoothness constant, which is not computationally efficient. However, if we optimize $x$ and $z$ separately, then although $L_z > L_x$ means that optimizing $z$ requires more resources, the convexity in $z$ makes it faster to converge to the optimal value of $z$. This motivates us to consider a stronger convergence criterion: instead of finding an $\epsilon$-stationary point of $F(y)$, we find $(x, \lambda, \eta)$ such that
$$\|\nabla_x F(x; \lambda; \eta)\| \leq \epsilon, \qquad \left|F(x; \lambda; \eta) - \inf_{\lambda' \geq 0,\, \eta' \in \mathbb{R}} F(x; \lambda'; \eta')\right| \leq \epsilon.$$
We then provide our Stochastic gradient and Frank-Wolfe DRO algorithm (SFK-DRO), which optimizes $x$ and $z$ separately (see Algorithm 1).

Algorithm 1: SFK-DRO
Input: iteration number $K$, initial point $(x_1, z_1)$, sample sizes $n_x$, $n_z$, step size $\alpha$, and a constant $C$
1: Let $t = 1$
2: while $t \leq K$ do
3:   Randomly select $n_x$ samples and compute $\nabla_x f_x(x_t, z_t) = \frac{1}{n_x}\sum_{i=1}^{n_x} \nabla_x f(x_t; z_t; s_i)$
4:   $x_{t+1} = x_t - \alpha \nabla_x f_x(x_t, z_t)$
5:   Randomly select $n_z$ samples and compute $\nabla_z f_z(x_{t+1}, z_t) = \frac{1}{n_z}\sum_{j=1}^{n_z} \nabla_z f(x_{t+1}; z_t; s_j)$
6:   $e_t = \arg\min_{e \in M}\langle e, \nabla_z f_z(x_{t+1}; z_t)\rangle$
7:   $d_t = e_t - z_t$
8:   $g_t = \langle d_t, -\nabla_z f_z(x_{t+1}; z_t)\rangle$
9:   $\gamma_t = \min\{g_t / C,\ 1\}$
10:  $z_{t+1} = z_t + \gamma_t d_t$
11:  $t = t + 1$
12: end while
13: $t_0 = \arg\min_t \|\nabla_x f_x(x_t; z_t)\|^2 + g_t^2$
Output: $(x_{t_0+1}, z_{t_0})$

Define $D = \max_{z_1, z_2 \in M}\|z_1 - z_2\|$, $\sigma = \frac{(k-1)^{k^*}}{k} k^* (B + \bar\eta)^{k^*-1} G \lambda_0^{1-k^*}$, $\Delta = F(x_1; z_1) - \inf_{x,\, z \in M} F(x; z)$, and let $C$ be a constant such that $C \geq D L_z$.
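For readers who prefer code, the following NumPy sketch mirrors Algorithm 1 (our own rendering under the stated assumptions): `loss` and `grad_loss` are user-supplied placeholders returning per-sample loss values of shape (n,) and per-sample gradients of shape (n, d), and the analytic gradients of $f$ in $(\lambda, \eta)$ follow by differentiating Eq. (7).

```python
import numpy as np

def sfk_dro(x, loss, grad_loss, data, k, rho, lo, hi, alpha, C, K, n_x, n_z, seed=0):
    """Sketch of Algorithm 1 (SFK-DRO): SGD on x, Frank-Wolfe on z = (lambda, eta).
    lo, hi: arrays of shape (2,), bounds of the box M = [lambda_0, lambda_bar] x [-eta_bar, B]."""
    rng = np.random.default_rng(seed)
    k_star = k / (k - 1.0)
    c1 = (k - 1.0) ** k_star / k
    c2 = rho + 1.0 / (k * (k - 1.0))
    z = np.array([hi[0], 0.0])                       # initial (lambda, eta) inside M
    for _ in range(K):
        lam, eta = z
        # Lines 3-4: stochastic gradient step on x.
        batch = data[rng.choice(len(data), n_x)]
        ell = loss(x, batch)                         # shape (n_x,)
        w = c1 * k_star * np.maximum(ell - eta, 0.0) ** (k_star - 1) * lam ** (1 - k_star)
        x = x - alpha * np.mean(w[:, None] * grad_loss(x, batch), axis=0)
        # Lines 5-10: Frank-Wolfe step on z over the box M.
        batch = data[rng.choice(len(data), n_z)]
        plus = np.maximum(loss(x, batch) - eta, 0.0)
        grad_z = np.array([
            c2 + (1 - k_star) * c1 * np.mean(plus ** k_star) * lam ** (-k_star),
            1.0 - c1 * k_star * np.mean(plus ** (k_star - 1)) * lam ** (1 - k_star),
        ])
        e = np.where(grad_z > 0, lo, hi)             # linear minimization over the box
        d = e - z
        g = -d @ grad_z                              # Frank-Wolfe gap g_t
        z = z + min(g / C, 1.0) * d
    return x, z  # the paper returns the iterate t0 minimizing ||grad_x||^2 + g_t^2
```

Note that the linear minimization step over the box $M$ is trivial: each coordinate of $e_t$ sits at the lower bound where the gradient is positive and at the upper bound otherwise, which is why the per-iteration cost is dominated by the mini-batch gradients.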
The convergence rate is provided in the following theorem.

Theorem 1. With mini-batch sizes $n_x = \frac{8 L_x \sigma^2}{C \epsilon^2} \sim O(\lambda_0^{-2k^*+4}\epsilon^{-2})$ and $n_z \sim O(\epsilon^{-k^*})$ chosen such that $3B\sqrt{1 + k(k-1)\rho}\,\sqrt{\frac{4 + \log(n_z)}{4 n_z}} < \frac{\epsilon}{4}$ if $k^* = 2$, or $3B(1 + k(k-1)\rho)^{\frac{1}{k}}\left(\frac{1}{n_z} + \frac{1}{2^{k^*-1}(k^*-2) n_z}\right)^{\frac{1}{k^*}} < \frac{\epsilon}{4}$ if $k^* > 2$, with $\alpha = \frac{1}{C}$ and $\lambda_0 = \frac{\epsilon}{8\rho}$, and for any small $\epsilon > 0$ such that $\frac{D L_z}{L_x} \sim O(\epsilon^{-2}) \geq 2$ and $\frac{g}{C} \sim O(\epsilon) \leq 1$, at most $T = 16 C \Delta \epsilon^{-2} \sim O(\lambda_0^{-k^*-1}\epsilon^{-2})$ iterations are needed to guarantee a stationary point $(x_{t_0+1}; z_{t_0})$ in expectation:
$$\mathbb{E}\big[\|\nabla_x F(x_{t_0+1}; z_{t_0})\|\big] \leq \epsilon, \qquad \mathbb{E}\left[\left|F(x_{t_0+1}; z_{t_0}) - \inf_{\lambda \geq 0,\, \eta \in \mathbb{R}} F(x_{t_0+1}; \lambda; \eta)\right|\right] \leq \epsilon.$$

The detailed proof can be found in Appendix D, and a proof sketch is provided later. Before that, we introduce a lemma for our subsampling method; via this lemma, we can show that the complexity is independent of the sample size and thus suitable for our large-scale setting. When we optimize $z$, an estimator $f_z(x, z) = \frac{1}{n_z}\sum_{j=1}^{n_z} f(x; z; s_j)$ is built to estimate $F(x; z) = \mathbb{E}_{S \sim P_0}\big[f(x; z; S)\big]$. Although the estimator is unbiased, in our Frank-Wolfe update process (Jaggi 2013; Frank, Wolfe et al. 1956; Lacoste-Julien 2016) we need to estimate $\min_z F(x; z)$ via $\mathbb{E}[\min_z f_z(x; z)]$. The expectation of the minimum is not equal to the minimum of the expectation, so this is a biased estimator. The following lemma shows that this gap can be bounded by a decreasing function of the sample batch size $n_z$.

Lemma 3. For any bounded loss function $\ell$, if $k^* = 2$,
$$\left|\inf_{z \in M} F(x_{t+1}; z) - \mathbb{E}\left[\inf_{z \in M} f_z(x_{t+1}; z)\right]\right| \leq 3B\sqrt{1 + k(k-1)\rho}\,\sqrt{\frac{4 + \log(n_z)}{4 n_z}};$$
and if $k^* > 2$,
$$\left|\inf_{z \in M} F(x_{t+1}; z) - \mathbb{E}\left[\inf_{z \in M} f_z(x_{t+1}; z)\right]\right| \leq 3B(1 + k(k-1)\rho)^{\frac{1}{k}}\left(\frac{1}{n_z} + \frac{1}{2^{k^*-1}(k^*-2) n_z}\right)^{\frac{1}{k^*}}.$$

The detailed proof can be found in Appendix E. Note that (Levy et al. 2020) only show this lemma for $k^* = 2$, and we extend the results to $k^* > 2$. This lemma shows that the gap is of order $O(n_z^{-1/k^*})$ and is independent of the total number of samples.

4.1 Proof Sketch of Theorem 1

We use a stochastic gradient descent method (Moulines and Bach 2011; Gower et al. 2019; Robbins and Monro 1951) to update $x$. Since the objective function is $L_x$-smooth in $x$, if $\alpha \leq \frac{1}{2L_x}$ we have
$$\frac{\alpha}{2}\mathbb{E}\big[\|\nabla_x F_x(x_t; z_t)\|^2\big] \leq \mathbb{E}[F(x_t; z_t)] - \mathbb{E}[F(x_{t+1}; z_t)] + \alpha^2 L_x \mathbb{E}\big[\|\nabla_x f_x(x_t; z_t) - \nabla_x F_x(x_t; z_t)\|^2\big], \quad (9)$$
where $f_x(x, z) = \frac{1}{n_x}\sum_{j=1}^{n_x} f(x; z; s_j)$. With $\sigma = \frac{(k-1)^{k^*}}{k} k^* (B + \bar\eta)^{k^*-1} G \lambda_0^{1-k^*}$, we can show that
$$\mathbb{E}\big[\|\nabla_x f_x(x_t; z_t) - \nabla_x F_x(x_t; z_t)\|^2\big] \leq \frac{\sigma^2}{n_x}. \quad (10)$$
Since $z \in M$, instead of the stochastic gradient descent method we employ the Frank-Wolfe method (Frank, Wolfe et al. 1956) to update $z$. Define $e_t = \arg\min_{e \in M}\langle e, \nabla_z f_z(x_{t+1}; z_t)\rangle$ and $g_t = \langle e_t - z_t, -\nabla_z f_z(x_{t+1}; z_t)\rangle$. In addition, we have $g_t \geq f_z(x_{t+1}; z_t) - \min_{z \in M} f_z(x_{t+1}; z)$ since $f(x; z)$ is convex in $z$ (Jaggi 2013). We can show that $\frac{g_t}{C} \sim O(\lambda_0)$, so for small $\lambda_0$ we have $\frac{g_t}{C} \leq 1$. Then, since the objective is $L_z$-smooth in $z$ (cf. (9) of (Lacoste-Julien 2016)), we have
$$\mathbb{E}\left[\frac{g_t^2}{2C}\right] \leq \mathbb{E}[F(x_{t+1}; z_t)] - \mathbb{E}[F(x_{t+1}; z_{t+1})]. \quad (11)$$
By recursively adding (9) and (11), we obtain
$$\frac{1}{T}\sum_{t=1}^{T}\left(\frac{\alpha}{2}\mathbb{E}\big[\|\nabla_x F_x(x_t; z_t)\|^2\big] + \mathbb{E}\left[\frac{g_t^2}{2C}\right]\right) \leq \frac{F(x_1; z_1) - \mathbb{E}[F(x_{T+1}; z_{T+1})]}{T} + \alpha^2 L_x \frac{\sigma^2}{n_x}. \quad (12)$$
Since $L_z \sim O(\lambda_0^{-k^*-1})$ and $L_x \sim O(\lambda_0^{-k^*+1})$, for small $\lambda_0$ we have $C \geq D L_z \geq 2 L_x$. Then, setting $\alpha = \frac{1}{C} \leq \frac{1}{2L_x}$, $T = 16 C \Delta \epsilon^{-2} \sim O(\lambda_0^{-k^*-1}\epsilon^{-2})$, and $n_x = \frac{8 L_x \sigma^2}{C \epsilon^2}$, with $\Delta = F(x_1; z_1) - \min_{x,\, z \in M} F(x; z)$, for some $t \in [1, T]$ we have
$$\mathbb{E}\big[\|\nabla_x F_x(x_t; z_t)\|\big] \leq \frac{\epsilon}{2}, \quad (13)$$
$$\mathbb{E}\left[F(x_{t+1}; z_t) - \inf_{z \in M} f_z(x_{t+1}; z)\right] \leq \mathbb{E}[g_t] \leq \frac{\epsilon}{2}. \quad (14)$$
We choose $(x_{t+1}, z_t)$ as our output, and we need to bound $\mathbb{E}[\|\nabla_x F_x(x_{t+1}; z_t)\|]$ and $\mathbb{E}[F(x_{t+1}; z_t) - \inf_{z \in M} F(x_{t+1}; z)]$.
Since $F(x; z)$ is $L_x$-smooth in $x$, we have $\mathbb{E}[\|\nabla_x F_x(x_{t+1}; z_t)\|] \leq \epsilon$. By Lemma 3, we pick $n_z \sim O(\epsilon^{-k^*})$ such that
$$\left|\inf_{z \in M} F(x_{t+1}; z) - \mathbb{E}\left[\inf_{z \in M} f_z(x_{t+1}; z)\right]\right| \leq \frac{\epsilon}{4}.$$
By Lemma 1, when $\lambda_0 = \frac{\epsilon}{8\rho}$, we have
$$\left|\inf_{\lambda \in [\lambda_0, \bar\lambda],\, \eta \in [-\bar\eta, B]} F(x; \lambda; \eta) - \inf_{\lambda \geq 0,\, \eta \in \mathbb{R}} F(x; \lambda; \eta)\right| \leq \frac{\epsilon}{4}.$$
Thus we have
$$F(x_{t+1}; z_t) - \inf_{\lambda \geq 0,\, \eta \in \mathbb{R}} F(x_{t+1}; \lambda; \eta) \leq \epsilon, \quad (15)$$
which completes the proof.

5 Smoothed CVaR

Our algorithm can also solve other DRO problems efficiently, for example, the smoothed CVaR proposed in (Jin et al. 2021). The CVaR DRO is an important $\varphi$-divergence DRO problem, with $\varphi(t) = 0$ for $0 \leq t < \frac{1}{\mu}$ and $\varphi(t) = +\infty$ otherwise, where $0 < \mu < 1$ is some constant. The dual expression of CVaR can be written as
$$L_{CVaR}(x; P_0) = \inf_{\eta \in \mathbb{R}} \left\{\frac{1}{\mu}\mathbb{E}_{S \sim P_0}\big[(\ell(x; S) - \eta)_+\big] + \eta\right\}.$$
The dual of CVaR is non-differentiable, which is undesirable from an optimization viewpoint. To solve this problem, (Jin et al. 2021) proposed a new divergence function, which can be seen as a smoothed version of the CVaR; their experimental results show that the optimization of the smoothed CVaR is much easier. However, the method of (Jin et al. 2021) only works for the penalized formulation of DRO. We show that our method can solve the constrained smoothed CVaR. Here, the divergence function is
$$\varphi_s(t) = \begin{cases} t \log(t) + \frac{1 - \mu t}{\mu} \log\!\left(\frac{1 - \mu t}{1 - \mu}\right), & t \in [0, \frac{1}{\mu}); \\ +\infty, & \text{otherwise}. \end{cases} \quad (16)$$
The corresponding conjugate function is
$$\varphi^*_s(t) = \frac{1}{\mu} \log\big(1 - \mu + \mu \exp(t)\big). \quad (17)$$
The objective function is then written as
$$\inf_x \inf_{\lambda \geq 0,\, \eta \in \mathbb{R}} F_s(x; \lambda; \eta) = \mathbb{E}_{S \sim P_0}\left[\lambda \varphi^*_s\!\left(\frac{\ell(x; S) - \eta}{\lambda}\right) + \lambda\rho + \eta\right]. \quad (18)$$
We can show that there exist upper bounds for the optimal values $\lambda^*$ and $\eta^*$: there exists a $\bar\lambda > 0$ that depends only on $\mu$, $B$ and $\rho$ such that $\lambda^* \in [0, \bar\lambda]$, and $\eta^* \in [0, B]$. The proof can be found in Appendix F. This objective function is non-smooth when $\lambda \to 0$. Therefore, we take an approach similar to that of Section 3.1 and approximate the original problem with $\lambda \in [\lambda_0, \bar\lambda]$. We bound the difference in the following lemma.

Lemma 4. For all $x \in \mathbb{R}^d$ and $\lambda_0 \geq 0$,
$$\left|\inf_{\lambda \in [\lambda_0, \bar\lambda],\, \eta \in [0, B]} F_s(x; \lambda; \eta) - \inf_{\lambda \geq 0,\, \eta \in \mathbb{R}} F_s(x; \lambda; \eta)\right| \leq 2\lambda_0\rho.$$

The proof is similar to that of Lemma 1 and is thus omitted. In addition, we can show that $F_s(x; z)$ is $L'_z$-smooth and convex in $z$, where $L'_z \sim O(\lambda_0^{-3})$ if $\lambda \in [\lambda_0, \bar\lambda]$, and it is easy to see that $F_s(x; z)$ is $L'_x$-smooth in $x$, where $L'_x \sim O(\lambda_0^{-2})$. Similar to eq. (42) and Remark 1 in (Levy et al. 2020), we can prove that $\big|\min_{z \in M} F_s(x_{t+1}; z) - \mathbb{E}[\min_{z \in M} f_s(x_{t+1}; z)]\big| \sim O(n_z^{-0.5})$. We can then use Algorithm 1 directly, and the complexity to reach an $\epsilon$-stationary point is $O(\epsilon^{-7})$. The detailed proof can be found in Appendix F.

Class     0      1      2      3      4      5      6      7      8      9
ERM       77.64  86.19  69.33  54.03  51.53  47.05  87.66  85.35  87.12  83.15
SFK-DRO   76.11  84.71  66.18  54.95  58.65  49.36  89.06  84.03  88.41  83.09
PAN-DRO   74.92  85.62  65.72  52.69  55.83  49.50  88.85  84.06  88.68  81.29

Table 1: Test accuracy of each class for imbalanced CIFAR-10.

6 Numerical Results

In this section, we verify our theoretical results by solving an imbalanced classification problem. In the experiment, we consider a non-convex loss function, and $k$ is set to 2 for the Cressie-Read family. We will show that: 1) to optimize the same dual objective function, our proposed algorithm converges faster than the general Proximal Gradient Descent (PGD) algorithm (Ghadimi, Lan, and Zhang 2016); 2) for the worst classes, the performance of the proposed algorithm for the constrained DRO problem outperforms or is close to that of the penalized DRO, and both outperform the baseline.

Tasks. We conduct experiments on the imbalanced CIFAR-10 dataset, following the experimental setting in (Jin et al. 2021; Chou et al. 2020). The original CIFAR-10 training dataset consists of 10 classes, each with 5000 images. We randomly select training samples from the original set for each class with the following sampling ratios: {0.804, 0.543, 0.997, 0.593, 0.390, 0.285, 0.959, 0.806, 0.967, 0.660}. We keep the test dataset unchanged.
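One way to build this imbalanced training split is sketched below (our own illustration; the paper does not specify the subsampling routine, and `labels` is assumed to hold the CIFAR-10 training labels as integers 0-9):

```python
import numpy as np

# Per-class keep ratios used above.
RATIOS = [0.804, 0.543, 0.997, 0.593, 0.390, 0.285, 0.959, 0.806, 0.967, 0.660]

def imbalanced_indices(labels, ratios=RATIOS, seed=0):
    """Randomly keep ratios[c] of the training samples of each class c."""
    rng = np.random.default_rng(seed)
    kept = []
    for c, r in enumerate(ratios):
        idx = np.flatnonzero(np.asarray(labels) == c)
        kept.append(rng.choice(idx, size=int(r * len(idx)), replace=False))
    return np.concatenate(kept)
```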
Models. We learn the standard AlexNet model of (Krizhevsky, Sutskever, and Hinton 2012) with the standard cross-entropy (CE) loss. For the comparison of convergence rates, we optimize the same dual objective with the PGD algorithm of (Ghadimi, Lan, and Zhang 2016). To compare robustness, we optimize the ERM objective via vanilla SGD. In addition, we propose an algorithm PAN-DRO, which fixes $\lambda$ and only optimizes $\eta$ and the neural network; it thus obtains the solution of the penalized DRO problem.

Training Details. We set $\lambda_1 = 1$, $\eta_1 = 0$, $\lambda_0 = 0.1$, $-\bar\eta = -10$, and the upper bounds $\bar\lambda = 10$, $B = 10$. To achieve a faster optimization rate, we set the learning rate $\alpha = 0.01$ for the first 40 epochs and $\alpha = 0.001$ afterwards. The mini-batch size is 128. All results are moving-averaged with a window of size 5. The simulations are repeated 4 times.

Results. In Figure 1, we plot the value of the CE loss for the different algorithms through the training process. It can be seen that, to optimize the same dual objective function with the same learning rate, the PGD algorithm converges slower than our proposed DRO algorithms, which matches our theoretical results. Moreover, compared with ERM, the DRO algorithms have higher training losses but lower test losses, which demonstrates that they are robust. We also provide the test accuracy of the trained models in Table 1. For classes 3, 4 and 5, the accuracies are the lowest due to the limited samples. For these classes, the performance of our SFK-DRO algorithm for the constrained DRO is better than or close to that of PAN-DRO for the penalized DRO, and both DRO algorithms outperform the vanilla ERM algorithm.

Figure 1: Training curve of the classification task.

7 Conclusion

In this paper, we developed the first stochastic algorithm for large-scale non-convex stochastic constrained DRO problems in the literature with theoretical convergence and complexity guarantees. We developed a smooth and Lipschitz approximation of the original problem with bounded approximation error. Compared with existing algorithms, the proposed algorithm has an improved convergence rate. The computational complexity at each iteration is independent of the size of the training dataset, and thus our algorithm is applicable to large-scale applications. Our results hold for a general family of Cressie-Read divergences.

Acknowledgments

The work of Q. Zhang and S. Zou is supported by the National Science Foundation under Grant CCF-2106560. Y. Zhou's work is supported by the National Science Foundation under Grants CCF-2106216, DMS-2134223 and ECCS-2237830 (CAREER). L. Shen's work is supported by the NSF under Grant DMS-2208385. This material is based upon work supported under the AI Research Institutes program by the National Science Foundation and the Institute of Education Sciences, U.S. Department of Education, through Award No. 2229873 - National AI Institute for Exceptional Education.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, the Institute of Education Sciences, or the U.S. Department of Education.

References

Ali, S. M.; and Silvey, S. D. 1966. A general class of coefficients of divergence of one distribution from another. Journal of the Royal Statistical Society: Series B (Methodological), 28(1): 131–142.
Ben-Tal, A.; Den Hertog, D.; De Waegenaere, A.; Melenberg, B.; and Rennen, G. 2013. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2): 341–357.
Blitzer, J.; McDonald, R.; and Pereira, F. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, 120–128.
Chou, H.-P.; Chang, S.-C.; Pan, J.-Y.; Wei, W.; and Juan, D.-C. 2020. Remix: rebalanced mixup. In Proceedings of Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16, 95–110. Springer.
Cressie, N.; and Read, T. R. 1984. Multinomial goodness-of-fit tests. Journal of the Royal Statistical Society Series B: Statistical Methodology, 46(3): 440–464.
Csiszár, I. 1967. On information-type measure of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar., 2: 299–318.
Curi, S.; Levy, K. Y.; Jegelka, S.; and Krause, A. 2020. Adaptive sampling for stochastic risk-averse learning. In Proceedings of Advances in Neural Information Processing Systems, volume 33, 1036–1047.
Daume III, H.; and Marcu, D. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26: 101–126.
Duchi, J.; and Namkoong, H. 2018. Learning models with uniform performance via distributionally robust optimization. arXiv preprint arXiv:1810.08750.
Duchi, J. C.; Glynn, P. W.; and Namkoong, H. 2021. Statistics of robust optimization: A generalized empirical likelihood approach. Mathematics of Operations Research, 46(3): 946–969.
Frank, M.; Wolfe, P.; et al. 1956. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2): 95–110.
Ghadimi, S.; Lan, G.; and Zhang, H. 2016. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1-2): 267–305.
Ghosh, S.; Squillante, M.; and Wollega, E. 2018. Efficient stochastic gradient descent for distributionally robust learning. arXiv preprint arXiv:1805.08728.
Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Gower, R. M.; Loizou, N.; Qian, X.; Sailanbayev, A.; Shulgin, E.; and Richtárik, P. 2019. SGD: General analysis and improved rates. In Proceedings of International Conference on Machine Learning, 5200–5209. PMLR.
Grother, P. J.; Grother, P. J.; Phillips, P. J.; and Quinn, G. W. 2011. Report on the evaluation of 2D still-image face recognition algorithms. US Department of Commerce, National Institute of Standards and Technology.
Hashimoto, T.; Srivastava, M.; Namkoong, H.; and Liang, P. 2018. Fairness without demographics in repeated loss minimization. In Proceedings of International Conference on Machine Learning, 1929–1938. PMLR.
Hovy, D.; and Søgaard, A. 2015. Tagging performance correlates with author age. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (volume 2: Short Papers), 483–488.
Hu, Z.; and Hong, L. J. 2013. Kullback-Leibler divergence constrained distributionally robust optimization. Available at Optimization Online, 1(2): 9.
Jaggi, M. 2013. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of International Conference on Machine Learning, 427–435. PMLR.
Jin, J.; Zhang, B.; Wang, H.; and Wang, L. 2021. Non-convex distributionally robust optimization: Non-asymptotic analysis. In Proceedings of Advances in Neural Information Processing Systems, volume 34, 2771–2782.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Proceedings of Advances in Neural Information Processing Systems, volume 25.
Lacoste-Julien, S. 2016. Convergence rate of Frank-Wolfe for non-convex objectives. arXiv preprint arXiv:1607.00345.
Levy, D.; Carmon, Y.; Duchi, J. C.; and Sidford, A. 2020. Large-scale methods for distributionally robust optimization. In Proceedings of Advances in Neural Information Processing Systems, volume 33, 8847–8860.
Lin, T.; Jin, C.; and Jordan, M. 2020. On gradient descent ascent for nonconvex-concave minimax problems. In Proceedings of International Conference on Machine Learning, 6083–6093. PMLR.
Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
Moulines, E.; and Bach, F. 2011. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. Advances in Neural Information Processing Systems, 24.
Namkoong, H.; and Duchi, J. C. 2016. Stochastic gradient methods for distributionally robust optimization with f-divergences. In Proceedings of Advances in Neural Information Processing Systems, volume 29.
Nesterov, Y. 2003. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media.
Qi, Q.; Guo, Z.; Xu, Y.; Jin, R.; and Yang, T. 2021. An online method for a class of distributionally robust optimization with non-convex objectives. In Proceedings of Advances in Neural Information Processing Systems, volume 34, 10067–10080.
Qi, Q.; Lyu, J.; Bai, E. W.; Yang, T.; et al. 2022. Stochastic constrained DRO with a complexity independent of sample size. arXiv preprint arXiv:2210.05740.
Rafique, H.; Liu, M.; Lin, Q.; and Yang, T. 2022. Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning. Optimization Methods and Software, 37(3): 1087–1121.
Rahimian, H.; and Mehrotra, S. 2019. Distributionally robust optimization: A review. arXiv preprint arXiv:1908.05659.
Robbins, H.; and Monro, S. 1951. A stochastic approximation method. The Annals of Mathematical Statistics, 400–407.
Rockafellar, R. T.; Uryasev, S.; et al. 2000. Optimization of conditional value-at-risk. Journal of Risk, 2: 21–42.
Rockafellar, R. T.; and Wets, R. J. 1998. Variational Analysis. Springer.
Shapiro, A. 2017. Distributionally robust stochastic programming. SIAM Journal on Optimization, 27(4): 2258–2275.
Sinha, A.; Namkoong, H.; Volpi, R.; and Duchi, J. 2017. Certifying some distributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571.
Soma, T.; and Yoshida, Y. 2020. Statistical learning with conditional value at risk. arXiv preprint arXiv:2002.05826.
Tamar, A.; Glassner, Y.; and Mannor, S. 2015. Optimizing the CVaR via sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.
Van Erven, T.; and Harremos, P. 2014. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7): 3797–3820.
Wang, J.; Gao, R.; and Xie, Y. 2021. Sinkhorn distributionally robust optimization. arXiv preprint arXiv:2109.11926.
Xu, Z.; Zhang, H.; Xu, Y.; and Lan, G. 2023. A unified single-loop alternating gradient projection algorithm for nonconvex–concave and convex–nonconcave minimax problems. Mathematical Programming, 1–72.
Multimodal Graph Neural Architecture Search under Distribution Shifts

Jie Cai1, Xin Wang1,2*, Haoyang Li1, Ziwei Zhang1, Wenwu Zhu1,2*
1Department of Computer Science and Technology, Tsinghua University
2Beijing National Research Center for Information Science and Technology, Tsinghua University
[email protected], {xin wang, zwzhang, wwzhu}@tsinghua.edu.cn, [email protected]
*Corresponding authors

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Multimodal graph neural architecture search (MGNAS) has shown great success in automatically designing the optimal multimodal graph neural network (MGNN) architecture by leveraging multimodal representation, crossmodal information and graph structure in one unified framework. However, existing MGNAS fails to handle distribution shifts that naturally exist in multimodal graph data, since the searched architectures inevitably capture spurious statistical correlations under distribution shifts. To solve this problem, we propose a novel Out-of-distribution Generalized Multimodal Graph Neural Architecture Search (OMG-NAS) method which optimizes the MGNN architecture with respect to its performance on decorrelated OOD data. Specifically, we propose a multimodal graph representation decorrelation strategy, which encourages the searched MGNN model to output representations that eliminate spurious correlations through iteratively optimizing the feature weights and controller. In addition, we propose a global sample weight estimator that facilitates the sharing of optimal sample weights learned from existing architectures. This design promotes the effective estimation of the sample weights for candidate MGNN architectures to generate decorrelated multimodal graph representations, concentrating more on the truly predictive relations between invariant features and ground-truth labels. Extensive experiments on real-world multimodal graph datasets demonstrate the superiority of our proposed method over SOTA baselines.

1 Introduction

Multimodal graph data is ubiquitous in various real-world applications such as social media (Li et al. 2019b; Tao et al. 2020; Li et al. 2021a), biomedicine (Wen et al. 2022), and health (Gao et al. 2021a). Accordingly, the development of a capable multimodal graph neural architecture search (MGNAS) algorithm is of significant importance for effectively processing this complex data type across various tasks and data distributions. MG-NAS aims to automatically design multimodal graph neural networks (MGNNs) and obtain more powerful models, and it has achieved great success in various multimodal graph tasks (Cai et al. 2022) under the identically distributed (ID) assumption, where the training and testing multimodal graphs are sampled from the same distribution. Different MGNN architectures have been shown to perform differently on different tasks and data distributions because of their diverse message-passing mechanisms (Li et al. 2022b) and crossmodal interaction modes.

However, distribution shifts in multimodal graph data, which indicate changes in the statistical properties of data across domains or over time, are very common in real-world applications. Distribution shifts can arise from various factors, such as modifications in real-world conditions, variations in data collection processes, or the presence of confounding variables.
This is especially true for multimodal graph data, where each graph or node is characterized by multiple modalities, each of which may be impacted by distinct factors; moreover, these factors often interact with each other in intricate and complex ways. Furthermore, distribution shifts in multimodal graph data occur not only in single-modal or multimodal interactions within a node but also in the overall distribution of the graph. As shown in Figure 1, the distribution of multimodal reviews on Amazon varies across different commodity categories, such as dress, trousers and shoes.

Existing MGNAS or NAS approaches are based on the ID assumption. When there is a distribution shift, each MGNN is trained on the training dataset and evaluated on the validation dataset, and the best-performing model on the validation dataset is then selected for the new data distribution. However, the selected MGNN is prone to overfitting the training distribution, resulting in the over-exploitation of spurious features and the disregard of invariant features. Consequently, the best MGNN architecture derived by MGNAS may be sub-optimal in out-of-distribution (OOD) scenarios due to the mistakenly learned features and the inadequate evaluation strategy. The problem of distribution shifts can be particularly challenging, and developing effective methods to handle distribution shifts in multimodal graph data is an essential research topic.

The first challenge in addressing the OOD generalization problem of MGNAS is how to model the distribution shifts across graph structures, modalities, and their complex interactions. Given the complex nature of feature interactions in multimodal graphs, distinguishing between invariant and variant features can pose significant challenges. The second challenge lies in automating the search for the best model in multimodal graph OOD situations. It will be beneficial if we leverage the diverse characteristics of searched models to enhance the OOD generalization capabilities. It is also important to minimize the inconsistent performance between the validation and the training dataset.

Figure 1: Distribution shifts in the Amazon dataset. Different node colors represent different node distributions, and variations in graph density correspond to different graph structure distributions. In Figure (a), the MGNNs are trained on Dress, Trousers and Baby but tested on Shoes, leading to distribution shifts in both multimodal node features and graph structures. In Figure (b), the MGNNs are trained with red and black dresses but tested with blue dresses, i.e., under a single-modal (color) distribution shift.

As an effort towards enhancing the generalization capability of MG-NAS, we propose a method named Out-of-distribution Generalized Multimodal Graph Neural Architecture Search (OMG-NAS) that searches for architectures with both maximal predictive performance and maximal generalization ability. The overview of the proposed OMG-NAS is illustrated in Figure 2. Firstly, to address the different distribution shift patterns of different modalities and their complex interactions at their core, we disentangle the multimodal features into single-modal contributions.
Notably, we observe that without this disentanglement, the performance of MGNNs is significantly compromised. Secondly, we employ sample reweighting and random Fourier features (RFF) to decorrelate multimodal graph features and alleviate the impact of intricate non-linear dependencies during the learning process. The global multimodal sample weight estimator (GMSWE), the sampled MGNN model, and the controller are optimized iteratively in an end-to-end manner. Finally, to fully leverage the diverse models explored by OMG-NAS, we train and maintain global weights across different architectures. The global weights are model-agnostic and allow for efficient warm-starting of new architectures, leading to a more stable search process.

Our contributions are summarized as follows:
• To the best of our knowledge, we are the first to formulate the problem of OOD graph neural architecture search with multiple modalities. We introduce three novel multimodal graph-OOD datasets to evaluate the generalization ability of the proposed method.
• We propose an OMG-NAS method that automatically searches for the MGNN model with the best OOD generalization ability by using a novel GMSWE module to optimize the global weights that decorrelate multimodal invariant and variant features.
• We conduct extensive experiments that demonstrate the superiority of OMG-NAS over previous SOTA methods in both graph classification and node classification tasks.

2 Related Works

Multimodal Graph Learning. Given the success of graph learning in information aggregation and transmission (Li et al. 2019b), some scholars focus on multimodal graph learning to effectively utilize the dependencies and relationships across multiple modalities in information dissemination. Multimodal graph neural networks (MGNNs) aim to represent multimodal graph-structured data in an end-to-end manner (Peng et al. 2017; Wu et al. 2020), taking into account both multimodal information aggregation and message passing. MGNNs also provide an expressive and flexible strategy for leveraging interdependencies in multimodal datasets (Gao et al. 2020a). Although these multimodal graph learning methods have achieved great success, they are designed for ID conditions; that is, they are not directly applicable in OOD scenarios due to their lack of generalization ability.

Out-of-Distribution Generalization. In real-world scenarios, the distribution of training graphs may differ from that of testing graphs, leading to unstable inference across different testing environments (Zhu et al. 2021; Ding et al. 2021). To tackle the OOD problem on graphs, researchers have proposed various methods, including disentanglement-based graph models (Fan et al. 2022; Li et al. 2022c, 2021b), causality-based graph models (Li et al. 2022a; Chen et al. 2022) and graph invariant learning (Li et al. 2023, 2022d; Wu et al. 2022a). Sample reweighting is an effective tool for addressing distribution shift problems: (Shen et al. 2020) introduce an online reweighting method that utilizes a set of unbiased clean validation examples for sample reweighting.
Similarly, (Fang et al. 2020) propose to automatically learn an explicit loss-weight function parameterized by an MLP.

Graph Neural Architecture Search. Graph neural architecture search (GraphNAS) aims to automatically search for the most effective model architecture without human intervention (Qin et al. 2022b; Zhang et al. 2022b, 2023). Existing GraphNAS methods can be categorized into reinforcement learning-based methods (Gao et al. 2021b; Zhou et al. 2022), evolutionary algorithms (Nunes and Pappa 2020; Shi et al. 2022), and differentiable methods (Zhao et al. 2020a,b). In recent years, scholars have also explored how NAS performs under distribution shifts (Bai et al. 2021; Qin et al. 2022a). However, significant challenges remain when dealing with multimodal graph data. An aspect worthy of attention is that NAS methods naturally generate a diverse set of models, which has proven effective for OOD settings (Pagliardini et al. 2022; Teney et al. 2022; Rame et al. 2022).

Figure 2: Overview of the OMG-NAS method. After ① data preprocessing, we ② sample an MGNN architecture using the controller, then we ③ optimize the MGNN model with Multimodal Graph Feature Decorrelation (MGFD) and ④ get the optimal global multimodal sample weights using the Global Multimodal Sample Weight Estimator across Architectures (GMSWE). The performance of the optimized MGNN acts as the ⑤ reward that guides the training of the controller in the next optimization cycle.

3 Methodology

3.1 Preliminaries

Notations. Consider a multimodal graph represented as $G = (U, E)$, where $U = \{u_1, \cdots, u_N\}$ is the vertex set of size $N$ and $E = \{\langle u_i, u_j\rangle \mid 1 \leq i, j \leq N\}$ is the edge set with $|E|$ edges. Each node $u_i \in U$ corresponds to a multimodal node feature $X_i = [X^t_i, X^v_i]$, comprising the textual feature $X^t_i$ and the visual feature $X^v_i$. In the textual modality, each node can represent words, sentences, or paragraphs, while in the visual modality, each node can represent either a part of a picture or a whole picture. For the node classification task, the dataset contains graphs with $N$ nodes $U = \{(u_i, y_i) \mid i = 1, \cdots, N\}$, where $y_i$ is the label of node $u_i$. For the graph classification task, the dataset contains a set of graphs $D = \{(G_j, y_j) \mid j = 1, \cdots, M\}$, where $y_j$ is the label of graph $G_j$. This paper focuses on the node classification and graph classification tasks; however, our method can easily be extended to other multimodal graph learning tasks. MG-NAS aims to find the best architecture $A^* \in \mathcal{A}$ that maximizes the prediction accuracy given the pre-defined search space $\mathcal{A}$.

OOD problem of MG-NAS. Given a training multimodal graph $G$ from the distribution $P_{tr}(G, Y)$, MG-NAS needs to handle a testing multimodal graph from a new distribution $P_{te}(G, Y)$, where $P_{tr}(G, Y) \neq P_{te}(G, Y)$ and $P_{te}(G, Y)$ is unknown during the training process. In this scenario, MG-NAS faces the issue of over-fitting the training data, leading to sub-optimal MGNN architectures.
The goal of this paper is to address the issue of overfitting the training data and to leverage the diverse models generated by MG-NAS to develop a reweighting approach that effectively decorrelates the multimodal graph information acquired from the training set. Given an input multimodal graph $G$ for the node classification task (or a set of multimodal graphs $\{G\}$ for the graph classification task), we aim to optimize the following objective:
$$(A^*, W^*, \omega^*) = \arg\min_{(A, W, \omega) \in (\mathcal{A}, \mathcal{W}, \Omega)} \mathcal{L}_{train}\big(f_{cls}(\Phi_{A,W}(G; \omega))\big), \quad (1)$$
where $A^*$ and $W^*$ are the best MGNN architecture and the optimal trainable model parameters, respectively, $\omega^*$ is the best sample weight for multimodal graph feature decorrelation, $\Phi_{A,W}(G; \omega)$ denotes the MGNN encoder under weights $\omega$, $f_{cls}$ represents the classifier, and $\mathcal{L}_{train}$ represents the loss function. We use two GNN encoders $\phi^t(\theta^t, \cdot)$ and $\phi^v(\theta^v, \cdot)$ to extract textual and visual features. We denote $Z^t_n = \phi^t(\theta^t, u^t_n) = [Z^t_{n1}, \cdots, Z^t_{nm_t}] \in \mathbb{R}^{N \times m_t}$ and $Z^v_n = \phi^v(\theta^v, u^v_n) = [Z^v_{n1}, \cdots, Z^v_{nm_v}] \in \mathbb{R}^{N \times m_v}$ as the uni-modal representations of node $u_n$.

3.2 Multimodal Graph Feature Decorrelation

We aim to identify the optimal weights of the training samples so as to eliminate the dependency between features and remove the reliance on spurious features in the multimodal graph representation space. However, simply weighting each sample fails to achieve OOD generalization in the multimodal graph OOD setting. To tackle this problem, we propose a multimodal graph feature decorrelation (MGFD) method. MGFD separates the features into unimodal components and multimodal interactions, and then utilizes the approximate predictions generated by the unimodal sub-networks to decorrelate features within each modality. We first introduce how to obtain the textual intra-modal weights $\omega^t$; the process is similar for the visual intra-modal weights $\omega^v$. We eliminate dependence between sub-features in the single-modality representation space by measuring their relevance on the sample data. We adopt the squared Frobenius norm of the partial cross-covariance matrix $\|\hat\Sigma_{Z^t_{*i}, Z^t_{*j}}\|_F^2$ as a way of quantifying the degree of independence, inspired by (Zhang et al. 2021; Li et al. 2022a):
$$\hat\Sigma_{Z^t_{*i}, Z^t_{*j}} = \frac{1}{N-1}\sum_{n=1}^{N}\Big[\big(h(Z^t_{ni}) - \bar{h}(Z^t_{*i})\big)^\top \cdot \big(g(Z^t_{nj}) - \bar{g}(Z^t_{*j})\big)\Big], \quad (2)$$
where $Z^t_{ni}$ and $Z^t_{nj}$ denote the values of the textual random variables $Z^t_{*i}$ and $Z^t_{*j}$ given the input node $u_n$, and $N$ is the number of training samples. $h(\cdot)$ and $g(\cdot)$ are random Fourier feature (RFF) mapping functions, and we select $Q$ functions from the RFF function space $\mathcal{H}_{RFF}$. $\bar{h}(Z^t_{*i})$ and $\bar{g}(Z^t_{*j})$ are the mean values of the vectors $h(Z^t_{*i}) = [h(Z^t_{1i}), \cdots, h(Z^t_{Ni})]^\top$ and $g(Z^t_{*j}) = [g(Z^t_{1j}), \cdots, g(Z^t_{Nj})]^\top$. Given the textual intra-modal weights, the reweighted partial cross-covariance matrix can be calculated as
$$\hat\Sigma^{\omega^t}_{Z^t_{*i}, Z^t_{*j}} = \frac{1}{N-1}\sum_{n=1}^{N}\Big[\big(\omega^t_n h(Z^t_{ni}) - \bar{h}_{\omega^t}(Z^t_{*i})\big)^\top \cdot \big(\omega^t_n g(Z^t_{nj}) - \bar{g}_{\omega^t}(Z^t_{*j})\big)\Big], \quad (3)$$
where $\bar{h}_{\omega^t}(Z^t_{*i})$ and $\bar{g}_{\omega^t}(Z^t_{*j})$ are the weighted averages of the vectors $h(Z^t_{*i})$ and $g(Z^t_{*j})$ with weights $\omega^t = [\omega^t_1, \cdots, \omega^t_N]^\top$. To eliminate the dependence between representations, we optimize $\omega^t$ by minimizing the squared Frobenius norm of the partial cross-covariance matrix:
$$\omega^{t*} = \arg\min_{\omega^t \in \Delta} \sum_{1 \leq i < j \leq m^t_Z} \|\hat\Sigma^{\omega^t}_{Z^t_{*i}, Z^t_{*j}}\|_F^2, \quad (4)$$
where $\Delta = \{\omega^t \in \mathbb{R}^N_+ \mid \sum_{n=1}^N \omega^t_n = N\}$ and $m^t_Z$ is the dimension of $Z^t$.
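A minimal PyTorch sketch of Eqs. (2)-(4) for one modality is given below. This is our own illustration: the specific RFF construction and the softmax reparameterization of the constraint set $\Delta$ are implementation choices that the paper does not fix.

```python
import torch

def rff(z, num_f=5, sigma=1.0, seed=0):
    """Random Fourier features cos(w*z + b) of one feature column z, shape (N,)."""
    g = torch.Generator().manual_seed(seed)
    w = torch.randn(num_f, generator=g) / sigma
    b = 2 * torch.pi * torch.rand(num_f, generator=g)
    return torch.cos(z[:, None] * w + b)                  # (N, num_f)

def decorrelation_penalty(Z, weights):
    """Sum over feature pairs (i < j) of the squared Frobenius norm of the
    weighted partial cross-covariance matrix, cf. Eqs. (3)-(4)."""
    N, m = Z.shape
    feats = [rff(Z[:, i], seed=i) for i in range(m)]      # fixed RFF mapping per dim
    penalty = Z.new_zeros(())
    for i in range(m):
        for j in range(i + 1, m):
            h = weights[:, None] * feats[i]               # weighted RFF of dim i
            g = weights[:, None] * feats[j]               # weighted RFF of dim j
            cov = (h - h.mean(0)).T @ (g - g.mean(0)) / (N - 1)
            penalty = penalty + (cov ** 2).sum()
    return penalty

def learn_modal_weights(Z, epochs=30, lr=0.05):
    """Optimize weights on Delta = {w > 0, sum(w) = N} via softmax reparameterization."""
    N = Z.shape[0]
    alpha = torch.zeros(N, requires_grad=True)
    opt = torch.optim.Adam([alpha], lr=lr)
    for _ in range(epochs):                               # 30 weight epochs, as in Sec. 4.2
        w = N * torch.softmax(alpha, dim=0)               # w > 0 and w.sum() == N
        opt.zero_grad()
        decorrelation_penalty(Z.detach(), w).backward()
        opt.step()
    return (N * torch.softmax(alpha, dim=0)).detach()
```

Calling `learn_modal_weights` on the textual representations $Z^t$ (and analogously on $Z^v$) produces the intra-modal weights used in the MGFD loss below.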
Afterwards, we iteratively optimize the multimodal feature weights $\omega = [\omega^t, \omega^v]$, the MGNN encoder $\Phi = \{\phi^t, \phi^v, \phi^f\}$ and the classifier $f_{cls}$ by minimizing the following MGFD loss function $\mathcal{L}_w$:
$$\mathcal{L}_w = \sum_{n=1}^{N} \omega^t_n \mathcal{L}\big(f_{cls}(\phi^t(x^t_n)), y_n\big) + \omega^v_n \mathcal{L}\big(f_{cls}(\phi^v(x^v_n)), y_n\big), \quad (5)$$
where $\mathcal{L}$ denotes the cross-entropy loss and $\omega^t, \omega^v$ are the textual and visual intra-modal weights of length $N$, indicating the importance of the unimodal training features. The overall loss function consists of two terms: the first term is the cross-entropy loss and the second term is the MGFD loss:
$$\mathcal{L}_{train} = -\frac{1}{N}\sum_{n=1}^{N}\big(y_n \log \hat{y}_n + (1 - y_n)\log(1 - \hat{y}_n)\big) + \mathcal{L}_w. \quad (6)$$
When one of the modalities is more susceptible to spurious features, the MGNN tends to learn the spurious features of this modality over the other modality. To learn fully from the two GNN encoders of the MGNN, we also incorporate the OGM-GE method (Peng et al. 2022) as a plug-in module to prevent over-reliance on the spurious features of a single modality.

3.3 Global Multimodal Sample Weight Estimator across Architectures

In Equation 4, our objective is to learn unique weights for each sample's unimodal contribution. However, different MGNN models offer diverse multimodal graph feature spaces for learning these feature weights. To address this issue, we propose a global multimodal sample weight estimator (GMSWE). During the training of the OMG-NAS controller, we employ a saving-reloading-finetuning method. We observe that the learned weights have limited dependence on the sampled model architectures in the architecture search phase, because they are associated with the distribution of the input multimodal graph features. Consequently, the learned weights can be effectively transferred and generalized across architectures. Our findings yield two important insights. Firstly, the optimal weights learned from one MGNN architecture can be used as a less biased multimodal graph feature reweighting scheme for another architecture. Secondly, the learning speed of the optimal weights varies across MGNN architectures, with some models struggling to effectively learn decorrelated features. Based on these insights, we adopt a global weight across different model architectures. Specifically, after the training of an MGNN architecture, we retain the best global weights and use them as a warm start for the next MGNN architecture. For the $i$-th sampled architecture:
$$(A^{(i)}, W^{(i)}, \omega^{(i)}) = \arg\min_{A, W, \omega} \mathcal{L}_{train} \,\big|\, \omega^*, \qquad \omega^* = \omega^{(i)},\ \mathcal{L}^*_{val} = \mathcal{L}^{(i)}_{val} \text{ if } \mathcal{L}^{(i)}_{val} < \mathcal{L}^*_{val}, \quad (7)$$
where $A^{(i)}$, $W^{(i)}$ and $\omega^{(i)}$ denote the best architecture, optimized model parameters and sample weights at the $i$-th step, $\mathcal{L}^{(i)}_{val}$ is the best validation loss at the $i$-th step, and $\omega^*$ and $\mathcal{L}^*_{val}$ are the best global sample weights and the best global validation loss prior to the $i$-th step.

3.4 Search Algorithm

In this work, we employ a reinforcement learning-based search algorithm, a widely adopted strategy in many popular NAS algorithms. We utilize a recurrent neural network (RNN) as the controller to generate MGNN architectures. Once an architecture is generated, we construct and train an MGNN model based on this architecture and record its highest accuracy on the validation dataset. Subsequently, we optimize the parameters of the RNN controller, enabling it to generate better architectures over time.
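Putting Sections 3.3 and 3.4 together, the outer search loop can be sketched as follows (our own rendering; `controller`, `build_mgnn`, and `train_mgnn` are hypothetical placeholders for the components described above, not the authors' API):

```python
import copy

def omg_nas_search(controller, build_mgnn, train_mgnn, steps):
    """Sample-train-reward loop with GMSWE warm-starting (Equation 7)."""
    best_w, best_val_loss = None, float("inf")
    for _ in range(steps):
        arch = controller.sample()                 # RNN controller proposes an MGNN
        model = build_mgnn(arch)
        # GMSWE: warm-start the sample weights from the best global weights so far.
        init_w = copy.deepcopy(best_w)
        weights, val_loss, val_acc = train_mgnn(model, init_weights=init_w)
        if val_loss < best_val_loss:               # keep the global optimum
            best_val_loss, best_w = val_loss, copy.deepcopy(weights)
        controller.update(reward=val_acc)          # REINFORCE-style controller update
    return best_w
```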
Train the controller. Let $P(A; \theta)$ denote the distribution of architectures $A$ parameterized by the controller $\theta$. The goal is to maximize the expected accuracy $\mathbb{E}_{P(A;\theta)}[R(A(W^*, G))]$ while minimizing the training loss $\mathcal{L}_{train}(A(W, G))$. This process can be formulated as the following three-level optimization problem:
$$\max_{A \in \mathcal{A}} \mathbb{E}[R_{val}(A(W^*, G); \omega^*)],$$
$$\text{s.t.}\quad W^* = \arg\min_W \mathcal{L}_{train}(A(W, G); \omega^*),$$
$$\omega^*_t = \arg\min_{\omega^t} \sum_{1 \leq i < j \leq m^t_Z} \|\hat\Sigma^{\omega^t}_{Z^t_{*i}, Z^t_{*j}}\|_F^2, \qquad \omega^*_v = \arg\min_{\omega^v} \sum_{1 \leq i < j \leq m^v_Z} \|\hat\Sigma^{\omega^v}_{Z^v_{*i}, Z^v_{*j}}\|_F^2, \quad (8)$$
where $\mathcal{A}$ represents the search space of neural architectures and $W^*$ represents the optimal trainable parameters for architecture $A$. $R_{val}$ measures the performance (e.g., accuracy) of architecture $A$ on the validation dataset, which is used as the reward in reinforcement learning.

4 Experiment

In this section, we perform various experiments to verify the effectiveness of the proposed OMG-NAS method.

4.1 Datasets

We evaluate OMG-NAS on three challenging real-world multimodal graph OOD datasets: the Tencent dataset, the Amazon review dataset and the Recipe dataset. More details about the datasets are provided in the Appendix.

Tencent dataset: We extract the article-spreading network from Tencent WeChat official accounts, where each node represents an article and has two modalities: a visual head image and a textual title. We establish a connection between two articles if at least one user has viewed both of them. The objective of this task is to detect low-quality articles across different network domains.

Amazon review dataset: We extract both user-generated reviews and product images from the Amazon e-commerce platform (https://www.Amazon.com). We classify ratings equal to or greater than 4 as positive feedback and ratings less than 2 as negative feedback. Each review in the graph has two modalities and is connected to other reviews based on whether they belong to the same or similar products. The task is to categorize each review as either positive or negative. For the Tencent and Amazon datasets, we use the open-source implementations (Wolf et al. 2020) of pre-trained Bert (Devlin et al. 2018) to extract the textual features and a pre-trained Vision Transformer (ViT) (Dosovitskiy et al. 2020) to extract the visual features.

Recipe dataset: We collect recipe data with relevant text and images from 3 popular cooking websites (https://www.simplyrecipes.com/, https://www.allrecipes.com/, https://www.thespruceeats.com/). The extracted text includes titles, lists of ingredients, and cooking instructions, while the images showcase raw materials, manufacturing processes, and finished products. We partition each image into 16×16 small blocks as visual nodes and divide the text into words as textual nodes (Huang et al. 2019; Han et al. 2022). We aim to classify each recipe into the corresponding food label, such as cakes and beverages.

4.2 Experimental Settings

Evaluation Tasks and Metrics. We use the Tencent dataset and the Amazon review dataset for the node classification task, and the Recipe dataset for the graph classification task. The evaluation metric is the classification accuracy on the test datasets.
OOD Settings. For each task, we perform experiments in both Multi-OOD and Single-OOD settings, since both kinds of shift are common in the real world. Following existing works that focus on text-OOD (Yang et al. 2022; Wang et al. 2021a), image-OOD (Wang et al. 2021b; Zhang et al. 2022a), and multimodal-OOD scenarios (Sun et al. 2022), we identify two situations where OOD can occur:
1. Multimodal OOD (Multi-OOD): the training dataset and testing dataset come from different domains, causing distribution shifts in both modalities. For the Amazon review dataset, different domains correspond to different types of products. For the Recipe dataset, domains are determined by the subcategories of food; for instance, one can train a model on chocolate cakes and evaluate its performance on fruit cakes.
2. Singlemodal OOD (Single-OOD): one modality of the test data exhibits a different distribution than that of the training data. For instance, distribution shifts in the visual modality can arise from changes in color, background, or shape, whereas in the textual modality they can be attributed to variations in words, named entities, or sentiments.

Baselines. We compare our model with baselines from three different categories:
• Manually designed MGNNs: we include the MGNNs in our search space as baselines, i.e., GCN, GAT and MGAT (Tao et al. 2020).
• OOD generalization methods for GNNs: we consider Mixup (Wang et al. 2021c), OOD-GNN (Li et al. 2022a), EERM (Wu et al. 2022a), DIR (Wu et al. 2022b), and CIGA (Chen et al. 2022), along with the manually designed MGNNs, as baselines.
• Neural Architecture Search: we consider three baselines, Random Search and GraphNAS (Gao et al. 2019) with a single-modal search space using similar GNN cells, and MG-NAS (Cai et al. 2022) with the MGNN search space designed in this paper.

Implementation Details. For the Tencent dataset, we set the number of epochs to 200, the learning rate to 0.001, and the dimensions of the representations and hidden layers to 768 for both the text and visual modalities. For the Amazon dataset, we set the number of epochs to 100, choose the learning rate from {0.001, 0.005, 0.01}, and set the dimensions of the representations and hidden layers to 128 for both modalities. For the Recipe dataset, the number of epochs is 50, the batch size is selected from {8, 16, 64}, and the learning rate is chosen from {0.001, 0.005, 0.01}; the dimensions of the representations and hidden layers are 200 for the text modality and 128 for the visual modality. The number of epochs for learning weights in MGFD is set to 30 for all datasets. We utilize a two-layer MLP classifier. We report mean values with standard deviations over 5 repeated runs.

Methods           Tencent     Amazon review
                  Multi-OOD   Multi-OOD-S  Multi-OOD-B  Multi-OOD-D  Multi-OOD-T  Single-OOD
GCN               55.88±0.96  79.42±7.68   79.79±9.01   55.82±2.01   60.90±1.83   57.87±4.11
GAT               55.89±5.50  78.09±2.79   80.42±3.78   55.60±3.44   56.16±2.08   59.38±2.38
MGAT              59.83±4.60  67.83±9.17   72.71±7.50   48.00±4.06   53.60±7.01   61.35±3.16
Mixup             58.08±1.43  57.82±0.56   75.00±0.42   64.50±0.54   70.55±2.72   76.12±1.31
SRGNN             46.36±0.01  47.21±0.15   59.38±0.83   49.62±2.59   60.35±0.49   68.45±1.29
EERM              53.73±0.42  60.74±0.16   55.54±0.15   56.42±0.10   41.83±0.01   64.93±0.41
OOD-GNN + GCN     61.00±1.25  65.96±8.95   74.82±3.75   58.92±3.15   52.80±9.20   59.02±3.17
OOD-GNN + GAT     56.49±3.26  73.14±5.45   78.51±5.81   56.60±5.57   50.19±7.61   58.13±1.81
OOD-GNN + MGAT    60.06±7.99  66.30±5.17   74.54±6.63   60.26±7.69   56.55±5.28   62.67±2.50
Random Search     60.98±2.53  80.13±5.21   82.61±4.12   60.60±5.82   63.86±4.57   65.52±6.91
GraphNAS          62.41±3.35  81.89±5.32   83.47±3.98   61.49±5.34   63.51±4.89   67.10±7.59
MG-NAS            64.25±3.45  85.13±4.68   86.75±3.32   64.97±4.97   68.55±4.12   68.22±8.80
OMG-NAS (ours)    66.82±1.35  88.40±2.13   88.47±2.82   68.46±1.27   71.65±1.93   75.56±5.41

Table 1: Classification accuracy (%) on the Tencent dataset and the Amazon review dataset. In each column, the boldfaced score denotes the best result and the underlined score the second-best result. ± denotes standard deviation. For the Amazon review dataset, we select one domain as the target domain and the other three domains serve as source domains; the first letter denotes the target domain, where S stands for Shoes, B for Baby, D for Dress, and T for Trousers.
Methods                 Multi-OOD   Single-OOD
GCN / Edge              61.60±4.90  73.14±4.19
GAT / Mr                53.15±6.55  69.75±3.08
MGAT / Sage             66.80±1.90  75.00±5.56
Mixup                   69.54±2.21  75.39±1.56
DIR                     60.14±2.75  73.90±1.57
CIGA                    67.82±1.98  74.38±1.42
OOD-GNN + GCN / Edge    61.65±5.05  75.45±1.91
OOD-GNN + GAT / Mr      53.73±3.24  71.57±1.82
OOD-GNN + MGAT / Sage   65.30±1.15  76.02±2.49
Random Search           64.64±5.05  72.41±5.80
GraphNAS                64.91±5.92  72.07±5.43
MG-NAS                  68.84±4.52  75.82±4.12
OMG-NAS (ours)          75.70±2.48  76.53±2.73

Table 2: Classification accuracy (%) on the Recipe dataset.

4.3 Results Analysis and Comparison

Table 1 and Table 2 display a comparison of our proposed method OMG-NAS with the baseline methods on three real-world datasets. The results reveal that OMG-NAS achieves state-of-the-art performance in both Single-OOD and Multi-OOD settings. Firstly, due to their limited ability to learn domain-invariant features, fixed MGNN models demonstrate relatively poor performance and high instability, as evidenced by lower accuracy and higher standard deviations. Secondly, when utilizing distribution generalization methods like OOD-GNN, it is important to consider the inconsistency between the distribution shifts of different modalities. Failing to account for the mixed distribution shift of different modalities may result in worse outcomes, as the model becomes susceptible to variant features. Furthermore, we conduct a comparison between OMG-NAS and various NAS techniques, namely Random Search, GraphNAS and MG-NAS. While these methods are successful at selecting the optimal architecture on the validation dataset, they often underperform on the test dataset due to the presence of spurious features, which can lead to suboptimal performance of the chosen architecture. These results demonstrate that OMG-NAS enhances the OOD generalization ability of MGNN models through the automatic exploration of both architectures and multimodal weights.

We also conduct a comparison of OMG-NAS with baseline methods under unbalanced settings in Table 3. Three domains are used as source domains, while the remaining one is used as the target domain. We select Dress as the dominant source domain and adjust the ratio of data from the dominant domain and the other two domains, Trousers and Shoes. OMG-NAS consistently achieves the best performance under all ratios. These findings indicate that the statistical correlations between relevant and irrelevant features are strong enough to hinder generalization across domains when the sizes of the domains are unbalanced. However, OMG-NAS is able to learn the true connections between features and labels by eliminating these correlations.

Methods           MD2         MD4         MD6         MD8         MD10
GCN               71.91±5.21  68.48±7.14  65.44±5.10  64.32±4.51  67.54±17.86
GAT               69.84±6.46  53.91±6.41  56.48±4.91  51.59±8.13  52.62±0.93
MGAT              70.21±8.32  58.14±6.98  67.13±9.24  59.15±2.41  76.17±1.85
OOD-GNN + GCN     69.91±5.83  64.48±4.74  68.98±6.87  59.01±3.61  52.13±6.93
OOD-GNN + GAT     60.39±7.90  61.84±5.61  57.91±5.31  62.67±4.63  60.96±6.33
OOD-GNN + MGAT    67.17±5.19  79.93±4.92  68.03±5.92  71.43±5.72  66.50±9.41
Random Search     83.61±4.32  83.58±6.70  76.25±5.19  73.23±6.30  71.79±9.27
GraphNAS          84.91±3.95  84.66±7.71  79.82±5.39  72.64±6.21  77.16±10.87
MG-NAS            86.55±3.06  85.33±2.93  82.17±6.51  86.15±1.97  78.79±9.21
OMG-NAS (ours)    88.93±1.89  87.92±2.91  85.94±2.07  89.04±0.51  83.11±0.82

Table 3: Predictive performance on the Amazon-Review dataset (unbalanced). MD2 indicates that the ratio of Dress, Trousers, and Shoes is 2:1:1 in both the training and validation data; other notations with 'MD' are similar.
4.4 Ablation Study

We compare OMG-NAS against three variations in Table 4 to investigate the impact of the different components of OMG-NAS, as detailed below:
• MGNN + SFD: We adopt a fixed MGNN that learns a weight for each sample instead of learning weights for different modalities. Specifically, we concatenate the outputs of the multimodal GNN encoders and optimize the sample weights via Equations 3 to 6.
• MGNN + MGFD: We consider a fixed MGNN model with the MGFD method, learning intra-modal weights as described in Section 3.2.
• MGNAS+: Based on the OMG-NAS framework, MGNAS+ removes the transfer of sample weights across architectures described in Section 3.3.

Methods           Tencent     Amazon      Recipe
GCN               55.88±0.96  60.90±1.83  61.60±4.90
GCN + SFD         61.00±1.25  52.80±9.20  61.65±5.05
GCN + MGFD        55.26±2.24  61.85±3.51  70.63±2.10
GAT               55.89±5.50  56.16±2.08  53.15±6.55
GAT + SFD         56.49±3.26  50.19±7.61  53.73±3.24
GAT + MGFD        62.68±4.45  56.85±8.28  54.98±1.47
MGAT              59.83±4.60  53.60±7.01  66.80±1.90
MGAT + SFD        60.06±7.99  56.55±5.28  65.30±1.15
MGAT + MGFD       65.66±1.41  58.30±4.97  65.93±0.60
MGNAS+            62.54±3.11  63.28±6.53  72.85±3.54
OMG-NAS (ours)    66.82±1.35  71.65±1.93  75.70±2.48

Table 4: Ablation experiments on the Multi-OOD datasets.

Architectures     WT          w/o WT      Improvement
GCN → GCN         65.19±9.31  64.03±9.21  +1.8%
GCN → GAT         70.55±6.50  65.80±5.93  +7.22%
GCN → MGAT        74.15±3.90  73.82±2.50  +4.40%
GAT → GCN         54.68±8.13  58.27±4.10  -5.80%
GAT → GAT         69.25±4.42  58.85±6.51  +17.7%
GAT → MGAT        73.35±7.46  70.13±6.69  +4.60%
MGAT → GCN        74.68±0.47  72.85±1.25  +2.51%
MGAT → GAT        69.00±10.9  64.13±9.10  +7.60%
MGAT → MGAT       73.28±4.74  72.38±1.62  +1.25%

Table 5: Weight transfer experiments on the Recipe dataset. WT means the with-transfer setting and w/o WT the without-transfer setting.

Firstly, MGNN+MGFD outperforms MGNN+SFD across all datasets and MGNN architectures. This result highlights the effectiveness of the multimodal graph feature decorrelation component in OMG-NAS, which effectively distinguishes distribution shifts among different modalities and resolves the issue of overlapping spurious features between modalities. Secondly, OMG-NAS outperforms MGNAS+ (i.e., OMG-NAS without GMSWE) on all datasets, indicating the effectiveness of our proposed global sample weight estimator across architectures. We also demonstrate the effectiveness of weight transfer in Figure 3 and Table 5. In Figure 3, we compare the validation accuracy and test accuracy between the transfer and not-transfer settings on the Recipe dataset.
In the transfer setting, we use the optimal weights obtained from training GAT as the initial sample weights of MGAT. In the not-transfer setting, we use random initialization for the sample weights of MGAT. To further investigate the effectiveness of the multimodal graph reweighting, we present the distribution of learned weights in OMG-NAS on the Recipe dataset. Figure 4 demonstrates that OMG-NAS acquires meaningful weights, and the distributions of the weights clearly differ across modalities. In summary, OMG-NAS outperforms all the variations considered in this ablation study, demonstrating the importance of every component of our approach in achieving superior performance.

5 Conclusion

In this paper, we propose a novel OMG-NAS method to improve the OOD generalization ability of MGNAS. OMG-NAS disentangles multimodal features and reweights samples using random Fourier features. Additionally, it utilizes the diverse features of the searched models. All of these designs aim to eliminate spurious correlations and enable OOD generalization. Extensive experiments show the significant contribution of OMG-NAS in addressing the challenging generalization problem of MGNAS.

Acknowledgments

This work was supported by the National Key Research and Development Program of China No. 2020AAA0106300, National Natural Science Foundation of China (No. 62250008, 62222209, 62102222), Beijing National Research Center for Information Science and Technology under Grant No. BNR2023RC01003, BNR2023TD03006, and Beijing Key Lab of Networked Multimedia.

References

Abavisani, M.; Wu, L.; Hu, S.; Tetreault, J.; and Jaimes, A. 2020. Multimodal categorization of crisis events in social media. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Bai, H.; Zhou, F.; Hong, L.; Ye, N.; Chan, S.-H. G.; and Li, Z. 2021. NAS-OOD: Neural architecture search for out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Cai, J.; Wang, X.; Guan, C.; Tang, Y.; Xu, J.; Zhong, B.; and Zhu, W. 2022. Multimodal continual graph learning with neural architecture search. In Proceedings of the ACM Web Conference 2022.
Chen, Y.; Zhang, Y.; Bian, Y.; Yang, H.; Kaili, M.; Xie, B.; Liu, T.; Han, B.; and Cheng, J. 2022. Learning causally invariant representations for out-of-distribution generalization on graphs. Advances in Neural Information Processing Systems.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Ding, M.; Kong, K.; Chen, J.; Kirchenbauer, J.; Goldblum, M.; Wipf, D.; Huang, F.; and Goldstein, T. 2021. A closer look at distribution shifts and out-of-distribution generalization on graphs.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Fan, S.; Wang, X.; Mo, Y.; Shi, C.; and Tang, J. 2022. Debiasing graph neural networks via learning disentangled causal substructure. Advances in Neural Information Processing Systems.
Fang, T.; Lu, N.; Niu, G.; and Sugiyama, M. 2020. Rethinking importance weighting for deep learning under distribution shift. Advances in Neural Information Processing Systems.
Gao, D.; Li, K.; Wang, R.; Shan, S.; and Chen, X. 2020a. Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text. arXiv:2003.13962. Gao, J.; Lyu, T.; Xiong, F.; Wang, J.; Ke, W.; and Li, Z. 2021a. Predicting the survival of cancer patients with multimodal graph neural network. IEEE/ACM Transactions on Computational Biology and Bioinformatics. Gao, Y.; Yang, H.; Zhang, P.; Zhou, C.; and Hu, Y. 2019. Graphnas: Graph neural architecture search with reinforcement learning. arXiv preprint arXiv:1904.09981. Gao, Y.; Yang, H.; Zhang, P.; Zhou, C.; and Hu, Y. 2020b. Graph Neural Architecture Search. In IJCAI. Gao, Y.; Yang, H.; Zhang, P.; Zhou, C.; and Hu, Y. 2021b. Graph neural architecture search. In International joint conference on artificial intelligence. International Joint Conference on Artificial Intelligence. Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. Advances in neural information processing systems. Han, K.; Wang, Y.; Guo, J.; Tang, Y.; and Wu, E. 2022. Vision gnn: An image is worth graph of nodes. arXiv preprint arXiv:2206.00272. Huang, L.; Ma, D.; Li, S.; Zhang, X.; and Wang, H. 2019. Text level graph neural network for text classification. arXiv preprint arXiv:1910.02356. Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Li, G.; Muller, M.; Thabet, A.; and Ghanem, B. 2019a. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF international conference on computer vision. Li, H.; Cui, P.; Zang, C.; Zhang, T.; Zhu, W.; and Lin, Y. 2019b. Fates of microscopic social ecosystems: Keep alive or dead? In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Li, H.; Wang, X.; Zhang, Z.; Ma, J.; Cui, P.; and Zhu, W. 2021a. Intention-aware sequential recommendation with structured intent transition. IEEE Transactions on Knowledge and Data Engineering. Li, H.; Wang, X.; Zhang, Z.; Yuan, Z.; Li, H.; and Zhu, W. 2021b. Disentangled contrastive learning on graphs. Advances in Neural Information Processing Systems. Li, H.; Wang, X.; Zhang, Z.; and Zhu, W. 2022a. Ood-gnn: Out-of-distribution generalized graph neural network. IEEE Transactions on Knowledge and Data Engineering. Li, H.; Wang, X.; Zhang, Z.; and Zhu, W. 2022b. Outof-distribution generalization on graphs: A survey. arXiv preprint arXiv:2202.07987. Li, H.; Zhang, Z.; Wang, X.; and Zhu, W. 2022c. Disentangled graph contrastive learning with independence promotion. IEEE Transactions on Knowledge and Data Engineering. Li, H.; Zhang, Z.; Wang, X.; and Zhu, W. 2022d. Learning invariant graph representations for out-of-distribution generalization. In Advances in Neural Information Processing Systems. Li, H.; Zhang, Z.; Wang, X.; and Zhu, W. 2023. Invariant Node Representation Learning under Distribution Shifts with Multiple Latent Environments. ACM Transactions on Information Systems. Nunes, M.; and Pappa, G. L. 2020. Neural architecture search in graph neural networks. In Intelligent Systems: 9th Brazilian Conference, BRACIS 2020, Rio Grande, Brazil, October 20–23, 2020, Proceedings, Part I 9. Springer. Pagliardini, M.; Jaggi, M.; Fleuret, F.; and Karimireddy, S. P. 2022. Agree to disagree: Diversity through disagreement for better transferability. arXiv preprint arXiv:2202.04414. Peng, N.; Poon, H.; Quirk, C.; Toutanova, K.; and Yih, W.-t. 2017. Cross-sentence n-ary relation extraction with graph lstms. 
Transactions of the Association for Computational Linguistics. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8234 Peng, X.; Wei, Y.; Deng, A.; Wang, D.; and Hu, D. 2022. Balanced multimodal learning via on-the-fly gradient modulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Qin, Y.; Wang, X.; Zhang, Z.; Xie, P.; and Zhu, W. 2022a. Graph neural architecture search under distribution shifts. In International Conference on Machine Learning. PMLR. Qin, Y.; Zhang, Z.; Wang, X.; Zhang, Z.; and Zhu, W. 2022b. NAS-Bench-Graph: Benchmarking graph neural architecture search. Advances in Neural Information Processing Systems. Rame, A.; Kirchmeyer, M.; Rahier, T.; Rakotomamonjy, A.; Gallinari, P.; and Cord, M. 2022. Diverse weight averaging for out-of-distribution generalization. arXiv preprint arXiv:2205.09739. Shen, Z.; Cui, P.; Zhang, T.; and Kunag, K. 2020. Stable learning via sample reweighting. In Proceedings of the AAAI Conference on Artificial Intelligence. Shi, M.; Tang, Y.; Zhu, X.; Huang, Y.; Wilson, D.; Zhuang, Y.; and Liu, J. 2022. Genetic-GNN: Evolutionary architecture search for graph neural networks. Knowledge-Based Systems. Sun, T.; Wang, W.; Jing, L.; Cui, Y.; Song, X.; and Nie, L. 2022. Counterfactual reasoning for out-of-distribution multimodal sentiment analysis. In Proceedings of the 30th ACM International Conference on Multimedia. Tao, Z.; Wei, Y.; Wang, X.; He, X.; Huang, X.; and Chua, T.-S. 2020. Mgat: Multimodal graph attention network for recommendation. Information Processing & Management. Teney, D.; Abbasnejad, E.; Lucey, S.; and Van den Hengel, A. 2022. Evading the simplicity bias: Training a diverse set of models discovers solutions with superior ood generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Veliˇckovi´c, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Wang, T.; Sridhar, R.; Yang, D.; and Wang, X. 2021a. Identifying and mitigating spurious correlations for improving robustness in nlp models. arXiv preprint arXiv:2110.07736. Wang, T.; Zhou, C.; Sun, Q.; and Zhang, H. 2021b. Causal Attention for Unbiased Visual Recognition. CoRR, abs/2108.08782. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog). Wang, Y.; Wang, W.; Liang, Y.; Cai, Y.; and Hooi, B. 2021c. Mixup for node and graph classification. In Proceedings of the Web Conference 2021. Wen, H.; Ding, J.; Jin, W.; Wang, Y.; Xie, Y.; and Tang, J. 2022. Graph neural networks for multimodal single-cell data integration. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, 38–45. Wu, Q.; Zhang, H.; Yan, J.; and Wipf, D. 2022a. Handling distribution shifts on graphs: An invariance perspective. arXiv preprint arXiv:2202.02466. Wu, Y.-X.; Wang, X.; Zhang, A.; He, X.; and Chua, T.-S. 2022b. Discovering invariant rationales for graph neural networks. arXiv preprint arXiv:2201.12872. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020. 
A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems. Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826. Yang, L.; Zhang, S.; Qin, L.; Li, Y.; Wang, Y.; Liu, H.; Wang, J.; Xie, X.; and Zhang, Y. 2022. GLUE-X: Evaluating Natural Language Understanding Models from an Outof-distribution Generalization Perspective. arXiv preprint arXiv:2211.08073. Zhang, C.; Zhang, M.; Zhang, S.; Jin, D.; Zhou, Q.; Cai, Z.; Zhao, H.; Liu, X.; and Liu, Z. 2022a. Delving deep into the generalization of vision transformers under distribution shifts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhang, X.; Cui, P.; Xu, R.; Zhou, L.; He, Y.; and Shen, Z. 2021. Deep stable learning for out-of-distribution generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhang, Z.; Wang, X.; Guan, C.; Zhang, Z.; Li, H.; and Zhu, W. 2022b. Autogt: Automated graph transformer architecture search. In The Eleventh International Conference on Learning Representations. Zhang, Z.; Wang, X.; Zhang, Z.; Shen, G.; Shen, S.; and Zhu, W. 2023. Unsupervised graph neural architecture search with disentangled self-supervision. In Thirty-seventh Conference on Neural Information Processing Systems. Zhao, Y.; Wang, D.; Bates, D.; Mullins, R.; Jamnik, M.; and Lio, P. 2020a. Learned low precision graph neural networks. arXiv preprint arXiv:2009.09232. Zhao, Y.; Wang, D.; Gao, X.; Mullins, R.; Lio, P.; and Jamnik, M. 2020b. Probabilistic dual network architecture search on graphs. arXiv preprint arXiv:2003.09676. Zhou, K.; Huang, X.; Song, Q.; Chen, R.; and Hu, X. 2022. Auto-gnn: Neural architecture search of graph neural networks. Frontiers in big Data. Zhu, Q.; Ponomareva, N.; Han, J.; and Perozzi, B. 2021. Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data. arXiv:2108.01099. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8235
Make Lossy Compression Meaningful for Low-Light Images Shilv Cai1,2,*, Liqun Chen1,2,*, Sheng Zhong1,2, Luxin Yan1,2, Jiahuan Zhou3, Xu Zou1,2,† 1Huazhong University of Science and Technology, Wuhan, Hubei 430074, China 2National Key Laboratory of Multispectral Information Intelligent Processing Technology, Wuhan, Hubei 430074, China 3Wangxuan Institute of Computer Technology, Peking University, Beijing 100871, China {caishilv, chenliqun, zhongsheng, yanluxin, zoux}@hust.edu.cn, [email protected] Abstract Low-light images frequently occur due to unavoidable environmental influences or technical limitations, such as insufficient lighting or limited exposure time. To achieve better visibility for visual perception, low-light image enhancement is usually adopted. Besides, lossy image compression is vital for meeting the requirements of storage and transmission in computer vision applications. To touch the above two practical demands, current solutions can be categorized into two sequential manners: “Compress before Enhance (CbE)” or “Enhance before Compress (EbC)”. However, both of them are not suitable since: (1) Error accumulation in the individual models plagues sequential solutions. Especially, once low-light images are compressed by existing general lossy image compression approaches, useful information (e.g., texture details) would be lost resulting in a dramatic performance decrease in low-light image enhancement. (2) Due to the intermediate process, the sequential solution introduces an additional burden resulting in low efficiency. We propose a novel joint solution to simultaneously achieve a high compression rate and good enhancement performance for lowlight images with much lower computational cost and fewer model parameters. We design an end-to-end trainable architecture, which includes the main enhancement branch and the signal-to-noise ratio (SNR) aware branch. Experimental results show that our proposed joint solution achieves a significant improvement over different combinations of existing state-of-the-art sequential “Compress before Enhance” or “Enhance before Compress” solutions for low-light images, which would make lossy low-light image compression more meaningful. The project is publicly available at: https://github.com/CaiShilv/Joint-IC-LL. Introduction Low-light images are prevalent in the real world since they are inevitably captured under sub-optimal conditions (e.g., back, uneven, or dim lighting) or technical limitations (e.g., limited exposure time). Low-light images present challenges for human perception and subsequent downstream vision tasks due to unsatisfied visibility. Therefore, low-light image *These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. enhancement is usually employed. In recent years, the success of learning-based low-light image enhancement (Lore, Akintayo, and Sarkar 2017; Xu et al. 2022; Ma et al. 2022b) has been compelling thus attracting growing attention. In practical applications, lossy image compression is also crucial for media storage and transmission. Many traditional standards (e.g., JPEG (Wallace 1992), JPEG2000 (Rabbani 2002), BPG (Bellard 2015), and Versatile Video Coding (VVC) (Joint Video Experts Team 2021)) have been proposed and widely used. In recent years, learning-based lossy image compression methods (Cheng et al. 2020; He et al. 2022; Xie, Cheng, and Chen 2021; Wang et al. 
2022a; Liu, Sun, and Katto 2023) have developed rapidly and outperformed traditional standards in terms of performance metrics such as the peak signal-to-noise ratio (PSNR) and the multi-scale structural similarity index (MS-SSIM). However, lossy low-light image compression is also required in many real systems (e.g., nighttime autonomous driving and visual surveillance), yet little research has been conducted on this practical topic in the academic community. Current engineering solutions can be categorized into two manners: "Compress before Enhance (CbE)" and "Enhance before Compress (EbC)". However, existing sequential solutions have at least two major drawbacks: (1) Error accumulation and loss of information in the individual models plague sequential solutions (see Figure 1). In particular, the loss of useful detail information in low-light images after compression severely degrades enhancement performance, and off-the-shelf lossy image compression methods often lack adaptability to low-light images. (2) Sequential solutions introduce additional computational costs due to intermediate results, resulting in low efficiency.

Therefore, in this work, we try to answer an important question: can we construct a joint solution for low-light image compression and enhancement that achieves high visual quality of the reconstructed image under both low computational cost and low bits per pixel (BPP)? Or, simply put, can we make lossy low-light image compression more meaningful?

[Figure 1: six image panels (Low-light Image, Ground Truth, Ours, EbC, CbE) annotated with per-method BPP/PSNR/MS-SSIM; FLOPs(G): 494.56 for Ours, 334.69+847.05 for EbC, and 847.05+334.69 for CbE.] Figure 1: Compared with sequential solutions ("Compress before Enhance (CbE)" and "Enhance before Compress (EbC)"), our proposed joint solution has significantly greater advantages in terms of PSNR, MS-SSIM, and computational cost with even lower bits per pixel (BPP). As shown, our joint solution makes lossy low-light image compression meaningful with much better visibility for visual perception. In this teaser figure, the compression and low-light enhancement methods of the sequential solutions are Cheng (Cheng et al. 2020) and Xu2022 (Xu et al. 2022), respectively. The example images are from the SID dataset (Chen et al. 2018). For more qualitative comparison results, please refer to the supplementary material.

Based on these considerations, we propose a novel joint solution for low-light image compression and enhancement. We design an end-to-end trainable two-branch architecture with a main enhancement branch for obtaining compressed-domain features and a signal-to-noise ratio (SNR) aware branch for obtaining local/non-local features. The local/non-local features are then fused with the compressed-domain features to generate the enhanced features, jointly compressing and enhancing low-light images simultaneously. Finally, the enhanced image is reconstructed by the main decoder. Our proposed joint solution achieves significant advantages compared to sequential ones; see Figure 1 for visualization. More comparison results are included in the supplementary material.
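To make the two-branch design concrete before the formal description, here is a minimal structural sketch. The module names (snr_branch, encoder stages, adapters) and their composition are our reading of the architecture, not the authors' released code, and quantization/entropy coding is reduced to a rounding stand-in:

```python
import torch
import torch.nn as nn

class JointCompressEnhance(nn.Module):
    """Schematic two-branch model: the SNR-aware branch conditions the
    compressed-domain features produced by the main enhancement branch."""
    def __init__(self, snr_branch, encoder_stages, feature_adapters, decoder):
        super().__init__()
        self.snr_branch = snr_branch                      # yields fused s0, s1
        self.encoders = nn.ModuleList(encoder_stages)     # two encoder stages
        self.adapters = nn.ModuleList(feature_adapters)   # feature-adaptive modules
        self.decoder = decoder                            # main decoder

    def forward(self, x, snr_map):
        s_feats = self.snr_branch(x, snr_map)             # [s0, s1]
        y = x
        for enc, ada, s in zip(self.encoders, self.adapters, s_feats):
            y = ada(enc(y), s)      # enhance in the compressed domain
        y_hat = torch.round(y)      # stand-in for quantization + entropy coding
        return self.decoder(y_hat)  # decodes the *enhanced* image directly
```

The key design point this sketch captures is that enhancement happens on the compressed-domain features, so the decoder emits an already-enhanced image with no intermediate low-light reconstruction.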
In summary, the contributions of this work are as follows: • A joint solution of low-light image compression and enhancement is proposed with much lower computational cost compared to sequential ones. • Thanks to the end-to-end trainable two-branch architecture, the joint solution has the ability to achieve high visual quality of reconstructed images with low BPP. • Since there is no off-the-shelf joint solution, we compare our model with sequential CbE and EbC solutions (different combinations and orders of three compression and two enhancement methods respectively) on four datasets to verify the superiority of our joint solution. Related Works Learning-based lossy image compression. Learningbased image compression methods have shown great potential, which has led to a growing interest among researchers in this field. Lossy image compression usually contains transform, quantization, and entropy coding. These three components have been studied by many researchers. There are some works that focus on quantization. Works (Ball´e, Laparra, and Simoncelli 2017; Ball´e et al. 2018) used the additive uniform noise U(−0.5, 0.5) instead of the actual quantization during the training. Agustsson et al. (Agustsson et al. 2017) proposed soft-to-hard vector quantization to replace scalar quantization. Dumas et al. (Dumas, Roumy, and Guillemot 2018) aimed to learn the quantization step size for each latent feature map. Zhang and Wu (Zhang and Wu 2023) proposed a Lattice Vector Quantization scheme coupled with a spatially Adaptive Companding (LVQAC) mapping. Some works focus on the transform, e.g., generalized divisive normalization (GDN) (Ball´e, Laparra, and Simoncelli 2016a,b, 2017), residual block (Theis et al. 2017), attention module (Cheng et al. 2020; Zhou et al. 2019), non-local attention module (Chen et al. 2021), attentional multi-scale back projection (Gao et al. 2021), window attention module (Zou, Song, and Zhang 2022), stereo attention module (W¨odlinger et al. 2022), and expanded adaptive scaling normalization (EASN) (Shin et al. 2022) have been used to improve the nonlinear transform. Invertible neural networkbased architecture (Cai et al. 2022; Helminger et al. 2021; Ho et al. 2021; Ma et al. 2019, 2022a; Xie, Cheng, and Chen 2021) and transformer-based architecture (Qian et al. 2022; Zhu, Yang, and Cohen 2022; Zou, Song, and Zhang 2022; Liu, Sun, and Katto 2023) also have been utilized to enhance the modeling capacity of the transforms. Some other works aim to improve the efficiency of entropy coding, e.g., scale hyperprior entropy model (Ball´e et al. 2018), channel-wise entropy model (Minnen and Singh 2020), context model (Lee, Cho, and Beack 2019; Mentzer et al. 2018; Minnen, Ball´e, and Toderici 2018), 3D-context model (Guo et al. 2020b), multi-scale hyperprior entropy model (Hu et al. 2022), discretized Gaussian mixture model (Cheng et al. 2020), checkerboard context model (He et al. 2021), split hierarchical variational compression (SHVC) (Ryder et al. 2022), information transformer (Informer) entropy model (Kim, Heo, and Lee 2022), bi-directional conditional entropy model (Lei et al. 2022), unevenly grouped space-channel context model (ELIC) (He et al. 2022), neural data-dependent transform (Wang et al. 2022a), multi-level cross-channel entropy model (Guo et al. 2022), and multivariate Gaussian mixture model (Zhu et al. 2022). By constructing more accurate entropy models, these methods have achieved greater compression efficiency. 
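As a side note, the additive-noise quantization relaxation mentioned above (Ballé, Laparra, and Simoncelli 2017; Ballé et al. 2018) is compact enough to show directly. This is a generic sketch of the common trick, not code from any specific method:

```python
import torch

def quantize(y, training):
    # Training: additive uniform noise U(-0.5, 0.5) as a differentiable
    # surrogate for rounding. Testing: hard rounding for entropy coding.
    if training:
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)
    return torch.round(y)
```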
However, existing learning-based compression methods The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8237 typically do not consider the impact on images of low-light conditions in their design. They may cause unsatisfied image quality and subsequent visual perception problems after decompression due to the loss of detailed information. Learning-based low-light image enhancement. Many learning-based low-light image enhancement methods (Cai, Gu, and Zhang 2018; Guo et al. 2020a; Jiang et al. 2021; Jin, Yang, and Tan 2022; Kim et al. 2021; Liu et al. 2021; Lore, Akintayo, and Sarkar 2017; Ma et al. 2022b; Ren et al. 2019; Wang et al. 2021b, 2022b; Wu et al. 2022; Xu et al. 2022, 2020; Yan et al. 2014, 2016; Yang et al. 2021a,b; Zamir et al. 2020; Zeng et al. 2020; Zhang et al. 2021, 2022; Zhao et al. 2021; Zheng, Shi, and Shi 2021) have been proposed with compelling success in recent years. For supervised methods, Zhu et al. (Zhu et al. 2020) proposed a two-stage method called EEMEFN, which comprised muti-exposure fusion and edge enhancement. Xu et al. (Xu et al. 2020) proposed a frequency-based decomposition-and-enhancement model network. It first learned to recover image contents in a low-frequency layer and then enhanced high-frequency details according to recovered contents. Sean et al. (Moran et al. 2020) introduced three different types of deep local parametric filters to enhance low-light images. For semi-supervised methods, Yang et al. (Yang et al. 2020) proposed the semi-supervised deep recursive band network (DRBN) to extract a series of coarse-to-fine band representations of low-light images. The DRBN was extended by using Long Short Term Memory (LSTM) networks and obtaining better performance (Yang et al. 2021a). For unsupervised methods, Jiang et al. (Jiang et al. 2021) proposed an unsupervised generative adversarial network which was the first work that successfully attempted to introduce unpaired training for low-light image enhancement. Ma et al. (Ma et al. 2022b) developed a self-calibrated illumination learning method and defined the unsupervised training loss to improve the generalization ability of the model. Fu et al. (Fu et al. 2023) proposed PairLIE which learned adaptive priors from low-light image pairs. However, these low-light image enhancement methods currently overlook the mutual influence with image compression, resulting in significant performance degradation once CbE or EbC is conducted (see Figure 1). In addition, most low-light image enhancement networks have complex architecture designs, and their architectures are not suited to combine with image compression directly in a joint manner. Joint solutions. It is worth noting that, in some other image processing tasks, joint solutions have been verified as an effective alternative to sequential ones with promising results. These joint solutions alleviate the error accumulation effect in the pipeline process. The success of the joint solution of multiple tasks using a single network architecture has attracted the attention of researchers in the development of deep learning. There are some works studied for joint solutions have made progress including joint denoising and demosaicing (Ehret et al. 2019; Gharbi et al. 2016), joint image demosaicing, denoising and super-resolution (Xing and Egiazarian 2021), joint low-light enhancement and denoising (Lu and Jung 2022), and joint low-light enhancement and deblurring (Zhou, Li, and Loy 2022). 
Recently, some works (Cheng, Xie, and Chen 2022; Alves de Oliveira et al. 2022; Ranjbar Alvar et al. 2022) optimize image processing and image compression jointly. Cheng et al. (Cheng, Xie, and Chen 2022) jointly learned image compression and denoising to resolve the bits-misallocation problem. Jeong et al. (Jeong and Jung 2022) proposed the RAWtoBit network (RBN), which jointly optimizes camera image signal processing and image compression. Qi et al. (Qi et al. 2023) proposed a framework for real-time 6K rate-distortion-aware image rescaling that can reconstruct a high-fidelity HR image from a JPEG thumbnail. Nevertheless, the aforementioned methods are ill-suited for low-light image compression and enhancement; this interesting issue has so far received limited research attention in the academic community.

Methodology

Problem Formulation

Lossy image compression. We first briefly introduce the formulation of learning-based lossy image compression. In the widely used variational auto-encoder based framework (Ballé et al. 2018), the source image x is transformed to the latent representation y by the parametric encoder ga(x; ϕa). The latent representation y is quantized to the discrete value ŷ, which is losslessly encoded into a bitstream using entropy coders (Duda 2013; Witten, Neal, and Cleary 1987). During decoding, ŷ is recovered by entropy-decoding the bitstream. Finally, ŷ is inversely transformed to the reconstructed image x̂ through the parametric decoder gs(ŷ; ϕs).

In fact, optimizing the image compression model for rate-distortion performance can be realized by minimizing the expected Kullback-Leibler (KL) divergence between the intractable true posterior p_{ŷ|x}(ŷ|x) and the parametric variational density q(ŷ|x) over the data distribution p_x (Ballé et al. 2018):

\mathbb{E}_{x \sim p_x} D_{\mathrm{KL}}\big[q(\hat{y}|x) \,\|\, p(\hat{y}|x)\big] = \mathbb{E}_{x \sim p_x} \mathbb{E}_{\hat{y} \sim q}\Big[\log q(\hat{y}|x) \underbrace{-\log p_{x|\hat{y}}(x|\hat{y})}_{\text{weighted distortion}} \underbrace{-\log p_{\hat{y}}(\hat{y})}_{\text{rate}}\Big] + \mathrm{const}, \quad (1)

where D_{\mathrm{KL}}[\cdot\|\cdot] is the KL divergence. Given the transform parameter ϕa, the transform y = ga(x; ϕa) (from x to y) is determined, and quantizing y is equivalent to adding uniform noise U(−1/2, 1/2) for relaxation. Therefore, q(\hat{y}|x) = \prod_i \mathcal{U}(y_i - 1/2,\, y_i + 1/2) and the first term \log q(\hat{y}|x) = 0. The second term \log p_{x|\hat{y}}(x|\hat{y}) is the expected distortion between the source image x and the reconstructed image x°. The third term reflects the cost of entropy-encoding the discrete value ŷ. To make the second term of Eq. 1 easier to calculate, suppose the likelihood is given by p(x|\hat{y}) = \mathcal{N}(x \,|\, x^\circ, (p \cdot \lambda)^{-1}\mathbf{1}), and consider in addition the introduction of a scale hyperprior. Similar to previous works (Ballé et al. 2018; Cheng et al. 2020), the rate-distortion objective function can then be written as:

\mathbb{E}_{x \sim p_x}\mathbb{E}_{\hat{y},\hat{z} \sim q}\Big[\lambda \cdot \|x - x^\circ\|_p^p - \log p_{\hat{y}|\hat{z}}(\hat{y}|\hat{z}) - \log p_{\hat{z}}(\hat{z})\Big], \quad (2)

where the parameter λ is the trade-off between distortion and compression levels. If p = 2, the first term is the mean square error (MSE) distortion. The additional side information ẑ is used to capture spatial dependencies.

[Figure 2: architecture diagram showing the Main Enhancement Branch (main encoders with Feature Adaptive modules fa0/fa1), the SNR Aware Branch (residual blocks, SNR-guided attention and fusion), the main decoder gs, hyper encoder/decoder, quantization Q, entropy coding/decoding (EC/ED), and the context model estimating N(µ, σ²).] Figure 2: The network architecture of our joint solution of low-light image compression and enhancement.
The left half of the figure contains two branches, the "Main Enhancement Branch" and the "SNR Aware Branch". The low-light image is fed into the "Main Enhancement Branch" to obtain the two-level enhanced compressed-domain features (y0/y) via "Feature Adaptive" modules (fa0/fa1). The "SNR Aware Branch" obtains local/non-local information from the SNR map s and the compressed-domain features (y′0/y′1). The right half of the figure contains the main decoder, entropy models, context model, and hyper encoder/decoder commonly used in recent learning-based compression methods (Minnen, Ballé, and Toderici 2018; Cheng et al. 2020). "/" means "or" in this paper.

Supervised learning-based low-light image enhancement. The low-light image is referred to as x ∈ R^{3×h×w}, where h and w denote the height and width of the low-light image, respectively. The low-light enhancement process can be expressed as:

\bar{x} = G(x; \theta), \quad (3)

where \bar{x} denotes the reconstructed low-light-enhanced image and θ represents the learnable parameters of the neural network G. The learning-based low-light image enhancement model is optimized by minimizing a loss to learn the optimal network parameters \hat{\theta}:

\hat{\theta} = \arg\min_{\theta} L_e(G(x; \theta), x_{gt}) = \arg\min_{\theta} L_e(\bar{x}, x_{gt}). \quad (4)

The loss function L_e(\cdot, \cdot) is usually the L1, L2, or Charbonnier (Lai et al. 2018) loss, etc. The network parameters θ can be optimized by minimizing the error between the reconstructed image \bar{x} and the ground-truth image x_{gt}.

Joint formulation. Based on Eq. 2 and Eq. 4, we further develop the joint formulation of image compression and low-light image enhancement by simultaneously optimizing the rate distortion and the similarity between enhanced and ground-truth images as follows:

L = \lambda_d \cdot D(x_{gt}, \hat{x}) + R(\hat{y}) + R(\hat{z}) = \lambda_d \cdot \mathbb{E}_{x \sim p_x}\big[\|x_{gt} - \hat{x}\|_p^p\big] - \mathbb{E}_{\hat{y} \sim q_{\hat{y}}}\big[\log p_{\hat{y}|\hat{z}}(\hat{y}|\hat{z})\big] - \mathbb{E}_{\hat{z} \sim q_{\hat{z}}}\big[\log p_{\hat{z}}(\hat{z})\big]. \quad (5)

The first term D(x_{gt}, \hat{x}) measures the distortion between the ground-truth image x_{gt} and the enhanced image \hat{x}. The second term R(\hat{y}) and third term R(\hat{z}) denote the compression levels. \lambda_d denotes the weighting coefficient, which trades off compression levels against distortion. If p = 2, the first term is the mean square error (MSE) distortion.
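The joint objective of Eq. 5 translates almost line for line into code. Below is a minimal sketch in the style of CompressAI-based training loops; the function name and the bits-per-pixel formulation of the rate terms are our choices, and the model is assumed to expose the usual hyperprior outputs (the enhanced image plus the y and z likelihoods):

```python
import torch
import torch.nn.functional as F

def joint_rd_loss(x_gt, x_hat, y_likelihoods, z_likelihoods, lambda_d):
    """Eq. 5 as a training loss: distortion is measured against the
    normal-light ground truth x_gt (not the low-light input), plus two
    rate terms estimated from the entropy-model likelihoods."""
    n, _, h, w = x_gt.shape
    num_pixels = n * h * w
    bpp_y = -torch.log2(y_likelihoods).sum() / num_pixels  # R(y_hat)
    bpp_z = -torch.log2(z_likelihoods).sum() / num_pixels  # R(z_hat)
    distortion = F.l1_loss(x_hat, x_gt)  # p = 1 in the fine-tuning stage
    return lambda_d * distortion + bpp_y + bpp_z
```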
Framework

Overall workflow. Figure 2 shows an overview of the network architecture of our proposed joint solution of low-light image compression and enhancement. The low-light image x is transformed to the enhanced compressed-domain features y by the main encoders ga0 and ga1 with SNR-guided feature-adaptive operations. Then y is quantized to the discrete enhanced compressed-domain features ŷ by the quantizer Q. Uniform noise U(−1/2, 1/2) is added to the enhanced compressed-domain features y in place of the non-differentiable quantization operation during training, while y is rounded during testing (Ballé et al. 2018). We use the hyper-prior scale module (Ballé et al. 2018; Minnen, Ballé, and Toderici 2018) to effectively estimate the distribution p_{ŷ|ẑ} ∼ N(µ, σ²) of the discrete enhanced compressed-domain features ŷ by generating the parameters (µ and σ) of the Gaussian entropy model to support entropy coding/decoding (EC/ED). The latent representation z is quantized to ẑ by the same quantization strategy as the enhanced features y. The distribution of the discrete latent representation ẑ is estimated by the factorized entropy model (Ballé, Laparra, and Simoncelli 2017). The range asymmetric numeral system (Duda 2013) is used to losslessly compress the discrete enhanced features ŷ and latent representation ẑ into bitstreams. The decoded enhanced features ŷ obtained by entropy decoding are fed into the main decoder gs to reconstruct the enhanced image x̂. It is worth noting that the proposed joint solution integrates compression and low-light enhancement into a single process that performs both tasks simultaneously, achieving excellent performance while significantly reducing the computational cost.

Two-branch architecture. Our proposed joint solution includes two branches. The first branch is the signal-to-noise ratio (SNR) aware branch. The SNR map s is obtained by a learning-free denoising operation (see Eq. 6) that is simple yet effective. Local/non-local information about the low-light image is obtained through the SNR-aware branch. The second branch is the main enhancement branch, in which the compressed-domain features (y′0/y′1) are combined with the local/non-local information (s0/s1) generated by the SNR-aware branch to obtain the enhanced compressed-domain features (y0/y).

[Figure 3: diagram of the "Feature Adaptive" module, built from convolution and activation layers with a Hadamard product and an element-wise addition.] Figure 3: Architecture details of the "Feature Adaptive" module. SNR-aware fusion features (s0/s1) act as a condition on the compressed-domain features (y′0/y′1) to generate enhanced features (y0/y). ⊙ denotes the Hadamard product and ⊕ denotes element-wise addition.

Enhanced Compressed Domain Features

As Figure 2 shows, the SNR map s ∈ R^{h×w} is estimated from the low-light image x ∈ R^{3×h×w}. The calculation starts by converting the low-light image x into a grayscale image \dot{x} ∈ R^{h×w} and then proceeds as follows:

\ddot{x} = \mathrm{kernel}(\dot{x}), \quad n = \mathrm{abs}(\dot{x} - \ddot{x}), \quad s = \ddot{x} / n, \quad (6)

where kernel(·) denotes an averaging operation over local pixel groups and abs(·) denotes the absolute-value function. The SNR map s is processed by a residual-block module ("Residual Block" in Figure 2) and a transformer-based module ("SNR-guided Attention" in Figure 2), generating the local features (f_{s0}/f_{s1}) and the non-local features (f_{l0}/f_{l1}), inspired by the work (Xu et al. 2022). Local and non-local features are then fused, as illustrated in "SNR-guided Fusion" of Figure 2 and calculated as follows:

s_0 = f_{s0} \times s'_0 + f_{l0} \times (1 - s'_0), \quad s_1 = f_{s1} \times s'_1 + f_{l1} \times (1 - s'_1), \quad (7)

where s′0 and s′1 are resized from the SNR map s according to the shapes of the corresponding features (f_{s0}/f_{s1}/f_{l0}/f_{l1}), and s0 and s1 are the SNR-aware fusion features. Since the SNR map s is unavailable in the decoding process, we enhance the features y0 and y in the compressed domain rather than in the decoded domain as in (Xu et al. 2022). Thus, the enhanced image x̂ can be obtained by decoding the enhanced features ŷ directly. The compressed-domain features (y′0/y′1) are enhanced by the "Feature Adaptive" modules (referred to as fa0/fa1) shown in Figure 2; their details are shown in Figure 3.
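Eq. 6 and Eq. 7 are simple enough to state directly in code. The sketch below is our reading of them: the box-filter size, the grayscale conversion by channel averaging, and the assumption that the resized SNR map is normalized to [0, 1] before fusion are ours, not specified in the text above.

```python
import torch
import torch.nn.functional as F

def snr_map(x, kernel_size=5):
    """Eq. 6 for an RGB image x in [0, 1] with shape (N, 3, H, W).
    The blur kernel size and grayscale conversion are assumptions."""
    gray = x.mean(dim=1, keepdim=True)                    # x_dot: grayscale
    denoised = F.avg_pool2d(gray, kernel_size, stride=1,
                            padding=kernel_size // 2)     # x_ddot: local average
    noise = (gray - denoised).abs()                       # n = |x_dot - x_ddot|
    return denoised / (noise + 1e-6)                      # s = x_ddot / n

def snr_guided_fusion(f_s, f_l, s_resized):
    """Eq. 7: blend local features f_s and non-local features f_l,
    weighted by the resized SNR map (assumed normalized to [0, 1])."""
    return f_s * s_resized + f_l * (1.0 - s_resized)
```

Intuitively, high-SNR regions keep mostly local (convolutional) features, while low-SNR regions lean on the non-local (attention) features, which matches the role of the SNR-guided fusion described above.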
Training Strategy

In our experiments, we observe that training both the image compression and low-light image enhancement tasks jointly from the beginning results in convergence problems. Thus, we adopt two-stage training.

Pre-train without SNR-aware branch. We pre-train the model without the signal-to-noise ratio (SNR) aware branch. In this case, the network architecture is similar to the Cheng2020-anchor (Cheng et al. 2020) implementation of the CompressAI library (Bégaint et al. 2020). The rate-distortion loss is:

L = \lambda_d \cdot D(x, \hat{x}) + R(\hat{y}) + R(\hat{z}) = \lambda_d \cdot \mathbb{E}_{x \sim p_x}\big[\|x - \hat{x}\|_p^p\big] - \mathbb{E}_{\hat{y} \sim q_{\hat{y}}}\big[\log p_{\hat{y}|\hat{z}}(\hat{y}|\hat{z})\big] - \mathbb{E}_{\hat{z} \sim q_{\hat{z}}}\big[\log p_{\hat{z}}(\hat{z})\big], \quad (8)

where x and x̂ denote the original image and the decoded image, respectively. We set λd = 0.0016. Note that the parameter p of the first term is equal to 2 in this stage; that is, the distortion loss D(x, x̂) is the MSE loss.

Train the entire network. We then train the entire network by loading the pre-trained model parameters. The joint loss function is Eq. 5, with the parameter p of the first term equal to 1. That is, we employ L1 as the distortion loss D(x_{gt}, x̂) instead of the MSE loss to ensure stable training, mitigating the risk of the episodic non-convergence problem.

Experiments

Datasets and Implementation Details

Datasets. The Flicker 2W dataset (Liu et al. 2020) is used in the pre-training and fine-tuning stages for all learning-based methods involved in the comparison. The low-light datasets that we use include SID (Chen et al. 2018), SDSD (Wang et al. 2021a), and SMID (Chen et al. 2019). SID and SMID contain pairs of short- and long-exposure images with a resolution of 960×512; both exhibit heavy noise because they are captured in extreme darkness. The SDSD (static version) dataset contains an indoor subset and an outdoor subset with low-light and normal-light pairs. We set up the training/testing splits following previous work (Xu et al. 2022). All low-light data are converted to the RGB domain for the experiments.

Implementation details. We use the image compression anchor model (Cheng et al. 2020) as our main architecture, except for the "Feature Adaptive" modules and the SNR-aware branch. Randomly cropped patches with a resolution of 512×512 pixels are used to optimize the model during the pre-training stage. Our implementation relies on PyTorch (Paszke et al. 2019) and the open-source CompressAI PyTorch library (Bégaint et al. 2020). The networks are optimized using the Adam (Kingma and Ba 2015) optimizer with a mini-batch size of 8 for approximately 900,000 iterations and trained on RTX 3090 GPUs. The initial learning rate is set to 10^-4 and decayed by a factor of 0.5 at iterations 500,000, 600,000, 700,000, and 850,000. The number of pre-training iteration steps is 150,000. We impose a loss cap for each model, so the network skips an optimization mini-step if the training loss is above the specified threshold.

[Figure 4: eight rate-distortion curves (PSNR and MS-SSIM versus bits per pixel) comparing Ours against the six sequential combinations of Cheng/VTM/TCM with Xu2022 in both CbE and EbC orders.] Figure 4: Rate-distortion performance curves aggregated over four test datasets. (a)/(b)/(c)/(d) and (e)/(f)/(g)/(h) are results on SID, SDSD-indoor, SDSD-outdoor, and SMID for PSNR and MS-SSIM, respectively.
Remarkably, we are the first to address the problem of error accumulation and information loss in the joint task of image compression and low-light image enhancement, so there is no existing joint method for comparison; we adopt the low-light enhancement method (Xu et al. 2022) for comparison. The experimental results clearly show that our proposed joint solution achieves great advantages compared to both "Compress before Enhance (CbE)" and "Enhance before Compress (EbC)" sequential solutions.

We train our model under 8 qualities, where λd is selected from the set {0.0001, 0.0002, 0.0004, 0.0008, 0.0016, 0.0028, 0.0064, 0.012}. To verify the performance of the algorithm, the peak signal-to-noise ratio (PSNR) and the multi-scale structural similarity index (MS-SSIM) are used as evaluation metrics. We also compare model size and computational cost. For better visualization, the MS-SSIM is converted to decibels (−10·log10(1 − MS-SSIM)).

Algorithm Performance

Rate-distortion performance. The sequential solutions combine individual models of the state-of-the-art low-light enhancement method Xu2022 (Xu et al. 2022), the state-of-the-art compression method TCM (Liu, Sun, and Katto 2023), the typical learning-based compression method Cheng2020-anchor (Cheng et al. 2020), and the classical codec VVC (Joint Video Experts Team 2021). The proposed joint solution is compared with the following six sequential solutions: (1) "Xu2022 before TCM (EbC)"; (2) "Xu2022 before VTM (EbC)"; (3) "Xu2022 before Cheng (EbC)"; (4) "TCM before Xu2022 (CbE)"; (5) "VTM before Xu2022 (CbE)"; (6) "Cheng before Xu2022 (CbE)". For brevity, "Cheng" denotes the compression method Cheng2020-anchor, and "VTM" denotes the classical codec VVC.

For the image compression methods, we fine-tune the pre-trained Cheng2020-anchor models provided by the CompressAI PyTorch library (Bégaint et al. 2020) and the models provided by TCM (Liu, Sun, and Katto 2023) on the Flicker and paired low-light image training datasets for a fair comparison. VVC is implemented by the official Test Model VTM 12.1 with the intra-profile configuration from the official GitHub page, configured with the YUV444 format to maximize compression performance. For the low-light enhancement method Xu2022, we use the source code obtained from the official GitHub page, fine-tuned on the same paired training datasets for a fair comparison.

[Figure 5: two bar charts comparing FLOPs(G) (0–2000) and Parameters(M) (0–100) of Ours against the Cheng-S/Cheng-L/TCM-S/TCM-M/TCM-L sequential solutions, with Enhance/Compress/Joint cost breakdowns.] Figure 5: Comparison of computational costs and model size. "TCM-S"/"TCM-M"/"TCM-L" denotes the sequential solution of the 64/96/128-channel compression method (Liu, Sun, and Katto 2023) before the low-light image enhancement method (Xu et al. 2022). "Cheng-S"/"Cheng-L" denotes the sequential solution of the 128/192-channel compression method (Cheng et al. 2020) before the low-light image enhancement method (Xu et al. 2022). Obviously, our joint solution has the advantage of lower computational costs and fewer model parameters.
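The metric protocol above is easy to reproduce. A brief sketch, assuming images in [0, 1] and the third-party pytorch-msssim package for MS-SSIM (the package choice is our assumption; the paper does not name its implementation):

```python
import torch
from pytorch_msssim import ms_ssim  # third-party package (an assumption)

def evaluate(x_gt, x_hat):
    """PSNR and MS-SSIM (converted to dB via -10*log10(1 - MS-SSIM),
    as described above) for image tensors in [0, 1]."""
    mse = torch.mean((x_gt - x_hat) ** 2)
    psnr = -10.0 * torch.log10(mse)               # peak value is 1.0
    msssim = ms_ssim(x_gt, x_hat, data_range=1.0)
    msssim_db = -10.0 * torch.log10(1.0 - msssim) # the paper's dB conversion
    return psnr.item(), msssim_db.item()
```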
We show the overall rate-distortion (RD) performance curves on the SID, SDSD-indoor, SDSD-outdoor, and SMID datasets in Figure 4. Our proposed solution (red curves) achieves great advantages in the common metrics PSNR and MS-SSIM. More qualitative results with quantitative metrics are included in the supplementary material. Evidently, error accumulation and loss of information in the individual models plague the sequential solutions; in particular, the useful information lost when compressing low-light images makes it difficult for the low-light image enhancement method to reconstruct pleasing images.

Computational complexity. We compare the computational cost and model size of the proposed joint solution with sequential solutions built from the typical learning-based image compression method Cheng2020-anchor (Cheng et al. 2020), the state-of-the-art learning-based image compression method TCM (Liu, Sun, and Katto 2023), and the low-light image enhancement method Xu2022 (Xu et al. 2022). As shown in Figure 5, the left side of the figure shows the computational cost over an RGB image with a resolution of 960×512, and the right side shows the number of model parameters. In our proposed joint solution, low-light image enhancement and image compression share the same feature extractor/decoder during encoding/decoding. Thus, the proposed joint solution achieves much lower computational costs and fewer model parameters.

[Figure 6: two rate-distortion plots (PSNR and MS-SSIM versus bits per pixel) on SID comparing Ours against the six sequential combinations of SMG with TCM/VTM/Cheng.] Figure 6: We adopt the state-of-the-art low-light enhancement method SMG (Xu, Wang, and Lu 2023) for comparison on the SID dataset. The experimental results show that the proposed joint solution again achieves the greatest advantages compared to the sequential solutions.

Comparison with another enhancement method. To further verify the effectiveness of the joint solution, we also perform comparison experiments with another state-of-the-art low-light image enhancement method, SMG (Xu, Wang, and Lu 2023). The proposed joint solution is compared with the following six sequential solutions: (1) "SMG before TCM (EbC)"; (2) "SMG before VTM (EbC)"; (3) "SMG before Cheng (EbC)"; (4) "TCM before SMG (CbE)"; (5) "VTM before SMG (CbE)"; (6) "Cheng before SMG (CbE)". The comparison results on the SID dataset are shown in Figure 6. It is worth noting that SMG uses a more complex network structure, implying a higher computational cost. The experimental results show that our proposed joint solution consistently has a large advantage over the sequential solutions, indicating that our method can indeed solve the problem of error accumulation and loss of information in sequential solutions.

[Figure 7: two rate-distortion plots (PSNR and MS-SSIM versus bits per pixel) comparing Ours, "Joint Guidance with SNR-aware", and "Joint Guidance without SNR-aware".] Figure 7: The impact of different branches on RD performance. The curves are aggregated on SID. More experimental results are presented in the supplementary material.

Analysis

Impact of the SNR-aware branch.
The SNR-aware branch effectively extracts local and non-local information from the low-light image by being aware of the signal-to-noise ratio, which is crucial for our low-light image enhancement. To verify its effectiveness, we remove the SNR-aware branch and add corresponding network modules to the main enhancement branch to achieve low-light image enhancement. We name this method "Joint Guidance without SNR-aware"; its model architecture is similar to DC (Cheng, Xie, and Chen 2022), and more details are given in the supplementary material. Figure 7 shows that our method outperforms "Joint Guidance without SNR-aware" by a large margin, indicating the significance and importance of the SNR-aware branch (red curve vs. blue curve).

Joint guidance with SNR-aware. To further investigate another training strategy that uses the SNR-aware information, we additionally experiment with a three-branch network architecture (named "Joint Guidance with SNR-aware"), which has an additional teacher-guidance branch during the training stage; details are shown in the supplementary material. The comparison results are shown in Figure 7. The performance of using such a "Teacher Guidance Branch" is slightly worse than our joint solution (red curve vs. yellow curve), while additionally increasing the computational cost during training. That is, our usage of the SNR-aware information is more effective and efficient.

Conclusion

We propose a novel joint solution to make lossy image compression meaningful for low-light images, alleviating the error-accumulation problem that arises when the two tasks are performed in sequential manners. Local and non-local features (obtained by the SNR-aware branch) are fused with the compressed features to generate enhanced features; the enhanced image is then obtained by decoding the enhanced features directly. The experiments show that our proposed joint solution surpasses sequential solutions significantly in terms of PSNR and MS-SSIM, resulting in superior reconstructed image quality for subsequent visual perception. Additionally, it offers lower computational costs and a reduced number of model parameters.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62301228, 62176100, 62376011 and in part by the Special Project of Science and Technology Development of Central Guiding Local of Hubei Province under Grant 2021BEE056. The computation is completed in the HPC Platform of Huazhong University of Science and Technology.

References

Agustsson, E.; Mentzer, F.; Tschannen, M.; Cavigelli, L.; Timofte, R.; Benini, L.; and Gool, L. V. 2017. Soft-to-hard Vector Quantization for End-to-end Learning Compressible Representations. In NeurIPS.
Alves de Oliveira, V.; Chabert, M.; Oberlin, T.; Poulliat, C.; Bruno, M.; Latry, C.; Carlavan, M.; Henrot, S.; Falzon, F.; and Camarero, R. 2022. Satellite Image Compression and Denoising With Neural Networks. IEEE Geoscience and Remote Sensing Letters, 19: 1–5.
Ballé, J.; Laparra, V.; and Simoncelli, E. P. 2016a. Density Modeling of Images Using a Generalized Normalization Transformation. In ICLR.
Ballé, J.; Laparra, V.; and Simoncelli, E. P. 2016b. End-to-end Optimization of Nonlinear Transform Codes for Perceptual Quality. In PCS.
Ballé, J.; Laparra, V.; and Simoncelli, E. P. 2017. End-to-end Optimized Image Compression. In ICLR.
RR-PU: A Synergistic Two-Stage Positive and Unlabeled Learning Framework for Robust Tax Evasion Detection

Shuzhi Cao*1,2, Jianfei Ruan*1,2,†, Bo Dong2,3, Bin Shi1,2, Qinghua Zheng1,2
1 School of Computer Science and Technology, Xi'an Jiaotong University, China
2 Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, China
3 School of Distance Education, Xi'an Jiaotong University, China
[email protected], [email protected], {dong.bo,shibin,qhzheng}@xjtu.edu.cn
*These authors contributed equally. †Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Tax evasion, an unlawful practice in which taxpayers deliberately conceal information to avoid paying tax liabilities, poses significant challenges for tax authorities. Effective tax evasion detection is critical for assisting tax authorities in mitigating tax revenue loss. Recently, machine-learning-based methods, particularly those employing positive and unlabeled (PU) learning, have been adopted for tax evasion detection, achieving notable success. However, these methods exhibit two major practical limitations. First, their success heavily relies on the strong assumption that the label frequency (the fraction of identified taxpayers among tax evaders) is known in advance. Second, although some methods attempt to estimate label frequency using approaches like Mixture Proportion Estimation (MPE) without making any assumptions, they subsequently construct a classifier based on the error-prone label frequency obtained from the previous estimation. This two-stage approach may not be optimal, as it neglects error accumulation in classifier training resulting from the estimation bias in the first stage. To address these limitations, we propose a novel PU learning-based tax evasion detection framework called RR-PU, which can revise the bias in a two-stage synergistic manner. Specifically, RR-PU refines the label frequency initialization by leveraging a regrouping technique to fortify the MPE perspective. Subsequently, we integrate a trainable slack variable to fine-tune the initial label frequency, concurrently optimizing this variable and the classifier to eliminate latent bias in the initial stage. Experimental results on three real-world tax datasets demonstrate that RR-PU outperforms state-of-the-art methods in tax evasion detection tasks.

Introduction
Taxation, as the primary and indispensable source of national fiscal revenue, plays a crucial role in fostering national economic development. Compliance with legal provisions and fulfillment of tax obligations constitute the fundamental responsibilities of taxpayers. Unfortunately, a marked increase in corporate tax evasion has been observed recently, leading to significant adverse impacts on overall national fiscal revenue. Tax evasion, characterized by the intentional concealment and deceptive practices employed by taxpayers to circumvent their tax liabilities, represents an illicit behavior that necessitates immediate attention. Recent empirical research indicates that the extent of tax revenue loss in China is approximately 22% (Tian et al. 2016), while governments worldwide face an annual loss of nearly 500 billion US dollars (De Roux et al. 2018). Hence, it is of paramount importance to develop effective methodologies for the detection of tax evasion practices adopted by taxpayers.
Presently, conventional techniques for detecting tax evasion can be classified into three primary categories: manual case selection, whistle-blowing-based selection, and rule-based methods (Krivko 2010; Baesens, Van Vlasselaer, and Verbeke 2015; Wu et al. 2012; Zheng et al. 2023). However, these approaches suffer from inherent limitations. The first two methods demonstrate limited coverage, as they are unable to encompass all taxpayers, rely substantially on auditors' expertise, and consequently lead to time-consuming and inefficient processes (Ruan et al. 2019). On the other hand, rule-based methods utilize predefined rules to identify suspicious taxpayers. Regrettably, these rules gradually become outdated, rendering the approach less adaptable over time.

To mitigate the limitations of traditional techniques, recent advancements have incorporated machine-learning-based approaches (Shi et al. 2023; Ruan et al. 2019; Wang et al. 2020; Zhang et al. 2020; Wu et al. 2019; Hemberg et al. 2016; Junqué de Fortuny et al. 2014; Abe et al. 2010). These approaches capitalize on fully labeled tax data to extract pertinent features and train tax evasion detection models utilizing machine learning techniques. In contrast to traditional methods, these approaches demonstrate notable performance improvements while diminishing reliance on human effort. Nonetheless, considering the extensive scale of tax data, obtaining fully labeled datasets for training becomes infeasible in real-world situations (Gao et al. 2021; Mi et al. 2020; Zhang et al. 2020). Consequently, only a small subset of instances are identified as tax evaders, while the majority of instances remain unidentified.

To adapt to real-world scenarios, various semi-supervised techniques have been proposed. Among these methods, positive and unlabeled learning (PU learning) (Wu et al. 2019; Zhang et al. 2020; Mi et al. 2020; Gao et al. 2021) has emerged as the most prevalent approach in the tax evasion domain. PU learning techniques capitalize on the available labeled and unlabeled instances directly, obviating the need for additional manual annotation. As a special PU learning problem, tax evasion detection has a unique challenge: the label frequency, representing the proportion of identified instances among tax evaders, is unknown. Nevertheless, contemporary PU learning algorithms rely largely on the assumption that the label frequency is known a priori. As a result, these methods cannot be directly applied to detect tax evasion behaviors. Therefore, developing a proprietary PU learning algorithm capable of automatically estimating the label frequency to effectively detect tax evasion has become an urgent and critical problem.

To surmount this problem, we integrate the mathematical Mixture Proportion Estimation (MPE) technique, which is typically used for anomaly detection, with the PU learning method to estimate the label frequency and finally achieve tax evasion detection. To be specific, we introduce a novel tax evasion detection framework called RR-PU (Regrouping-MPE and Revision-Based Synergistic PU Learning Framework), which capitalizes on the PU learning paradigm. RR-PU is devised as a synergistic two-stage framework that aims to address the latent estimation error of the label frequency and concurrently facilitate classifier learning.
The framework comprises the following stages: (1) Label frequency initialization stage: in this stage, we augment the label frequency initialization process by utilizing an enhanced MPE method predicated on the regrouping methodology; (2) Revision stage: in this stage, we introduce a slack variable to revise the less accurate initial label frequency estimation. By optimizing both the slack variable and the classifier concurrently, we aim to eliminate the latent estimation bias encountered in the first stage. This synergistic learning approach enables the refinement of the label frequency estimation while improving the overall performance of the classifier. RR-PU, as a potent tax auditing instrument that is independent of fully labeled data or an established label frequency, is proficient in surveilling tax evaders. While conserving labor, RR-PU amplifies governmental fiscal influx and underpins market regulation integrity.

Our paper presents several significant contributions, which can be summarized as follows:
• RR-PU: We introduce a novel tax evasion detection framework called RR-PU, which shows its effectiveness in detecting tax evasion without requiring extra manual annotation. RR-PU capitalizes on a small proportion of positive instances along with a substantial amount of unlabeled instances, offering a labor-saving and efficient approach.
• Automatic label frequency estimation: In contrast to existing PU learning-based tax evasion detection methods, RR-PU automatically estimates the label frequency using tax data, obviating the need for extra assumptions. This feature enhances the adaptability of RR-PU in real-world scenarios.
• Synergistic two-stage framework: RR-PU is designed as a synergistic two-stage positive and unlabeled learning framework. It effectively addresses the latent estimation error of the label frequency while concurrently training a classifier. Compared to existing methods, RR-PU avoids the accumulation of errors, resulting in a superior classifier.
• Extensive experimental validation: We conduct experiments on three real-world tax datasets. The results validate the effectiveness of RR-PU in both estimating label frequency and detecting tax evasion behaviors. Moreover, our method outperforms the state-of-the-art (SOTA) tax evasion detection algorithms, establishing its superiority.

Related Work
Tax evasion detection. Tax evasion detection methods can be categorized into four primary groups: manual case selection, whistle-blowing-based selection, rule-based methods (Tian et al. 2016; Krivko 2010; Baesens, Van Vlasselaer, and Verbeke 2015), and machine-learning-based methods (Shi et al. 2023; Ruan et al. 2019; Wang et al. 2020; Zhang et al. 2020; Mi et al. 2020; Gao et al. 2021). The first two methods involve the random selection of taxpayers for subsequent auditing (Tian et al. 2016). However, they do not encompass all taxpayers, and their effectiveness is heavily dependent on auditors' skills, rendering them time-consuming and inefficient (Ruan et al. 2019). As for rule-based methods, experts typically define rules or derive them from historical cases. When a taxpayer's behavior aligns with a defined rule, the rule-based system issues an alert. Nevertheless, maintaining and updating these rules can be challenging, causing the rule-based methods to lack adaptability. To address these issues, machine-learning-based methods have been recently introduced.
In contrast to predefining specific rules, these methods automatically generalize tax evasion behavioral patterns from historical tax data. Consequently, they are not restricted to particular tax evasion behaviors and exhibit superior generalization capabilities. While these methods showcase strong performance, they necessitate a substantial volume of fully labeled instances for training, which can be challenging to acquire in reality. To mitigate this problem, some recent PU learning-based methods have been proposed, which train tax evasion detection models based on limited identified taxpayers and a large number of unidentified taxpayers.

PU learning. Given limited positive instances and a large quantity of unlabeled instances as training data, the objective of PU learning is to train a classifier capable of distinguishing between positive and negative instances in the test data. Existing PU learning methods can be classified into three categories: two-step techniques (Liu et al. 2002; Li and Liu 2003), biased learning (Northcutt, Wu, and Chuang 2017; Patrini et al. 2016), and reweighting methods (Elkan and Noto 2008; Du Plessis, Niu, and Sugiyama 2015; Kiryo et al. 2017; Zhao et al. 2022). Two-step techniques typically identify reliable negative instances among unlabeled instances and then perform ordinary supervised learning. However, such algorithms rely heavily on heuristics, which can introduce additional errors. Biased learning methods treat unlabeled instances as negative instances with class label noise (Li et al. 2021). To mitigate interference from label noise, these methods often impose higher penalties on incorrectly classified positive instances. Nonetheless, the penalties are closely associated with the positive class prior or label frequency, which is generally unknown in practice, rendering these methods challenging to implement. Reweighting methods assign varying weights to different instances, calibrating the inaccurate data distribution to a potentially correct one. However, these algorithms also perform poorly when the positive class prior or label frequency is unknown.

MPE. The MPE problem is a statistical inference problem wherein, given data from a mixture and one of its two components, the objective is to identify the proportion of each component. Notable works in this area include the following. Blanchard et al. (Blanchard, Lee, and Scott 2010) first conducted a systematic study on the MPE problem, identifying it as an ill-posed problem without any assumptions. They introduced the irreducible assumption (Blanchard, Lee, and Scott 2010) to ensure a unique solution for the MPE problem. However, under this assumption, the convergence rate is exceedingly slow (Scott 2015). Subsequently, Ramaswamy et al. (Ramaswamy, Scott, and Tewari 2016) proposed the first computationally feasible algorithms, KM1 and KM2, which enhance the convergence rate by embedding the distributions into a reproducing kernel Hilbert space (RKHS) (Berlinet and Thomas-Agnan 2011). Regrettably, their methods falter when applied to high-dimensional data. To counteract this issue, Jain et al. (Jain et al. 2016) and Ivanov (Ivanov 2020) proposed AlphaMax and DEDPUL, respectively, both of which explore dimensionality reduction techniques for handling high-dimensional data problems. Nevertheless, all these MPE methods rely on the irreducible assumption or its variants.
In reality, the distribution of tax data is more complex and may not conform to these assumptions. Consequently, directly employing these conventional MPE methods for estimating the label frequency is unreliable. Investigating an assumption-free MPE method for estimating the label frequency in taxation scenarios is essential.

Definition and Problem Formulation
In this section, we first present key definitions pertinent to the tax scenario, followed by the systematic formulations of tax evasion detection, PU learning, and the MPE problem.

Definition 1: Compliant Taxpayer. This term refers to a taxpayer who adheres to tax laws and regulations. In the context of this study, Compliant Taxpayers are designated as negative instances.

Definition 2: Tax Evader. A taxpayer who deliberately infringes upon tax laws by engaging in fraudulent practices, concealment, or other illicit activities to evade tax payments is classified as a Tax Evader. For the purpose of this study, Tax Evaders are identified as positive instances.

Definition 3: Identified Taxpayer. In real-world tax auditing scenarios, only a small proportion of Tax Evaders are distinctly identified as such, owing to the substantial cost of labeling. In the framework of PU learning, these individuals are regarded as positive instances.

Definition 4: Unidentified Taxpayer. This categorization refers to taxpayers who have not been labeled by tax auditors. An Unidentified Taxpayer could either be a Compliant Taxpayer or a Tax Evader. In PU learning, Unidentified Taxpayers are denoted as unlabeled instances.

Formulation of Tax Evasion Detection Problem
In the context of tax evasion, a small subset of tax evaders is identified, with a considerable majority of taxpayers remaining unidentified. We designate $X \in \mathbb{R}^d$ as the variable for instances and $Y \in \mathbb{R}$ as the variable for labels. Moreover, $\mathcal{X}$ is defined as the feature space, and $\mathcal{Y} = \{0, 1\}$ is the label space. An instance x ∈ X signifies a taxpayer in real-world taxation scenarios. If y = 1, taxpayer x is classified as a tax evader (positive instance), and if y = 0, as a compliant taxpayer (negative instance). S is introduced as a binary variable indicating whether an instance x is identified; in other words, a taxpayer x is identified if s = 1 and unidentified if s = 0. Given the general unavailability of the true label y, the aim of tax evasion detection is to identify tax evaders based on a limited number of identified taxpayers and a large pool of unidentified taxpayers.

Formulation of PU Learning Problem
In this context, we designate $S_P = \{x_i^p, s_i^p = 1\}_{i=1}^{n_p}$ as the ensemble of identified taxpayers and $S_U = \{x_i^u, s_i^u = 0\}_{i=1}^{n_u}$ as the collective of unidentified taxpayers. Here, $n_p$ and $n_u$ represent the quantities of identified and unidentified taxpayers, respectively. Within the framework of PU learning, $S_P$ and $S_U$ are correspondingly defined as the positive and unlabeled data. The primary objective of PU learning is to develop a binary classifier that is proficient in predicting the posterior probability $P(Y \mid X = x) = [P(Y = 0 \mid X = x), P(Y = 1 \mid X = x)]^\top$, based on the union of $S_P$ and $S_U$. In essence, this probability indicates whether a given taxpayer x is a tax evader. PU learning typically utilizes one of two problem settings, each dependent on the data sampling methodology employed: the single-training-set setting (Gong et al. 2019) and the case-control setting (Niu et al. 2016; Bekker and Davis 2020).
Under the single-training-set setting, it is postulated that all unlabeled training instances are sampled from the marginal density p(x). If a given instance x is positive, its positive label is discerned with a probability c, leaving x unlabeled with a probability of 1 − c. Herein, c = P(S = 1 | Y = 1) symbolizes the label frequency. In contrast, should x be negative, the negative label remains unobserved, thereby consistently leaving x unlabeled. Contrarily, the case-control setting postulates that positive instances and unlabeled instances are independently sampled from the marginal densities p(x | Y = 1) and p(x), respectively, signified by $S_P \sim p(x \mid Y = 1)$ and $S_U \sim p(x)$. In real-world tax scenarios, an exhaustive collection of taxpayer information is initially gathered, after which a portion of taxpayers is designated as tax evaders by auditors. This process of data generation mirrors the methodology of the single-training-set setting, thereby justifying its selection for use in this paper.

Formulation of MPE Problem
Problem definition. Given two distributions G and H over a metric space X, and a parameter κ ∈ (0, 1), let F be a convex combination of G and H, i.e., F = (1 − κ)G + κH. The MPE problem involves determining κ from instances $S_F$ and $S_H$ drawn i.i.d. from the mixture distribution F and the distribution H, respectively. In the tax evasion framework, the known distributions F and H correspond to the probability density functions of unidentified taxpayers and tax evaders, respectively, while the unknown distribution G characterizes the probability density function of compliant taxpayers. The parameter κ represents the fraction of tax evaders among unidentified taxpayers. Upon determination of κ, the label frequency c is readily computable, i.e., c = |P| / (κ|U| + |P|), where |P| and |U| denote the numbers of identified and unidentified taxpayers, respectively. We denote $\kappa(F|H) \triangleq \sup\{\kappa \mid F = (1-\kappa)G + \kappa H\}$ as the maximum proportion of H in F. Consequently, any κ ∈ (0, κ(F|H)) could serve as a feasible solution to the MPE problem (Yao et al. 2020). Without introducing certain assumptions, the MPE problem is ill-posed. To make κ identifiable, a number of propositions concerning distribution G have been presented. Presently, the irreducible assumption (Yao et al. 2021), which is defined as follows, stands as the least constraining of these propositions.

Definition 5: Irreducible Assumption. The distribution G is deemed irreducible with respect to the distribution H if G does not contain H in its mixture. Formally, this condition implies that the decomposition G = (1 − β)Q + βH does not exist, where Q denotes a distribution over the metric space X and 0 < β ≤ 1. Assuming that the unknown distribution G is irreducible with respect to H, the parameter κ will converge to its supremum, represented as κ(F|H). This condition enables the identifiability of the MPE problem under the irreducible assumption. Extant MPE methodologies, encompassing EN (Elkan and Noto 2008), KM (Ramaswamy, Scott, and Tewari 2016), ROC (Scott 2015), AlphaMax (Jain et al. 2016), and DEDPUL (Ivanov 2020), aim to estimate κ(F|H) to approximate the true κ. The irreducible assumption, when valid, results in κ(F|H) being a robust approximation. However, any violation of this assumption could lead to significant estimation errors.
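To make the single-training-set sampling scheme and the κ-to-c conversion above concrete, the following minimal Python sketch simulates the labeling process and recovers c from κ via c = |P| / (κ|U| + |P|). The class prior, label frequency, and sample size here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: 1 = tax evader, 0 = compliant taxpayer.
# The prior (0.3), label frequency (0.5), and size are assumptions.
y = rng.binomial(1, 0.3, size=10_000)

c_true = 0.5                                   # c = P(S = 1 | Y = 1)
# Each tax evader is identified with probability c; negatives stay unlabeled.
s = np.where(y == 1, rng.binomial(1, c_true, size=y.size), 0)

P = np.flatnonzero(s == 1)                     # identified taxpayers
U = np.flatnonzero(s == 0)                     # unidentified taxpayers

kappa = y[U].mean()                            # evader fraction among U
c_back = len(P) / (kappa * len(U) + len(P))    # c = |P| / (kappa|U| + |P|)
print(f"true c = {c_true:.3f}, recovered c = {c_back:.3f}")
```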
Regrouping-MPE. Despite its significance, the irreducible assumption remains challenging to validate due to the unobservable nature of distribution G. This influences the effectiveness of MPE algorithms. To address this issue, the Regrouping-MPE method (Yao et al. 2020) has recently been introduced. This method ingeniously addresses the violation of the irreducible assumption by formulating an entirely new MPE problem, in which a new component distribution H′, generated through a regrouping technique, complies with the irreducible assumption. In this restructured context, the solution to the new MPE problem, expressed as κ′, aligns with the original solution κ. This alignment allows traditional MPE methodologies to resolve the new MPE problem with diminished bias, regardless of whether the original MPE problem adheres to the irreducible assumption or not. While the Regrouping-MPE method is successful in reducing the estimation bias of κ, it continues to construct a classifier based on the label frequency, which is potentially error-prone because it is derived from the estimated κ. This two-stage procedure may not yield optimal results, as it does not account for error accumulation during classifier training that stems from the estimation bias in the initial stage. Hence, the direct application of this algorithm in tax evasion may provide unreliable results. These challenges emphasize the need for a more robust algorithm.

Proposed Method
This section provides an in-depth examination of the RR-PU method, which fundamentally incorporates two principal stages: 1) Label Frequency Initialization, and 2) Revision.

Label Frequency Initialization Stage
In this context, we denote the probability density functions of the positive (tax evaders), negative (compliant taxpayers), and unlabeled (unidentified taxpayers) instance spaces as $f_P(x)$, $f_N(x)$, and $f_U(x)$, respectively. Given the single-training-set configuration, we observe the following equation:

$f_U(x) = \theta^+ f_P(x) + (1 - \theta^+) f_N(x)$,   (1)

Herein, θ+ represents the ratio of positive instances (tax evaders) within the unlabeled instances (unidentified taxpayers). Following the division of the PU training data, denoted $S_{tr}$, into positive data $S_P$ and unlabeled data $S_U$, the task of estimating θ+ from Eq. (1) becomes an MPE problem. To solve this, we employ the Regrouping-MPE method, using $S_P$ and $S_U$ as inputs to the algorithm. Notably, by duplicating $p \times |S_U|$ instances with low negative class-posterior probability from $S_U$ to $S_P$, we generate a novel regrouped distribution, P′, which is irreducible relative to distribution N. A standard MPE solver is subsequently utilized to estimate κ(U|P′), which approximates θ+ with $S_{P'}$ (drawn i.i.d. from distribution P′) and $S_U$. Finally, we derive the estimator θ̂+, which is defined as:

$\hat{\theta}^+ = \kappa(U|P') = \inf_{f_{P'}(x) > 0} \frac{f_U(x)}{f_{P'}(x)}$,   (2)

Upon establishing θ+, the positive class prior π+ = P(Y = 1), which indicates the ratio of tax evaders amongst all taxpayers, can be formulated as $(\theta^+ |S_U| + |S_P|)/(|S_U| + |S_P|)$. Further, in the framework of PU learning, the label frequency, denoted as c, can be expressed as:

$c = P(S=1 \mid Y=1) = \frac{P(S=1, Y=1)}{P(Y=1)} = \frac{P(S=1)}{\pi^+}$,   (3)

The relationship encapsulated in Eq. (3) is valid under the premise that all labeled instances (identified taxpayers) are inherently positive instances (tax evaders), i.e., P(S = 1, Y = 1) = P(S = 1). Here, $P(S = 1) = |S_P| / (|S_P| + |S_U|)$ represents the fraction of labeled instances among the total instances. As a result, the initial label frequency, denoted as ĉ, can be further defined as:

$\hat{c} = \frac{|S_P|}{|S_P| + \hat{\theta}^+ |S_U|}$.   (4)
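As a concrete illustration of this initialization stage, the sketch below chains the regrouping step and Eq. (4). Here `mpe_solve` and `neg_posterior` are hypothetical stand-ins for a standard MPE solver (e.g., a KM- or AlphaMax-style estimator) and for negative class-posterior scores from a preliminary non-traditional classifier; neither is code from the paper.

```python
import numpy as np

def regroup(S_P, S_U, neg_posterior, p=0.10):
    """Duplicate the p*|S_U| unlabeled instances with the lowest negative
    class-posterior probability into the positive set, yielding S_P'."""
    k = int(p * len(S_U))
    idx = np.argsort(neg_posterior)[:k]          # most confidently positive
    return np.concatenate([S_P, S_U[idx]], axis=0)

def init_label_frequency(S_P, S_U, neg_posterior, mpe_solve, p=0.10):
    S_P_prime = regroup(S_P, S_U, neg_posterior, p)
    theta_hat = mpe_solve(S_U, S_P_prime)        # ~ kappa(U | P'), Eq. (2)
    # Initial label frequency, Eq. (4): |S_P| / (|S_P| + theta_hat * |S_U|).
    return len(S_P) / (len(S_P) + theta_hat * len(S_U))
```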
Revision Stage
Consider g(x) as the predictive function producing the actual class posterior probability $P(Y \mid X = x) = [P(Y = 0 \mid X = x), P(Y = 1 \mid X = x)]^\top$, thereby determining whether a taxpayer x is a tax evader. Let q(x) be another predictive function approximating the posterior probability $P(S \mid X = x) = [P(S = 0 \mid X = x), P(S = 1 \mid X = x)]^\top$, ascertaining whether a taxpayer x is labeled. Define $f(x) = \arg\max_{j \in \{0,1\}} g_j(x)$ as the decision function, where $g_j(x)$ represents an estimation of P(Y = j | X = x), and $h(x) = \arg\max_{j \in \{0,1\}} q_j(x)$ as the decision function, where $q_j(x)$ estimates P(S = j | X = x). Allow T(X = x) to symbolize the transition matrix, where $T_{ij}(X = x) = P(S = j \mid Y = i, X = x)$, from which q(x) can be deduced, i.e., $T(X = x)^\top g(x) = q(x)$.

In real-world taxation scenarios, auditors typically select a random subset for further auditing, making the labeling of a tax evader entirely arbitrary. Thus, we adopt the Selected Completely At Random (SCAR) assumption (Li et al. 2019a), which asserts that the label frequency is independent of instance features, i.e., P(S = j | Y = i, X = x) = P(S = j | Y = i), ∀i, j ∈ {0, 1}. Under this premise, the transition matrix estimator, denoted as T̂, can be simplified accordingly as follows:

$\hat{T} = \begin{pmatrix} 1 & 0 \\ 1 - \hat{c} & \hat{c} \end{pmatrix}$.   (5)

Following Eq. (5), we infer that the determination of the initial label frequency ĉ precedes the identification of the transition matrix T̂. Consequently, we utilize $\hat{T}^\top g(x)$ as an approximation of q(x), denoted as $q(x) \approx \hat{q}(x) = \hat{T}^\top g(x)$. Keeping the transition matrix T̂ constant, the backbone network g(x) is updated by minimizing the unweighted risk $R_{unweighted}(g)$, defined as:

$R_{unweighted}(g) = \mathbb{E}[\ell(\hat{q}(x), s)] = \mathbb{E}[\ell(\hat{T}^\top g(x), s)]$,   (6)

Here $\mathbb{E}$ represents the expectation over the joint density p(x, s), and $\ell: \mathbb{R} \times \{0, 1\} \rightarrow \mathbb{R}$ is a specific loss function. We represent the distributions for PU data and clean data (the fully labeled data) as $D_{PU}$ and D, respectively. Leveraging the importance reweighting technique (Liu and Tao 2015), we reformulate the expected risk associated with distribution D as:

$R_{\ell,D}(f) = \mathbb{E}_{(X,Y) \sim D}[\ell(f(X), Y)]
= \mathbb{E}_{(X,S) \sim D_{PU}}\left[\frac{P_D(X, Y)}{P_{D_{PU}}(X, S)} \ell(f(X), S)\right]
= \mathbb{E}_{(X,S) \sim D_{PU}}\left[\frac{P_D(Y|X) P_D(X)}{P_{D_{PU}}(S|X) P_{D_{PU}}(X)} \ell(f(X), S)\right]
= \mathbb{E}_{(X,S) \sim D_{PU}}\left[\frac{P_D(Y|X)}{P_{D_{PU}}(S|X)} \ell(f(X), S)\right]$.   (7)

Eq. (7) establishes that the expected risk corresponding to clean data and the loss ℓ(f(X), Y) is comparable to an expected risk linked to PU data and a weighted loss. Once the transition matrix T is identified, we can express the weighted risk in Eq. (7) as:

$R_{weighted}(T, f) = \mathbb{E}_{(X,S) \sim D_{PU}}\left[\frac{g_{D_{PU}}(X)}{(T^\top g)_{D_{PU}}(X)} \ell(f(X), S)\right]$.   (8)

It is critical to recognize the potential estimation error between the initial label frequency and its true value. To enhance the precision of the estimated label frequency, we introduce a slack variable ΔT, substituting the transition matrix T̂ with (T̂ + ΔT) in Eq. (8). By minimizing $R_{weighted}(\hat{T} + \Delta T, f)$, we achieve synergistic optimization of the backbone network g and the slack variable ΔT. This methodology proves effective as it minimizes the weighted risk, which is asymptotically identical to the expected risk on clean data, leading to a more robust classifier P̂(Y|X). Concurrently, we validate the slack variable on the validation set, ensuring P̂(S|X) aligns with the validation set. More precise P̂(Y|X) and P̂(S|X) facilitate the estimation of ΔT, further mitigating the bias of the initial stage. We detail the implementation of RR-PU in Algorithm 1, with a visual representation provided in Fig. 1.
Algorithm 1: RR-PU
Input: PU training data $S_{tr}$ and PU validation data $S_v$. Output: Binary classifier f and the estimator of the label frequency ĉ.
1: Split the PU training data $S_{tr}$ into positive data $S_P$ and unlabeled data $S_U$.
2: Take $S_U$ and $S_P$ as the inputs $S_F$ and $S_H$ of Regrouping-MPE to estimate the label frequency ĉ.
3: Initialize the transition matrix T̂ and minimize $R_{unweighted}(g)$ to optimize the backbone network g while keeping T̂ fixed.
4: Minimize $R_{weighted}(\hat{T} + \Delta T, f)$ to learn ΔT and f simultaneously. // Stopping criterion: stop when P̂(S|X = x) yields the minimum classification error on the validation set $S_v$.
5: Update the transition matrix T̂ ← T̂ + ΔT and then perform row normalization.
6: Update the label frequency by ĉ ← T̂₁₁.

Figure 1: The overview of the proposed algorithm.
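To sketch how step 4 of Algorithm 1 might be realized, the following PyTorch fragment builds T̂ from ĉ (Eq. (5)), forms q̂(x) = (T̂ + ΔT)⊤g(x), and minimizes the weighted risk of Eq. (8) over both g and the slack variable ΔT. It assumes a backbone g that outputs class-posterior probabilities and takes ℓ to be the negative log-likelihood; names and details are an illustrative reading of the equations, not the authors' implementation.

```python
import torch

def revision_loss(g, x, s, c_hat, delta_T, eps=1e-8):
    """Weighted risk of Eq. (8) with the revised transition matrix
    T = T_hat + delta_T; delta_T is a trainable 2x2 tensor."""
    T_hat = torch.tensor([[1.0, 0.0], [1.0 - c_hat, c_hat]], device=x.device)
    T = T_hat + delta_T                       # revised transition matrix
    g_x = g(x).clamp_min(eps)                 # estimate of P(Y | x), shape (B, 2)
    q_x = (g_x @ T).clamp_min(eps)            # q(x) = T^T g(x), Eqs. (5)-(6)
    g_s = g_x.gather(1, s.view(-1, 1)).squeeze(1)
    q_s = q_x.gather(1, s.view(-1, 1)).squeeze(1)
    w = g_s / q_s                             # importance weights of Eq. (7)
    # Step 3 would instead use delta_T = 0 and the unweighted loss -log q_s.
    return (w * -torch.log(g_s)).mean()       # weighted loss on (x, s)

# Usage sketch: optimize g and delta_T jointly, as in step 4.
# delta_T = torch.zeros(2, 2, requires_grad=True)
# opt = torch.optim.Adam(list(g.parameters()) + [delta_T], lr=5e-5)
```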
Experiments
Datasets and evaluation metrics. In light of the absence of a standard public tax dataset for method evaluation, we procured raw tax data from value-added invoices and taxpayer registration details gathered by tax bureaus spanning diverse regions in China. Basic features such as tax amount, long-term debt, and profit ratio were extracted from the taxpayer registration data, and transaction features were derived from the value-added invoices via PnCGCN (Gao et al. 2021). Consequently, we assembled three real-world tax datasets, TaxS, TaxH, and TaxZ, with all instances being fully labeled. The datasets were divided into training, validation, and test sets following a 3:1:1 ratio. To simulate actual taxation scenarios, we randomly picked 50 percent of the original positive instances, combining them with all negative instances to create unlabeled instances in the training and validation sets. For the test set, instances were assigned their ground truth labels. Comprehensive details about the datasets are further discussed in the Appendix. The adopted evaluation metrics for our experiments include Accuracy, F1-score, and AUC, with their respective definitions outlined in the Appendix.

Comparative methods and experimental setup. RR-PU was compared with contemporary state-of-the-art approaches. Specifically, for label frequency estimation, RR-PU was juxtaposed with MPE methods such as EN (Elkan and Noto 2008), KM1 and KM2 (Ramaswamy, Scott, and Tewari 2016), AlphaMax (AM) (Jain et al. 2016), and DEDPUL (DP) (Ivanov 2020). Furthermore, to evaluate our algorithm's efficacy in detecting tax evasion behaviors, we incorporated the leading tax evasion detection methods FBNE-PU (Gao et al. 2021) and Eagle (Shi et al. 2023), and related PU-learning algorithms including Biased PU (Li et al. 2019b), nnPU (Kiryo et al. 2017), uPU (Du Plessis, Niu, and Sugiyama 2015), RankPruning (Northcutt, Wu, and Chuang 2017), WSVM (Elkan and Noto 2008), VPU (Chen et al. 2020), and Dist-PU (Zhao et al. 2022) for comparison. To guarantee an equitable comparison, recommended hyperparameters were employed for all the comparison methods. Alongside, we set the hyper-parameter p at 10% in Regrouping-MPE, a value confirmed as optimal for minimizing estimation error (Yao et al. 2020). A five-layer multilayer perceptron (MLP) served as the backbone network for all the methods. In the revision stage, the network was trained using the Adam method with a learning rate of 5e-5, weight decay of 3e-4, and batch size of 128. All the experiments were executed using PyTorch on two GPUs (NVIDIA RTX 3090) operating in parallel.

Experimental Results
Label frequency estimation. RR-PU leverages the regrouping technique to fortify the MPE method and introduces a revision stage to bolster the estimation of the label frequency. To corroborate the effectiveness of this two-stage PU learning framework, five SOTA MPE methods are utilized as baseline procedures (base). Simultaneously, experimental analysis is conducted on the baseline augmented with the regrouping technique (with-R), and on the baseline supplemented with both the regrouping technique and the revision stage (with-RR). These methods are analyzed concerning the estimation error of the label frequency. Fig. 2 encapsulates the outcomes of different methods on TaxS, TaxH, and TaxZ, where the X-axis and Y-axis denote the baselines and the estimation error, respectively.

Figure 2: Label frequency estimation error with different methods on TaxS, TaxH, and TaxZ.

Compared with the baselines, the regrouping technique yields a diminished estimation error, which further contracts upon the integration of the revision stage. Thus, both the regrouping technique and the revision stage are integral to the enhancement of estimation accuracy.

Tax evasion detection. To evaluate the performance of RR-PU in identifying tax evaders, one tax evasion detection method and six PU learning methods mentioned above are selected as comparison baselines. Considering that comparison methods such as FBNE-PU, uPU, and nnPU require a known positive class prior and are all designed under a case-control setting, the KM1 estimator is employed in advance to estimate the positive class prior, ensuring a fair comparison. Subsequently, the positive data is reinserted into the unlabeled set to comply with the sampling requirements of these methods. The experimental results are exhibited in Table 1, with the best results emphasized in bold. The results indicate that RR-PU outperforms the baselines in detecting tax evasion behaviors across all datasets.

Figure 3: The ROC curves of different methods on TaxS, TaxH, and TaxZ.
Table 1: Tax evasion detection results on the real-world datasets.

Method       | Accuracy (%)            | F1-score                | AUC
             | TaxS    TaxH    TaxZ   | TaxS    TaxH    TaxZ   | TaxS    TaxH    TaxZ
Biased PU    | 74.081  92.662  95.520 | 0.5612  0.7014  0.8763 | 0.9411  0.9027  0.8754
nnPU         | 70.357  92.569  95.751 | 0.5804  0.7182  0.8816 | 0.8309  0.9170  0.9831
uPU          | 70.644  92.730  95.231 | 0.4881  0.7090  0.8795 | 0.8424  0.9272  0.9827
RankPruning  | 71.867  92.327  95.095 | 0.6184  0.7065  0.8871 | 0.7624  0.9310  0.9454
WSVM         | 77.232  92.415  93.968 | 0.7668  0.7056  0.8037 | 0.8946  0.9426  0.8523
VPU          | 79.365  92.721  95.412 | 0.7679  0.7192  0.8941 | 0.8757  0.9448  0.9890
FBNE-PU      | 80.403  93.327  95.456 | 0.7325  0.6921  0.8850 | 0.9334  0.9547  0.9891
Eagle        | 84.573  92.377  95.642 | 0.7883  0.7197  0.8966 | 0.9376  0.9582  0.9843
Dist-PU      | 86.719  92.851  95.813 | 0.8159  0.7337  0.8965 | 0.9511  0.9514  0.9735
RR-PU        | 90.286  93.658  96.726 | 0.9088  0.7834  0.9182 | 0.9742  0.9643  0.9903

Moreover, receiver operating characteristic (ROC) curves of different methods are illustrated in Fig. 3, wherein the X-axis and Y-axis represent the False Positive Rate (FPR) and True Positive Rate (TPR), respectively. As depicted in Fig. 3, the ROC curve corresponding to RR-PU covers the largest area. These results show the superiority of RR-PU in tax evasion detection.

Ablation experiments. We conducted ablation experiments to underscore the significance of both the regrouping technique and the revision stage in augmenting classifier performance. Specifically, AlphaMax, which yields the minimum estimation error (as per Fig. 2), was chosen as the base MPE method. Adhering to the prior definitions, the performance of the classifier was assessed on the baseline, 'with-R', and 'with-RR', in terms of accuracy and F1-score. The results are delineated in Table 2.

Table 2: The results of ablation experiments.

Method   | Accuracy (%)            | F1-score
         | TaxS    TaxH    TaxZ   | TaxS    TaxH    TaxZ
base     | 87.611  92.471  95.615 | 0.8536  0.7155  0.8893
with-R   | 89.451  92.597  96.161 | 0.8749  0.7268  0.9067
with-RR  | 90.847  93.517  96.734 | 0.8938  0.7645  0.9219

The findings show that the regrouping technique refines the classifier, and its performance is further amplified with the revision stage.

Analysis of Results
The results reveal that RR-PU surpasses the best extant method by 3.567%, 0.0929, and 0.0231 in terms of accuracy, F1-score, and AUC on TaxS, respectively. These enhancements are markedly more considerable than those observed on TaxZ and TaxH. To comprehend the mechanism driving these results, we employed the t-SNE technique to project the tax data onto a two-dimensional plane, thereby visualizing its distribution (as depicted in the Appendix). The pronounced advancement on the TaxS dataset can be elucidated by the following aspects: 1) Breach of irreducibility: The TaxS dataset exhibits notable overlap between positive and unlabeled instances, leading to severe violation of the irreducibility assumption by their corresponding density functions. Direct implementation of MPE methods for estimating the positive class prior, devoid of any modifications, results in substantial estimation errors, causing performance deterioration in methods such as nnPU, uPU, and FBNE-PU; 2) Data separability challenges: The poor separability of the data complicates the development of a non-traditional classifier (NTC) that outputs the posterior probability P(s = 1|x). For comparison methods that rely on an NTC, such as Biased PU, WSVM, RankPruning, and VPU, performance is significantly affected.
Conversely, RR-PU employs the regrouping technique to lessen the estimation error caused by the irreducibility violation and introduces a revision stage to revise the estimator. Hence, RR-PU is less impacted by these adverse factors and achieves superior performance.

Conclusion
In this work, we propose a novel synergistic two-stage PU learning framework, RR-PU, for the robust detection of tax evasion. The first stage employs an enhanced MPE method, exploiting the regrouping technique to initialize the label frequency. Subsequently, RR-PU introduces a slack variable to revise the initially estimated label frequency, simultaneously optimizing this slack variable and the classifier to mitigate potential bias. Extensive experiments show that RR-PU outperforms a range of comparison methods in tax evasion detection. Despite its advantages, RR-PU faces limitations due to its MPE-based design, which inherently relies on kernel density estimation. This reliance becomes a significant challenge when dealing with high-dimensional input, as it can lead to inflated estimation errors that adversely affect RR-PU's effectiveness. In the future, we aim to address these limitations and explore advanced interpretable models for tax evasion detection, striving to provide understandable evidence from the model's findings.

Acknowledgments
This research was partially supported by the Key Research and Development Project in Shaanxi Province No. 2022GXLH-01-03, the National Science Foundation of China under Grant Nos. 62002282, 62250009, and 6219278, and the Major Technological Innovation Project of Hangzhou No. 2022AIZD0113.

References
Abe, N.; Melville, P.; Pendus, C.; Reddy, C. K.; Jensen, D. L.; Thomas, V. P.; Bennett, J. J.; Anderson, G. F.; Cooley, B. R.; Kowalczyk, M.; et al. 2010. Optimizing debt collections using constrained reinforcement learning. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 75–84.
Baesens, B.; Van Vlasselaer, V.; and Verbeke, W. 2015. Fraud Analytics Using Descriptive, Predictive, and Social Network Techniques: A Guide to Data Science for Fraud Detection. John Wiley & Sons.
Bekker, J.; and Davis, J. 2020. Learning from positive and unlabeled data: A survey. Machine Learning, 109(4): 719–760.
Berlinet, A.; and Thomas-Agnan, C. 2011. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media.
Blanchard, G.; Lee, G.; and Scott, C. 2010. Semi-supervised novelty detection. The Journal of Machine Learning Research, 11: 2973–3009.
Chen, H.; Liu, F.; Wang, Y.; Zhao, L.; and Wu, H. 2020. A variational approach for learning from positive and unlabeled data. Advances in Neural Information Processing Systems, 33: 14844–14854.
De Roux, D.; Perez, B.; Moreno, A.; Villamil, M. d. P.; and Figueroa, C. 2018. Tax fraud detection for under-reporting declarations using an unsupervised machine learning approach. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 215–222.
Du Plessis, M.; Niu, G.; and Sugiyama, M. 2015. Convex formulation for learning from positive and unlabeled data. In International Conference on Machine Learning, 1386–1394. PMLR.
Elkan, C.; and Noto, K. 2008. Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 213–220.
Gao, Y.; Shi, B.; Dong, B.; Wang, Y.; Mi, L.; and Zheng, Q. 2021. Tax evasion detection with FBNE-PU algorithm based on PnCGCN and PU learning. IEEE Transactions on Knowledge and Data Engineering.
Gong, C.; Shi, H.; Liu, T.; Zhang, C.; Yang, J.; and Tao, D. 2019. Loss decomposition and centroid estimation for positive and unlabeled learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(3): 918–932.
Hemberg, E.; Rosen, J.; Warner, G.; Wijesinghe, S.; and O'Reilly, U.-M. 2016. Detecting tax evasion: a co-evolutionary approach. Artificial Intelligence and Law, 24(2): 149–182.
Ivanov, D. 2020. DEDPUL: Difference-of-estimated-densities-based positive-unlabeled learning. In 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), 782–790. IEEE.
Jain, S.; White, M.; Trosset, M. W.; and Radivojac, P. 2016. Nonparametric semi-supervised learning of class proportions. arXiv preprint arXiv:1601.01944.
Junqué de Fortuny, E.; Stankova, M.; Moeyersoms, J.; Minnaert, B.; Provost, F.; and Martens, D. 2014. Corporate residence fraud detection. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1650–1659.
Kiryo, R.; Niu, G.; Du Plessis, M. C.; and Sugiyama, M. 2017. Positive-unlabeled learning with non-negative risk estimator. Advances in Neural Information Processing Systems, 30.
Krivko, M. 2010. A hybrid model for plastic card fraud detection systems. Expert Systems with Applications, 37(8): 6070–6076.
Li, T.; Wang, C.-C.; Ma, Y.; Ortal, P.; Zhao, Q.; Stenger, B.; and Hirate, Y. 2019a. Learning classifiers on positive and unlabeled data with policy gradient. In 2019 IEEE International Conference on Data Mining (ICDM), 399–408. IEEE.
Li, T.; Wang, C.-C.; Ma, Y.; Ortal, P.; Zhao, Q.; Stenger, B.; and Hirate, Y. 2019b. Learning classifiers on positive and unlabeled data with policy gradient. In 2019 IEEE International Conference on Data Mining (ICDM), 399–408. IEEE.
Li, X.; and Liu, B. 2003. Learning to classify texts using positive and unlabeled data. In IJCAI, volume 3, 587–592. Citeseer.
Li, X.; Liu, T.; Han, B.; Niu, G.; and Sugiyama, M. 2021. Provably end-to-end label-noise learning without anchor points. In International Conference on Machine Learning, 6403–6413. PMLR.
Liu, B.; Lee, W. S.; Yu, P. S.; and Li, X. 2002. Partially supervised classification of text documents. In ICML, volume 2, 387–394. Sydney, NSW.
Liu, T.; and Tao, D. 2015. Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3): 447–461.
Mi, L.; Dong, B.; Shi, B.; and Zheng, Q. 2020. A tax evasion detection method based on positive and unlabeled learning with network embedding features. In International Conference on Neural Information Processing, 140–151. Springer.
Niu, G.; du Plessis, M. C.; Sakai, T.; Ma, Y.; and Sugiyama, M. 2016. Theoretical comparisons of positive-unlabeled learning against positive-negative learning. Advances in Neural Information Processing Systems, 29.
Northcutt, C. G.; Wu, T.; and Chuang, I. L. 2017. Learning with confident examples: Rank pruning for robust classification with noisy labels. arXiv preprint arXiv:1705.01936.
Patrini, G.; Nielsen, F.; Nock, R.; and Carioni, M. 2016. Loss factorization, weakly supervised learning and label noise robustness. In International Conference on Machine Learning, 708–717. PMLR.
Ramaswamy, H.; Scott, C.; and Tewari, A. 2016. Mixture proportion estimation via kernel embeddings of distributions. In International Conference on Machine Learning, 2052–2060. PMLR.
Ruan, J.; Yan, Z.; Dong, B.; Zheng, Q.; and Qian, B. 2019. Identifying suspicious groups of affiliated-transaction-based tax evasion in big data. Information Sciences, 477: 508–532.
Scott, C. 2015. A rate of convergence for mixture proportion estimation, with application to learning from noisy labels. In Artificial Intelligence and Statistics, 838–846. PMLR.
Shi, B.; Dong, B.; Xu, Y.; Wang, J.; Wang, Y.; and Zheng, Q. 2023. An edge feature aware heterogeneous graph neural network model to support tax evasion detection. Expert Systems with Applications, 213: 118903.
Tian, F.; Lan, T.; Chao, K.-M.; Godwin, N.; Zheng, Q.; Shah, N.; and Zhang, F. 2016. Mining suspicious tax evasion groups in big data. IEEE Transactions on Knowledge and Data Engineering, 28(10): 2651–2664.
Wang, Y.; Zheng, Q.; Ruan, J.; Gao, Y.; Chen, Y.; Li, X.; and Dong, B. 2020. T-EGAT: A temporal edge enhanced graph attention network for tax evasion detection. In 2020 IEEE International Conference on Big Data (Big Data), 1410–1415. IEEE.
Wu, R.-S.; Ou, C.-S.; Lin, H.-y.; Chang, S.-I.; and Yen, D. C. 2012. Using data mining technique to enhance tax evasion detection performance. Expert Systems with Applications, 39(10): 8769–8777.
Wu, Y.; Zheng, Q.; Gao, Y.; Dong, B.; Wei, R.; Zhang, F.; and He, H. 2019. TEDM-PU: A tax evasion detection method based on positive and unlabeled learning. In 2019 IEEE International Conference on Big Data (Big Data), 1681–1686. IEEE.
Yao, Y.; Liu, T.; Han, B.; Gong, M.; Niu, G.; Sugiyama, M.; and Tao, D. 2020. Towards mixture proportion estimation without irreducibility. arXiv preprint arXiv:2002.03673.
Yao, Y.; Liu, T.; Han, B.; Gong, M.; Niu, G.; Sugiyama, M.; and Tao, D. 2021. Rethinking class-prior estimation for positive-unlabeled learning. In International Conference on Learning Representations.
Zhang, F.; Shi, B.; Dong, B.; Zheng, Q.; and Ji, X. 2020. TTED-PU: A transferable tax evasion detection method based on positive and unlabeled learning. In 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), 207–216. IEEE.
Zhao, Y.; Xu, Q.; Jiang, Y.; Wen, P.; and Huang, Q. 2022. Dist-PU: Positive-unlabeled learning from a label distribution perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14461–14470.
Zheng, Q.; Xu, Y.; Liu, H.; Shi, B.; Wang, J.; and Dong, B. 2023. A survey of tax risk detection using data mining techniques. Engineering.
Hierarchical and Incremental Structural Entropy Minimization for Unsupervised Social Event Detection

Yuwei Cao1, Hao Peng2, Zhengtao Yu3, Philip S. Yu1
1 Department of Computer Science, University of Illinois Chicago, Chicago, USA
2 School of Cyber Science and Technology, Beihang University, Beijing, China
3 Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, China
{ycao43, psyu}@uic.edu, [email protected], [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
As a trending approach for social event detection, graph neural network (GNN)-based methods enable a fusion of natural language semantics and the complex social network structural information, thus showing SOTA performance. However, GNN-based methods can miss useful message correlations. Moreover, they require manual labeling for training and predetermining the number of events for prediction. In this work, we address social event detection via graph structural entropy (SE) minimization. While keeping the merits of the GNN-based methods, the proposed framework, HISEvent, constructs more informative message graphs, is unsupervised, and does not require the number of events given a priori. Specifically, we incrementally explore the graph neighborhoods using 1-dimensional (1D) SE minimization to supplement the existing message graph with edges between semantically related messages. We then detect events from the message graph by hierarchically minimizing 2-dimensional (2D) SE. Our proposed 1D and 2D SE minimization algorithms are customized for social event detection and effectively tackle the efficiency problem of the existing SE minimization algorithms. Extensive experiments show that HISEvent consistently outperforms GNN-based methods and achieves the new SOTA for social event detection under both closed- and open-set settings while being efficient and robust.

Introduction
Social event detection serves as a foundation for public opinion mining (Beck et al. 2021), fake news detection (Mehta, Pacheco, and Goldwasser 2022), etc., and is attracting increasing attention in industry and academia. Existing studies (Ren et al. 2022a; Cao et al. 2021; Liu et al. 2020a; Peng et al. 2019, 2022) commonly formalize the task of social event detection as extracting clusters of co-related messages from sequences of social media messages. Recent years have witnessed the booming of social event detection studies (Ren et al. 2023, 2022a; Peng et al. 2022; Cao et al. 2021; Peng et al. 2019) that are based on Graph Neural Networks (GNN) (Kipf and Welling 2017; Veličković et al. 2018; Hamilton, Ying, and Leskovec 2017). These methods typically follow a two-step strategy: they first construct message graphs that contain all the candidate messages, with ones that share common attributes (user mentions, hashtags, named entities, etc.) linked together. Figure 1A.2 shows an example message graph. They then partition the message graph using GNNs, which incorporate the natural language representations of the messages with those of their neighbors. The resulting graph partitions (e.g., Figure 1B.3) serve as the detected social events. Despite their SOTA performance, GNN-based methods merely link messages that share exactly the same attributes. The useful correlations between messages that are semantically close yet have no common attributes are missing.
Furthermore, the GNN components of these models require supervision for training and predetermining the total number of events for prediction. Recent GNN-based methods (Ren et al. 2022a; Peng et al. 2022; Cao et al. 2021), unlike earlier ones (Peng et al. 2019), adopt contrastive learning, inductive learning, and pseudo label generation to alleviate the reliance on labels. However, manual labeling is still necessary for the initial training and periodical maintenance.

In this work, we address the above issues from an information-theoretic perspective. We gain inspiration from structural entropy (SE) (Li and Pan 2016), a metric that assesses the amount of information contained in a graph. Specifically, minimizing one-dimensional (1D) SE discloses the reliable node correlations contained in the raw, noisy graphs and is applied in biomedical studies (Li, Yin, and Pan 2016). We explore message graph neighborhoods via 1D SE minimization and supplement the existing message graph with edges between the semantically close messages. Unlike previous studies (Li, Yin, and Pan 2016; Li et al. 2018), our exploration is conducted in an incremental manner to maximize efficiency. Minimizing higher-dimensional SE decrypts the higher-order structure of the graphs (Li and Pan 2016). Given this, we further partition the message graph via two-dimensional (2D) SE minimization. Though effective and requiring no supervision, 2D SE minimization can be prohibitively slow to perform on complex, large-scale message graphs. We effectively tackle this by customizing a 2D SE minimization algorithm for social event detection. Our algorithm addresses the message correlations in a hierarchical manner: it repeatedly splits the message graph, detects clusters, and combines the clusters into new ones while keeping the previously detected partitions. Our proposed framework, hierarchical and incremental structural entropy minimization-guided social event detector (HISEvent), holds the merits of the GNN-based methods, learns more informative message graphs, and does not require supervision or the number of events given a priori.

Figure 1: The proposed HISEvent framework. A and B are message graph construction and partitioning processes, respectively. An initial message graph A.2 is constructed by linking social messages (A.1) that share common attributes. Further adding semantic-similarity-based edge set Es results in the final message graph (A.3). B.2 shows our proposed hierarchical 2D SE minimization algorithm, which repeatedly detects clusters (P′) from sub-graphs (G′). B.1 shows how clusters are detected in a single sub-graph via vanilla 2D SE minimization. B.3 shows the detected social events.

Experiments on two public Twitter datasets show that HISEvent consistently outperforms strong baselines under both closed- and open-set settings and is the new SOTA for social event detection. We also empirically show the efficiency and robustness of HISEvent as well as the effectiveness of all its components. Our contributions are:
• We address social event detection from an information-theoretic lens.
Compared to the GNN-based methods, the proposed HISEvent learns more informative message graphs and requires no labeled samples or a predetermined number of events. To the best of our knowledge, we are the first to apply SE minimization for social event detection.
• We design novel SE minimization algorithms for social event detection. Besides being effective, HISEvent efficiently runs on complex, large-scale message graphs. HISEvent incrementally and hierarchically minimizes 1D and 2D SE, significantly reducing time complexity compared to the existing SE minimization algorithms.
• We conduct extensive experiments on two large, public Twitter datasets to show the new SOTA performance, efficiency, and robustness of HISEvent. Our code is publicly available at https://github.com/SELGroup/HISEvent.

Preliminary

Structural entropy (SE) (Li and Pan 2016) is defined as the minimum number of bits to encode the vertex that is accessible with a step of random walk on a graph. The SE of a graph measures the complexity of the underlying essential structure and corresponds to an encoding tree. SE can be of different dimensions, which measure the structural information of different orders and correspond to encoding trees of different heights. We present the formal definitions of encoding tree and SE as follows. Notations used in this paper are summarized in Appendix.

Definition 1. (Li and Pan 2016). The encoding tree $\mathcal{T}$ of a graph $G = (V, E)$ is a hierarchical partition of $G$. It is a tree that satisfies the following: 1) Each node $\alpha$ in $\mathcal{T}$ is associated with a set $T_\alpha \subseteq V$. For the root node $\lambda$ of $\mathcal{T}$, $T_\lambda = V$. Any leaf node $\gamma$ in $\mathcal{T}$ is associated with a single node in $G$, i.e., $T_\gamma = \{v\}, v \in V$. 2) For each node $\alpha$ in $\mathcal{T}$, denote all its children as $\beta_1, \dots, \beta_k$; then $(T_{\beta_1}, \dots, T_{\beta_k})$ is a partition of $T_\alpha$. 3) For each node $\alpha$ in $\mathcal{T}$, denote its height as $h(\alpha)$. Let $h(\gamma) = 0$ and $h(\alpha^-) = h(\alpha) + 1$, where $\alpha^-$ is the parent of $\alpha$. The height of $\mathcal{T}$ is $h(\mathcal{T}) = \max_{\alpha \in \mathcal{T}} h(\alpha)$.

Definition 2. (Li and Pan 2016). The structural entropy (SE) of graph $G$ on encoding tree $\mathcal{T}$ is defined as:

$$H^{\mathcal{T}}(G) = -\sum_{\alpha \in \mathcal{T}, \alpha \neq \lambda} \frac{g_\alpha}{vol(\lambda)} \log \frac{vol(\alpha)}{vol(\alpha^-)}, \quad (1)$$

where $g_\alpha$ is the summation of the degrees (weights) of the cut edges of $T_\alpha$ (edges in $E$ that have exactly one endpoint in $T_\alpha$). $vol(\alpha)$, $vol(\alpha^-)$, and $vol(\lambda)$ refer to the volumes, i.e., summations of the degrees of all the nodes, of $T_\alpha$, $T_{\alpha^-}$, and $T_\lambda$, respectively.

The $d$-dimensional SE of $G$, defined as $H^{(d)}(G) = \min_{\forall \mathcal{T}: h(\mathcal{T}) = d} \{H^{\mathcal{T}}(G)\}$, is realized by acquiring an optimal encoding tree of height $d$, in which the disturbance derived from noise or stochastic variation is minimized.
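To make Eq. (1) concrete, below is a minimal sketch (not the authors' released implementation) that evaluates the SE of a two-level encoding tree, i.e., a flat partition of a weighted, undirected graph. The use of networkx and of log base 2 are assumptions of convenience.

```python
import math
import networkx as nx

def two_dim_se(G, partition):
    """SE of a flat `partition` (iterable of node sets) of graph G, per Eq. (1)."""
    vol_root = sum(d for _, d in G.degree(weight="weight"))  # vol(lambda)
    se = 0.0
    for cluster in partition:
        vol_c = sum(d for _, d in G.degree(cluster, weight="weight"))
        # g of the cluster node: total weight of edges with one endpoint outside
        g_c = sum(w for u, v, w in G.edges(cluster, data="weight", default=1.0)
                  if (u in cluster) != (v in cluster))
        # leaf terms: a leaf's g equals its own degree, and its parent is the cluster
        for node in cluster:
            d_n = G.degree(node, weight="weight")
            if d_n > 0:
                se -= d_n / vol_root * math.log2(d_n / vol_c)
        # cluster term: the cluster node's parent is the root lambda
        se -= g_c / vol_root * math.log2(vol_c / vol_root)
    return se

# Toy usage: two triangles joined by one edge form two natural clusters.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
print(two_dim_se(G, [{0, 1, 2}, {3, 4, 5}]))
```

A greedy 2D SE minimizer would repeatedly merge the pair of clusters whose union lowers this quantity the most.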
Figure 1B.1 shows a toy example that constructs and optimizes $\mathcal{T}$ given $G'$.

Methodology

Figure 1 shows an overview of HISEvent. Following the previous methods (Ren et al. 2022a), we adopt a two-step, message graph construction-partitioning strategy. We first formalize the task. Next, we propose to incorporate a novel semantic-similarity-based approach for message graph construction. We then present our unsupervised message graph partitioning. Finally, we analyze the time complexity. HISEvent, as a batched (retrospective) method, can be easily extended to streaming scenarios (discussed in Appendix).

Problem Formalization

Given a sequence of social messages $m_1, \dots, m_N$ as input, the task of social event detection can be fulfilled by constructing and partitioning a message graph $G = (V, E)$. The node set $V = \{m_1, \dots, m_N\}$. The edge set $E$ is initially empty and is to be expanded by the message graph construction process. Partitioning $G$ results in $\{e_1, \dots, e_M\}$, $e_i \subset V$, $e_i \cap e_j = \emptyset$, which is a partition of $V$ containing $M$ clusters (sets) of messages that correspond to the $M$ detected social events.

Message Graph Construction with Incremental 1D SE Minimization

Ideally, the edges in the message graph should faithfully reflect the reliable message correlations while eliminating the noisy ones. Following the GNN-based studies (Ren et al. 2022a; Cao et al. 2021), we capture the common-attribute-based message correlations, visualized in Figure 1A. Specifically, for each message $m_i$, we extract its attributes $A_i = \{u_i\} \cup \{um_{i1}, um_{i2}, \dots\} \cup \{h_{i1}, h_{i2}, \dots\} \cup \{ne_{i1}, ne_{i2}, \dots\}$, where the RHS refers to a union of the sender, mentioned users, hashtags, and named entities associated with $m_i$. We add an edge $(m_i, m_j)$ into Ea iff $m_i$ and $m_j$ share some common attributes, i.e., $Ea = \{(m_i, m_j) \mid A_i \cap A_j \neq \emptyset\}$.

Ea alone, however, can miss useful correlations, as there are messages that have similar semantics yet share no common attributes. To mitigate this, we supplement the message graph with semantic-similarity-based edges, denoted as Es. The similarity between two messages can be measured by embedding them via pre-trained language models (PLMs), i.e., SBERT (Reimers and Gurevych 2019), and then calculating the cosine similarity between their representations. (Before embedding, we preprocess the message contents by filtering out URLs, extra characters, emotion icons, and user IDs, which we believe don't have clear natural language semantics.) The idea is to link each message to its k-nearest neighbors, where $k$ needs to be carefully chosen to keep only the reliable connections. 1D SE minimization has been applied in biomedical studies (Li, Yin, and Pan 2016) to select the most correlated neighbors. Nonetheless, (Li, Yin, and Pan 2016) calculates the 1D SE from scratch for every candidate $k$, which is inefficient.

We propose incremental 1D SE minimization for correlated neighbor selection. Specifically, we start with $Es = \emptyset$ and incrementally insert sets of edges into $G$, with the $k$-th set (referred to as the $k$-NN edge set) containing edges between each node and its $k$-th nearest neighbor. The initial 1D SE with $k = 1$ is:

$$H^{(1)}(G) = -\sum_{i=1}^{|V|} \frac{d_i}{vol(\lambda)} \log \frac{d_i}{vol(\lambda)}, \quad (2)$$

and the successive updates follow:

$$H^{(1)'}(G) = \frac{vol(\lambda)}{vol'(\lambda)} \left( H^{(1)}(G) - \log \frac{vol(\lambda)}{vol'(\lambda)} \right) + \sum_{j=1}^{|a_k|} \left( \frac{d_j}{vol'(\lambda)} \log \frac{d_j}{vol'(\lambda)} - \frac{d'_j}{vol'(\lambda)} \log \frac{d'_j}{vol'(\lambda)} \right), \quad (3)$$

where $d_i$ and $d'_i$ denote the original and updated degrees (weighted) of node $i$ in $G$ before and after the insertion of the $k$-NN edge set, respectively. Initially, $d_i$ is calculated with $i$ linking to its 1st nearest neighbor. $a_k$ is a set of nodes whose degrees are affected by the insertion of the $k$-NN edge set. $vol(\lambda)$ and $vol'(\lambda)$ stand for the volumes of $G$ before and after inserting the $k$-NN edge set. $H^{(1)}(G)$ and $H^{(1)'}(G)$ stand for the original and updated 1D SE. The derivation of Equation 3 is in Appendix.
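The update rule of Eq. (3) only touches the affected nodes, which is what makes the neighbor search incremental. A minimal numpy sketch of Eqs. (2) and (3) follows (assumptions: log base 2, and the caller tracks degrees and volumes as the k-NN edge sets are inserted):

```python
import numpy as np

def initial_1d_se(deg):
    """Eq. (2): 1D SE from the weighted degree vector of G (k = 1)."""
    vol = deg.sum()
    p = deg[deg > 0] / vol
    return -(p * np.log2(p)).sum()

def update_1d_se(h_prev, vol_prev, vol_new, d_old, d_new):
    """Eq. (3): update after inserting the k-NN edge set.

    d_old / d_new are the degrees of only the affected nodes (the set a_k),
    before and after the insertion; all degrees are assumed positive.
    """
    ratio = vol_prev / vol_new
    h = ratio * (h_prev - np.log2(ratio))      # rescale the untouched terms
    h += ((d_old / vol_new) * np.log2(d_old / vol_new)
          - (d_new / vol_new) * np.log2(d_new / vol_new)).sum()
    return h
```

Each update costs O(|a_k|) instead of the O(|V|) of recomputing Eq. (2) from scratch, which is the saving that Algorithm 1 below exploits.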
With the above initialization and update rules, selecting the proper $k$ then follows Algorithm 1.

Algorithm 1: Determine Es via incremental 1D SE minimization.
Input: Message graph node set $V$
Output: Semantic-similarity-based edge set Es
1: $SEs \leftarrow \emptyset$
2: Embed $V$ via PLM and get $\{h_{m_i}\}_{i=1}^{|V|}$
3: for $i = 1, \dots, |V|$ do // Sort neighbors
4:   $neighb_{m_i} = (m_j)_{j=1}^{|V|}$ s.t. $j \neq i$ and $Cos(h_{m_i}, h_{m_{j-1}}) > Cos(h_{m_i}, h_{m_j})$
5: $E \leftarrow$ {1st element in $neighb_{m_i}$}$_{i=1}^{|V|}$
6: Calculate $H^{(1)}(G)$ via Eq. 2
7: Append $H^{(1)}(G)$ to $SEs$
8: $k = 2$
9: while $k < |V|$ do // Search for the 1st stable point
10:   $E = E \cup$ {$k$-th element in $neighb_{m_i}$}$_{i=1}^{|V|}$
11:   Calculate $H^{(1)'}(G)$ via Eq. 3
12:   Append $H^{(1)'}(G)$ to $SEs$
13:   if $(k-1)$ is a stable point* then
14:     break
15:   $k = k + 1$
16: $Es \leftarrow \{(m_i, m_j) \mid m_j \in$ the first $(k-1)$ elements in $neighb_{m_i}\}_{i=1}^{|V|}$
17: return Es
*$(k-1)$ is a stable point if the $(k-1)$-th element in $SEs$ is smaller than the elements before and after it.

Compared to (Li, Yin, and Pan 2016), the time needed for inspecting each candidate $k$ (lines 10-12) is reduced from $O(|V|)$ to $O(|a_k|)$ ($|a_k| \leq |V|$ always holds). Another difference is that HISEvent only uses Es as a supplementation to Ea. We, therefore, adopt the first stable point (lines 13-14) instead of the global one. The overall running time, due to lines 3-4, is $O(|V|^2)$.

Finally, we set $E = Ea \cup Es$. For each edge $(m_i, m_j)$, we then set its weight $w_{ij} = \max(\cos(h_{m_i}, h_{m_j}), 0)$, where $h_{m_i}$ and $h_{m_j}$ denote the embeddings of $m_i$ and $m_j$ learned via PLMs. This accomplishes the construction of the message graph. HISEvent incorporates not only the common-attribute-based message correlations but also the semantic-similarity-based ones. It constructs more informative message graphs compared to the previous studies (Ren et al. 2022a; Peng et al. 2022; Cao et al. 2021).

Event Detection via Hierarchical 2D SE Minimization

Message graph partitioning decodes $G$ into $P$, which contains the detected events in the form of message clusters. A faithful decoding of the message correlations in $G$ assigns related messages to the same cluster and unrelated ones to different clusters. Previous GNN-based detectors (Ren et al. 2022a; Cao et al. 2021) learn to properly partition message graphs through training, which requires costly sample labeling and the number of events a priori. To address this issue, HISEvent conducts unsupervised partitioning under the guidance of 2D SE minimization, which eliminates the noise and reveals the essential 2nd-order (cluster-wise) structure underneath the raw graph with no prior knowledge of the number of event clusters. (Li and Pan 2016) proposes a vanilla greedy 2D SE minimization algorithm that repeatedly merges any two nodes in the encoding tree $\mathcal{T}$ that would result in the largest decrease in 2D SE until it reaches the minimum possible value. Hence it partitions a graph without supervision or a predetermined total number of clusters. We illustrate this algorithm in Appendix. This vanilla 2D SE minimization, however, takes $O(|V|^3)$ to run. Though it works for small bioinformatics graphs (Wu et al. 2022), it is prohibitively slow for the large, complex message graphs (demonstrated by the Experiments section). To address this, we propose to minimize 2D SE and detect events in a hierarchical manner, shown in Algorithm 2. Specifically, each message is initially in its own cluster (line 1). We split the clusters into subsets of size $n$ (line 3) and merge the clusters involved in each subset using the vanilla greedy algorithm to get new clusters (lines 5-13). The new clusters are then passed on to the next iteration (line 14). This process is repeated until the clusters that contain all the messages are considered simultaneously (lines 15-16). If, at some point, none of the clusters in any subset can be merged, we increase $n$ so that more clusters can be considered in the same subset and, therefore, may be merged (lines 17-18).
Figure 1B visualizes this process: $m_1$ to $m_9$ are initially in their own clusters. $n = 3$ clusters are considered at a time to form a $G'$. Clusters in each $G'$ are then merged via vanilla 2D SE minimization to get $P'$ (Figure 1B.1). The partitions resulting from the previous iteration are passed on to the later iteration, as indicated by the blue curved arrows in Figure 1B.2. The process terminates when a $P'$ that involves all the messages is determined.

Algorithm 2: Event detection via hierarchical 2D SE minimization.
Input: Message graph $G = (V, E)$, sub-graph size $n$
Output: A partition $P$ of $V$
1: $P \leftarrow (m \mid m \in V)$
2: while True do
3:   $\{P_s\} \leftarrow$ consecutively remove the first $\min(n,$ size of the remaining part of $P)$ clusters from $P$ that form a set $P_s$
4:   for $P_s \in \{P_s\}$ do
5:     $V' \leftarrow$ combine all the clusters in $P_s$
6:     $E' \leftarrow \{e \in E$, both endpoints of $e \in V'\}$
7:     $G' \leftarrow (V', E')$
8:     $\mathcal{T}' \leftarrow$ add a root tree node $\lambda$
9:     for cluster $C \in P_s$ do
10:       Add a tree node $\alpha$ to $\mathcal{T}'$, s.t. $\alpha^- = \lambda$, $T_\alpha = C$
11:       for message $m \in C$ do
12:         Add a tree node $\gamma$ to $\mathcal{T}'$, s.t. $\gamma^- = \alpha$, $T_\gamma = \{m\}$
13:     $P' \leftarrow$ run vanilla 2D SE minimization (see Appendix) on $G'$, with the initial encoding tree set to $\mathcal{T}'$
14:     Append $P'$ to $P$
15:   if $|\{V'\}| = 1$ then
16:     Break
17:   if $P$ is the same as at the end of the last iteration then
18:     $n \leftarrow 2n$
19: return $P$

With a running time of $O(n^3)$, Algorithm 2 is much more efficient than its vanilla predecessor, as $n$ is a hyperparameter that can be set to $\ll |V|$. To summarize, HISEvent detects social events from the complex message graphs in an effective and unsupervised manner.

Time Complexity of HISEvent

The overall time complexity of HISEvent is $O(|Ea| + |V|^2 + n^3)$, where $|Ea|$ is the total number of common-attribute-based edges in the message graph, $|V|$ is the total number of nodes (i.e., messages), and $n$ is the sub-graph size, a hyperparameter that can be set to $\ll |V|$. Specifically, the running time of constructing Ea is $O(|Ea|)$. The running time of constructing the semantic-similarity-based edge set Es is $O(|V|^2)$. The running time of detecting social events from the constructed message graph is $O(n^3)$. Note HISEvent can be easily parallelized (discussed in Appendix).
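A compressed structural sketch of Algorithm 2 is given below; it is not the authors' implementation, and the vanilla greedy 2D SE minimizer of Li and Pan (2016) is not reproduced here but passed in as a callable:

```python
def hierarchical_partition(G, merge_by_vanilla_2dse, n=100):
    """G: a networkx-style graph; merge_by_vanilla_2dse(G', clusters) -> merged clusters."""
    clusters = [{m} for m in G.nodes]          # line 1: singleton clusters
    while True:
        new_clusters = []
        saw_all_at_once = len(clusters) <= n   # one subset covers everything
        for i in range(0, len(clusters), n):   # line 3: subsets of <= n clusters
            subset = clusters[i:i + n]
            sub_g = G.subgraph(set().union(*subset))   # lines 5-7: build G'
            new_clusters.extend(merge_by_vanilla_2dse(sub_g, subset))  # line 13
        if saw_all_at_once:                    # lines 15-16: terminate
            return new_clusters
        if len(new_clusters) == len(clusters): # lines 17-18: nothing merged
            n *= 2
        clusters = new_clusters                # line 14: pass partitions onward
```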
Dataset   | Metric | KPGNN* | QSGNN* | EventX | BERT* | SBERT* | HISEvent | Improv. (%)
Event2012 | ARI    | 0.22   | 0.22   | 0.05   | 0.12  | 0.17   | 0.50     | ↑127
Event2012 | AMI    | 0.52   | 0.53   | 0.19   | 0.43  | 0.73   | 0.81     | ↑11
Event2018 | ARI    | 0.15   | 0.16   | 0.03   | 0.05  | 0.11   | 0.44     | ↑175
Event2018 | AMI    | 0.44   | 0.44   | 0.16   | 0.34  | 0.62   | 0.66     | ↑6

Table 1: Closed-set results. * marks results acquired with the ground truth event numbers.

Blocks (#events) | M1 (41)   | M2 (30)   | M3 (33)   | M4 (38)   | M5 (30)   | M6 (44)   | M7 (57)
Metric           | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI
KPGNN*           | 0.07 0.37 | 0.76 0.78 | 0.58 0.74 | 0.29 0.64 | 0.47 0.71 | 0.72 0.79 | 0.12 0.51
QSGNN*           | 0.07 0.41 | 0.77 0.80 | 0.59 0.76 | 0.29 0.68 | 0.48 0.73 | 0.73 0.80 | 0.12 0.54
EventX           | 0.01 0.06 | 0.45 0.29 | 0.09 0.18 | 0.07 0.19 | 0.04 0.14 | 0.14 0.27 | 0.02 0.13
BERT*            | 0.03 0.35 | 0.65 0.76 | 0.45 0.72 | 0.19 0.58 | 0.36 0.67 | 0.45 0.75 | 0.07 0.50
SBERT*           | 0.03 0.38 | 0.73 0.85 | 0.68 0.87 | 0.36 0.80 | 0.61 0.85 | 0.53 0.83 | 0.09 0.61
HISEvent         | 0.08 0.44 | 0.79 0.88 | 0.95 0.94 | 0.50 0.84 | 0.62 0.85 | 0.86 0.90 | 0.27 0.68
Improv. (%)      | ↑14  ↑7   | ↑3   ↑4   | ↑40  ↑8   | ↑39  ↑5   | ↑2   →    | ↑18  ↑8   | ↑125 ↑11

Blocks (#events) | M8 (53)   | M9 (38)   | M10 (33)  | M11 (30)  | M12 (42)  | M13 (40)  | M14 (43)
Metric           | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI
KPGNN*           | 0.60 0.76 | 0.46 0.71 | 0.70 0.78 | 0.49 0.71 | 0.48 0.66 | 0.29 0.67 | 0.42 0.65
QSGNN*           | 0.59 0.75 | 0.47 0.75 | 0.71 0.80 | 0.49 0.72 | 0.49 0.68 | 0.29 0.66 | 0.41 0.66
EventX           | 0.09 0.21 | 0.07 0.19 | 0.13 0.24 | 0.16 0.24 | 0.07 0.16 | 0.04 0.16 | 0.10 0.14
BERT*            | 0.51 0.74 | 0.34 0.71 | 0.55 0.78 | 0.26 0.62 | 0.31 0.56 | 0.13 0.57 | 0.24 0.55
SBERT*           | 0.65 0.86 | 0.47 0.83 | 0.62 0.85 | 0.49 0.82 | 0.63 0.85 | 0.24 0.70 | 0.40 0.77
HISEvent         | 0.74 0.89 | 0.65 0.88 | 0.87 0.90 | 0.62 0.82 | 0.82 0.90 | 0.46 0.78 | 0.85 0.88
Improv. (%)      | ↑14  ↑3   | ↑38  ↑6   | ↑23  ↑6   | ↑27  →    | ↑30  ↑6   | ↑59  ↑11  | ↑102 ↑14

Blocks (#events) | M15 (42)  | M16 (27)  | M17 (35)  | M18 (32)  | M19 (28)  | M20 (34)  | M21 (32)
Metric           | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI
KPGNN*           | 0.17 0.54 | 0.66 0.77 | 0.43 0.68 | 0.47 0.66 | 0.51 0.71 | 0.51 0.68 | 0.20 0.57
QSGNN*           | 0.17 0.55 | 0.65 0.76 | 0.44 0.69 | 0.48 0.68 | 0.50 0.70 | 0.51 0.69 | 0.21 0.58
EventX           | 0.01 0.07 | 0.08 0.19 | 0.12 0.18 | 0.08 0.16 | 0.07 0.16 | 0.11 0.18 | 0.01 0.10
BERT*            | 0.07 0.43 | 0.43 0.71 | 0.22 0.56 | 0.24 0.52 | 0.28 0.59 | 0.32 0.60 | 0.17 0.54
SBERT*           | 0.17 0.67 | 0.50 0.78 | 0.35 0.77 | 0.52 0.81 | 0.54 0.83 | 0.52 0.80 | 0.24 0.70
HISEvent         | 0.27 0.72 | 0.83 0.87 | 0.56 0.81 | 0.70 0.80 | 0.63 0.87 | 0.69 0.81 | 0.45 0.69
Improv. (%)      | ↑59  ↑7   | ↑26  ↑12  | ↑27  ↑5   | ↑35  ↓1   | ↑17  ↑5   | ↑33  ↑1   | ↑88  ↓1

Table 2: Open-set results on Event2012. * marks results acquired with the ground truth event numbers.

Experiments

We conduct extensive experiments to compare HISEvent to various baselines and show the effectiveness of its components. We further analyze the efficiency as well as hyperparameter sensitivity of HISEvent and present a case study.

Experimental Setup

Datasets. We experiment on two large, public Twitter datasets, i.e., Event2012 (McMinn, Moshfeghi, and Jose 2013) and Event2018 (Mazoyer et al. 2020). Event2012 contains 68,841 English tweets related to 503 events, spreading over four weeks. Event2018 contains 64,516 French tweets about 257 events that were sent within a span of 23 days. We evaluate under both closed- and open-set settings by adopting the data splits of Ren et al. 2022a and Cao et al. 2021. The former simultaneously considers all the events, while the latter assumes the events happen over time and splits the datasets into day-wise message blocks (e.g., M1 to M21 in Event2012). Data statistics are in Appendix.

Baselines. We compare HISEvent to KPGNN (Cao et al. 2021), a GNN-based social event detector, QSGNN (Ren et al. 2022a), which improves upon KPGNN using restricted pseudo labels and is the current SOTA, and EventX (Liu et al. 2020a), a non-GNN-based social event detector that leverages community detection. We also experiment on PLMs, i.e., BERT (Kenton and Toutanova 2019) and SBERT (Reimers and Gurevych 2019): we first input the preprocessed message contents to PLMs to learn message embeddings and then apply K-means clustering on the message embeddings to acquire events, i.e., message clusters. Note that KPGNN and QSGNN are supervised. KPGNN, QSGNN, BERT, and SBERT require the total number of events to be specified a priori, which is impractical. HISEvent, in contrast, is unsupervised and does not need the total number of events as an input. Also note we omit the direct comparison with various techniques that are outperformed by the baselines, i.e., TF-IDF (Bafna, Pramod, and Vaidya 2016), LDA (Blei, Ng, and Jordan 2003), WMD (Kusner et al. 2015), LSTM (Graves and Schmidhuber 2005), word2vec (Mikolov et al. 2013), co-clustering (Dhillon, Mallela, and Modha 2003), NMF (Xu, Liu, and Gong 2003), etc. Implementation details are in Appendix.
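As a reference point for the BERT and SBERT baselines just described, the following sketch embeds the preprocessed messages with a PLM and clusters them with K-means. The checkpoint name is an illustrative assumption, and note that, unlike HISEvent, this pipeline needs the true number of events:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def plm_kmeans_baseline(messages, n_events):
    model = SentenceTransformer("all-MiniLM-L6-v2")   # hypothetical checkpoint
    embeddings = model.encode(messages)               # one vector per message
    return KMeans(n_clusters=n_events, n_init=10).fit_predict(embeddings)
```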
Blocks   | M1        | M2        | M3        | M4        | M5        | M6        | M7        | M8
Metric   | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI
KPGNN*   | 0.17 0.54 | 0.18 0.55 | 0.15 0.55 | 0.17 0.55 | 0.21 0.57 | 0.21 0.57 | 0.30 0.61 | 0.20 0.57
QSGNN*   | 0.18 0.56 | 0.19 0.57 | 0.17 0.56 | 0.18 0.57 | 0.23 0.59 | 0.21 0.59 | 0.30 0.63 | 0.19 0.55
EventX   | 0.02 0.11 | 0.02 0.12 | 0.01 0.11 | 0.06 0.14 | 0.13 0.24 | 0.08 0.15 | 0.02 0.12 | 0.09 0.21
BERT*    | 0.16 0.42 | 0.21 0.44 | 0.22 0.44 | 0.17 0.41 | 0.31 0.56 | 0.23 0.49 | 0.23 0.49 | 0.24 0.50
SBERT*   | 0.20 0.60 | 0.29 0.61 | 0.34 0.63 | 0.23 0.60 | 0.47 0.76 | 0.41 0.73 | 0.29 0.65 | 0.50 0.75
HISEvent | 0.55 0.77 | 0.67 0.79 | 0.47 0.74 | 0.46 0.72 | 0.66 0.82 | 0.61 0.83 | 0.56 0.81 | 0.82 0.90
Improv. (%) | ↑175 ↑28 | ↑131 ↑30 | ↑38 ↑17 | ↑100 ↑20 | ↑40 ↑8 | ↑49 ↑14 | ↑87 ↑25 | ↑64 ↑20

Blocks   | M9        | M10       | M11       | M12       | M13       | M14       | M15       | M16
Metric   | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI  | ARI  AMI
KPGNN*   | 0.10 0.46 | 0.18 0.56 | 0.16 0.53 | 0.17 0.56 | 0.28 0.60 | 0.43 0.65 | 0.25 0.58 | 0.13 0.50
QSGNN*   | 0.13 0.46 | 0.19 0.58 | 0.20 0.59 | 0.20 0.59 | 0.27 0.58 | 0.44 0.67 | 0.27 0.61 | 0.13 0.50
EventX   | 0.07 0.16 | 0.07 0.19 | 0.06 0.18 | 0.09 0.20 | 0.06 0.15 | 0.11 0.22 | 0.11 0.22 | 0.01 0.10
BERT*    | 0.17 0.42 | 0.19 0.46 | 0.18 0.48 | 0.32 0.54 | 0.18 0.40 | 0.27 0.52 | 0.28 0.53 | 0.21 0.43
SBERT*   | 0.23 0.63 | 0.39 0.72 | 0.31 0.70 | 0.54 0.76 | 0.34 0.65 | 0.43 0.68 | 0.40 0.71 | 0.25 0.65
HISEvent | 0.65 0.73 | 0.51 0.80 | 0.44 0.79 | 0.86 0.88 | 0.83 0.89 | 0.80 0.89 | 0.70 0.84 | 0.37 0.73
Improv. (%) | ↑183 ↑16 | ↑31 ↑11 | ↑42 ↑13 | ↑59 ↑16 | ↑144 ↑37 | ↑82 ↑31 | ↑75 ↑18 | ↑48 ↑12

Table 3: Open-set results on Event2018. * marks results acquired with the ground truth event numbers.

Setting          | Closed-set     | Open-set (Avg.)
Metric           | ARI   AMI      | ARI   AMI
HISEvent         | 0.50  0.81     | 0.63  0.82
−Es              | 0.24  0.58     | 0.40  0.60
−Ea              | 0.42  0.80     | 0.51  0.77
HISEvent-BERT    | 0.25  0.65     | 0.52  0.69
HISEvent-vanilla | takes >10 days | 0.62  0.82

Table 4: Ablation study on Event2012. −Es removes the semantic-similarity-based Es and simply relies on the common-attribute-based Ea to capture message correlations. Similarly, −Ea relies solely on Es (unlike in the message graph construction above, here we use the global rather than the first stable point since Es is no longer a supplementation but aims to fully capture the message correlations). HISEvent-BERT uses BERT rather than SBERT to measure the edge weights. HISEvent-vanilla partitions the message graph via vanilla 2D SE minimization instead of our proposed hierarchical one.

Evaluation Metrics. We measure adjusted mutual information (AMI), adjusted Rand index (ARI), and normalized mutual information (NMI, in Appendix), which are broadly used by the previous studies (Cao et al. 2021).
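The two headline metrics can be computed directly with scikit-learn, e.g.:

```python
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

def evaluate_partition(labels_true, labels_pred):
    """Scores a predicted event partition against ground-truth event labels."""
    return {"ARI": adjusted_rand_score(labels_true, labels_pred),
            "AMI": adjusted_mutual_info_score(labels_true, labels_pred)}
```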
Overall Performance

Tables 1 - 3 show the social event detection performance. HISEvent consistently outperforms the highest baseline by large margins on both datasets across the closed- and open-set settings. E.g., on Event2018, HISEvent improves ARI and AMI upon SBERT, the strongest baseline, by 175% and 6% under the closed-set setting and by 77% and 19% on average under the open-set setting. This verifies that HISEvent better explores the message semantics and the social network structure. Meanwhile, a comparison between the baselines indicates that the quality of the message embeddings matters: SBERT outperforms BERT and the GNN-based methods. Besides being effective and unsupervised, HISEvent does not require predetermining the number of events. This is essential as the number of events is difficult to predict. E.g., in Table 2, the ground truth number of events varies from 27 to 57 and can drop from 42 in M15 to 27 in M16 and rise from 30 in M5 to 44 in M6 between consecutive periods. In contrast, KPGNN and QSGNN require labeled samples, while KPGNN, QSGNN, BERT, and SBERT need the total number of events given a priori, which is impossible in practice. In short, HISEvent is more practical than the baselines and is the new SOTA.

Ablation Study

Table 4 presents the ablation studies on Event2012. All the components of HISEvent help. In particular, Es, absent in KPGNN and QSGNN, is essential for HISEvent's good performance. E.g., −Es underperforms HISEvent by 52% and 28% in ARI and AMI, respectively, in the closed-set experiment. Also note HISEvent-BERT significantly outperforms BERT (shown in Tables 1 and 2), indicating that HISEvent works regardless of the choice of PLM. Meanwhile, we observe that HISEvent-BERT underperforms HISEvent, indicating that PLM embeddings that are of high quality and, in particular, faithfully reflect messages' semantic similarities (i.e., SBERT embeddings) are indispensable for HISEvent's good performance. Adopting embeddings unsuitable for message similarity measuring (i.e., BERT embeddings), on the other hand, can lead to a decrease in performance (further discussed in Appendix). A comparison to HISEvent-vanilla shows that HISEvent, which adopts the proposed hierarchical 2D SE minimization algorithm, significantly improves efficiency without sacrificing performance: it performs on par with HISEvent-vanilla but is orders of magnitude faster (further verified in the next section).

Efficiency of HISEvent

We compare the efficiency of the proposed hierarchical 2D SE minimization to its vanilla predecessor. Figure 2 shows their time consumption on Event2012 message blocks. The vanilla algorithm runs prohibitively slow on complex message graphs. E.g., for a large and dense message block such as M1, it takes more than 5 days to complete. In contrast, our proposed hierarchical 2D SE minimization dramatically reduces time consumption. E.g., for M1, our algorithm reduces the running time by >97%. Adopting a smaller sub-graph size n further decreases the running time. E.g., adopting an n of 200 rather than 400 further reduces the time needed for M1 by half. Also note that the impact on the performance when n is decreased is rather small (further discussed in the next section).

Figure 2: Running time comparison between vanilla and hierarchical (ours) 2D SE minimization on Event2012.
Hyperparameter Sensitivity

We study how changing the sub-graph size n affects the performance of HISEvent. Figure 3 shows that HISEvent is relatively robust to the changes in n: increasing n slightly improves the performance at the cost of longer running time. Take Figure 3(a), the closed-set results on Event2012, for example: increasing n from 100 to 400 introduces moderate (10%) and marginal (1%) improvements in ARI and AMI. Moreover, despite the changes in n, HISEvent always outperforms SBERT, the strongest baseline, by 169-197% in ARI and 9-10% in AMI.

Figure 3: HISEvent results on Event2012 with different n. (a) and (b) show the closed-set and open-set (averaged) results.

Case Study

Figure 4 presents the detection of Event 43 Hurricane Sandy. We observe that the strongest baseline, SBERT, confuses the target event with many irrelevant events such as Event 15 Pride of Britain and Event 367 Bahrain bans protests gathering. As a result, the meanings of its detected clusters are rather vague. Moreover, SBERT outputs disjoint rather than more favorable, coherent clusters to represent the target event. E.g., it represents the target event with more than 4 clusters. In contrast, the proposed HISEvent detects justifiable clusters with clear meanings. E.g., cluster 1 concerns the destruction brought by Sandy, while cluster 2 reflects the public reaction and recovery afterwards. Moreover, HISEvent fails only on the hard negatives. E.g., cluster 2 includes some messages about Event 432 NY stock after Sandy, which is relevant to the target event.

Figure 4: Detection of Event 43 Hurricane Sandy. (a) is a sample message. (b) and (c) are clusters detected by SBERT and HISEvent that contain the target event messages.

Related Work

Social event detection is a long-standing task (Atefeh and Khreich 2015). The main challenges lie in exploring the high-volume, complex, noisy, and dynamic social media components, e.g., text, timestamp, user mention, and social network structure. Studies leveraging incremental clustering (Zhao, Mitra, and Chen 2007; Weng and Lee 2011; Aggarwal and Subbian 2012; Zhang, Zi, and Wu 2007; Feng et al. 2015; Xie et al. 2016), community detection (Fedoryszak et al. 2019; Liu et al. 2020a,b; Yu et al. 2017), and topic modeling (Zhou and Chen 2014; Zhou, Chen, and He 2015; Xing et al. 2016; Wang et al. 2016; Zhao et al. 2011) are common. There are also methods for specific domains (Yao et al. 2020; Arachie et al. 2020; Khandpur et al. 2017) such as airport threats. They extract attributes, e.g., hashtags, from the social media components and then pre-process the attributes in highly-customized manners. GNN-based methods (Peng et al. 2019; Cao et al. 2021; Peng et al. 2021; Ren et al. 2022b, 2021, 2022a; Peng et al. 2022) unify the various components concisely by introducing message graphs and quickly became a new trend for their outstanding performance. HISEvent keeps the merits of the GNN-based methods, better captures semantic-based message correlations, and eliminates sample labeling. Please also note that social event detection, which highlights significant occurrences on social media, news story discovery (Yoon et al. 2023),
which summarizes long, formal, and plain textual news documents rather than short, informal, and structured social messages, event prediction (Zhao, Wang, and Guo 2018; Deng, Rangwala, and Ning 2019; Pan et al. 2020), which forecasts future events, and event extraction (Liu, Huang, and Zhang 2019), which detects the entities, triggers, arguments, etc., of events, are non-comparable tasks.

Conclusion

We address social event detection from a structural entropy perspective. HISEvent provides an effective, efficient, and unsupervised tool for social event detection and analysis. It keeps the merits of the GNN-based methods, better explores message correlations, and eliminates the need for labeling or predetermining the number of events. Experiments show that HISEvent achieves the new SOTA under both closed- and open-set settings while being efficient and robust.

Acknowledgments

The corresponding author is Hao Peng. This work is supported by National Key R&D Program of China through grant 2022YFB3104700, NSFC through grants 62322202, U21B2027, 61972186, U23A20388 and 62266028, Beijing Natural Science Foundation through grant 4222030, Yunnan Provincial Major Science and Technology Special Plan Projects through grants 202302AD080003, 202202AD080003 and 202303AP140008, General Projects of Basic Research in Yunnan Province through grants 202301AS070047 and 202301AT070471, and the Fundamental Research Funds for the Central Universities. Philip S. Yu was supported in part by NSF under grant III-2106758.

References

Aggarwal, C. C.; and Subbian, K. 2012. Event detection in social streams. In Proceedings of the 2012 SIAM international conference on data mining, 624–635. SIAM.
Arachie, C.; Gaur, M.; Anzaroot, S.; Groves, W.; Zhang, K.; and Jaimes, A. 2020. Unsupervised detection of sub-events in large scale disasters. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 354–361.
Atefeh, F.; and Khreich, W. 2015. A survey of techniques for event detection in twitter. Computational Intelligence, 31(1): 132–164.
Bafna, P.; Pramod, D.; and Vaidya, A. 2016. Document clustering: TF-IDF approach. In 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), 61–66. IEEE.
Beck, T.; Lee, J.-U.; Viehmann, C.; Maurer, M.; Quiring, O.; and Gurevych, I. 2021. Investigating label suggestions for opinion mining in German Covid-19 social media. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics.
Blei, D. M.; Ng, A. Y.; and Jordan, M. I. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan): 993–1022.
Cao, Y.; Peng, H.; Wu, J.; Dou, Y.; Li, J.; and Yu, P. S. 2021. Knowledge-preserving incremental social event detection via heterogeneous gnns. In Proceedings of the Web Conference 2021, 3383–3395.
Deng, S.; Rangwala, H.; and Ning, Y. 2019. Learning dynamic context graphs for predicting social events. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1007–1016.
Dhillon, I. S.; Mallela, S.; and Modha, D. S. 2003. Information-theoretic co-clustering. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, 89–98.
Fedoryszak, M.; Frederick, B.; Rajaram, V.; and Zhong, C. 2019. Real-time event detection on social data streams. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 2774–2782.
Feng, W.; Zhang, C.; Zhang, W.; Han, J.; Wang, J.; Aggarwal, C.; and Huang, J. 2015. STREAMCUBE: Hierarchical spatio-temporal hashtag clustering for event exploration over the Twitter stream. In 2015 IEEE 31st international conference on data engineering, 1561–1572. IEEE.
Graves, A.; and Schmidhuber, J. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural networks, 18(5-6): 602–610.
Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. In Proceedings of Advances in neural information processing systems, 1025–1035.
Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT, 4171–4186.
Khandpur, R.; Ji, T.; Ning, Y.; Zhao, L.; Lu, C.-T.; Smith, E.; Adams, C.; and Ramakrishnan, N. 2017. Determining relative airport threats from news and social media. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 4701–4707.
Kipf, T. N.; and Welling, M. 2017. Semi-supervised classification with graph convolutional networks. In Proceedings of ICLR 2017.
Kusner, M.; Sun, Y.; Kolkin, N.; and Weinberger, K. 2015. From word embeddings to document distances. In International conference on machine learning, 957–966. PMLR.
Li, A.; and Pan, Y. 2016. Structural information and dynamical complexity of networks. IEEE Transactions on Information Theory, 62(6): 3290–3339.
Li, A.; Yin, X.; and Pan, Y. 2016. Three-dimensional gene map of cancer cell types: Structural entropy minimisation principle for defining tumour subtypes. Scientific reports, 6(1): 1–26.
Li, A.; Yin, X.; Xu, B.; Wang, D.; Han, J.; Wei, Y.; Deng, Y.; Xiong, Y.; and Zhang, Z. 2018. Decoding topologically associating domains with ultra-low resolution Hi-C data by graph structural entropy. Nature communications, 9(1): 1–12.
Liu, B.; Han, F. X.; Niu, D.; Kong, L.; Lai, K.; and Xu, Y. 2020a. Story forest: Extracting events and telling stories from breaking news. ACM Transactions on Knowledge Discovery from Data (TKDD), 14(3): 1–28.
Liu, X.; Huang, H.; and Zhang, Y. 2019. Open domain event extraction using neural latent variable models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Liu, Y.; Peng, H.; Li, J.; Song, Y.; and Li, X. 2020b. Event detection and evolution in multi-lingual social streams. Frontiers of Computer Science, 14(5): 1–15.
Mazoyer, B.; Cagé, J.; Hervé, N.; and Hudelot, C. 2020. A French corpus for event detection on twitter. In Proceedings of the 12th language resources and evaluation conference, 6220–6227.
McMinn, A. J.; Moshfeghi, Y.; and Jose, J. M. 2013. Building a large-scale corpus for evaluating event detection on twitter. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management, 409–418.
Mehta, N.; Pacheco, M. L.; and Goldwasser, D. 2022. Tackling Fake News Detection by Continually Improving Social Context Representations using Graph Neural Networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1363–1380.
Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Pan, Z.; Huang, Z.; Lian, D.; and Chen, E. 2020. A variational point process model for social event sequences.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 173–180.
Peng, H.; Li, J.; Gong, Q.; Song, Y.; Ning, Y.; Lai, K.; and Yu, P. S. 2019. Fine-grained event categorization with heterogeneous graph convolutional networks. In Proceedings of IJCAI 2019, 3238–3245.
Peng, H.; Li, J.; Song, Y.; Yang, R.; Ranjan, R.; Yu, P. S.; and He, L. 2021. Streaming social event detection and evolution discovery in heterogeneous information networks. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5): 1–33.
Peng, H.; Zhang, R.; Li, S.; Cao, Y.; Pan, S.; and Yu, P. 2022. Reinforced, incremental and cross-lingual event detection from social messages. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–18.
Reimers, N.; and Gurevych, I. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of EMNLP 2019.
Ren, J.; Jiang, L.; Peng, H.; Cao, Y.; Wu, J.; Yu, P. S.; and He, L. 2022a. From Known to Unknown: Quality-aware Self-improving Graph Neural Network for Open Set Social Event Detection. In Proceedings of ACM CIKM 2022, 1696–1705.
Ren, J.; Jiang, L.; Peng, H.; Liu, Z.; Wu, J.; and Philip, S. Y. 2022b. Evidential Temporal-aware Graph-based Social Event Detection via Dempster-Shafer Theory. In 2022 IEEE International Conference on Web Services (ICWS), 331–336. IEEE.
Ren, J.; Peng, H.; Jiang, L.; Liu, Z.; Wu, J.; Yu, Z.; and Philip, S. Y. 2023. Uncertainty-guided Boundary Learning for Imbalanced Social Event Detection. IEEE Transactions on Knowledge and Data Engineering.
Ren, J.; Peng, H.; Jiang, L.; Wu, J.; Tong, Y.; Wang, L.; Bai, X.; Wang, B.; and Yang, Q. 2021. Transferring Knowledge Distillation for Multilingual Social Event Detection. arXiv preprint arXiv:2108.03084.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2018. Graph attention networks. In Proceedings of ICLR 2018.
Wang, Y.; Liu, J.; Huang, Y.; and Feng, X. 2016. Using hashtag graph-based topic model to connect semantically-related words without co-occurrence in microblogs. IEEE Transactions on Knowledge and Data Engineering, 28(7): 1919–1933.
Weng, J.; and Lee, B.-S. 2011. Event detection in twitter. In Proceedings of the international aaai conference on web and social media, volume 5, 401–408.
Wu, J.; Chen, X.; Xu, K.; and Li, S. 2022. Structural entropy guided graph hierarchical pooling. In Proceedings of International Conference on Machine Learning, 24017–24030. PMLR.
Xie, W.; Zhu, F.; Jiang, J.; Lim, E.-P.; and Wang, K. 2016. Topicsketch: Real-time bursty topic detection from twitter. IEEE Transactions on Knowledge and Data Engineering, 28(8): 2216–2229.
Xing, C.; Wang, Y.; Liu, J.; Huang, Y.; and Ma, W.-Y. 2016. Hashtag-based sub-event discovery using mutually generative lda in twitter. In Proceedings of the AAAI conference on artificial intelligence, volume 30.
Xu, W.; Liu, X.; and Gong, Y. 2003. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, 267–273.
Yao, W.; Zhang, C.; Saravanan, S.; Huang, R.; and Mostafavi, A. 2020. Weakly-supervised fine-grained event recognition on social media texts for disaster management. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 532–539.
Yoon, S.; Lee, D.; Zhang, Y.; and Han, J. 2023.
Unsupervised Story Discovery from Continuous News Streams via Scalable Thematic Embedding. arXiv preprint arXiv:2304.04099.
Yu, W.; Li, J.; Bhuiyan, M. Z. A.; Zhang, R.; and Huai, J. 2017. Ring: Real-time emerging anomaly monitoring system over text streams. IEEE Transactions on Big Data, 5(4): 506–519.
Zhang, K.; Zi, J.; and Wu, L. G. 2007. New event detection based on indexing-tree and named entity. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, 215–222.
Zhao, L.; Wang, J.; and Guo, X. 2018. Distant-supervision of heterogeneous multitask learning for social event forecasting with multilingual indicators. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Zhao, Q.; Mitra, P.; and Chen, B. 2007. Temporal and information flow based event detection from social text streams. In Proceedings of the 22nd national conference on Artificial intelligence-Volume 2, 1501–1506.
Zhao, W. X.; Jiang, J.; Weng, J.; He, J.; Lim, E.-P.; Yan, H.; and Li, X. 2011. Comparing twitter and traditional media using topic models. In Advances in Information Retrieval: 33rd European Conference on IR Research, ECIR 2011, Dublin, Ireland, April 18-21, 2011. Proceedings 33, 338–349. Springer.
Zhou, D.; Chen, L.; and He, Y. 2015. An unsupervised framework of exploring events on twitter: Filtering, extraction and categorization. In Proceedings of the AAAI conference on artificial intelligence, volume 29.
Zhou, X.; and Chen, L. 2014. Event detection over twitter social media streams. The VLDB journal, 23(3): 381–400.
Distributional Off-Policy Evaluation for Slate Recommendations

Shreyas Chaudhari1, David Arbour2, Georgios Theocharous2, Nikos Vlassis2
1 University of Massachusetts Amherst
2 Adobe Research
[email protected], {arbour,theochar,vlassis}@adobe.com

Abstract

Recommendation strategies are typically evaluated by using previously logged data, employing off-policy evaluation methods to estimate their expected performance. However, for strategies that present users with slates of multiple items, the resulting combinatorial action space renders many of these methods impractical. Prior work has developed estimators that leverage the structure in slates to estimate the expected off-policy performance, but the estimation of the entire performance distribution remains elusive. Estimating the complete distribution allows for a more comprehensive evaluation of recommendation strategies, particularly along the axes of risk and fairness that employ metrics computable from the distribution. In this paper, we propose an estimator for the complete off-policy performance distribution for slates and establish conditions under which the estimator is unbiased and consistent. This builds upon prior work on off-policy evaluation for slates and off-policy distribution estimation in reinforcement learning. We validate the efficacy of our method empirically on synthetic data as well as on a slate recommendation simulator constructed from real-world data (MovieLens-20M). Our results show a significant reduction in estimation variance and improved sample efficiency over prior work across a range of slate structures.

Introduction

Recommendation services are ubiquitous throughout industry (Bobadilla et al. 2013; Lu et al. 2015). A common variant of recommendation consists of suggesting multiple items to a user simultaneously, often termed recommendation slates, where each position (a.k.a. a slot) can take multiple possible values (Sarwar et al. 2000). For example, webpage layouts of news or streaming services have separate slots for each category of content, where each slot can display any of the items from the category for that slot. The items are suggested to the user based on a recommendation strategy, called a policy, and the user response is encoded into a reward (Li et al. 2010). A crucial problem for the selection and improvement of recommendation strategies is to evaluate the efficacy of a slate policy by estimating the expected reward of that policy.

One of the simplest and most effective approaches to policy evaluation, often employed in industrial settings, is A/B testing (Gomez-Uribe and Hunt 2015; Kohavi and Longbotham 2017; Feitelson, Frachtenberg, and Beck 2013). This involves randomly assigning users to receive the item recommended by one of two candidate policies, and the relative performance of each policy is directly measured. However, A/B testing involves deploying the new policy online, which may be infeasible in many settings due to practical or ethical considerations. As a result, it is often necessary to employ offline off-policy evaluation, in which interaction data collected (offline) from previously deployed policies (off-policy) is used to estimate statistics of the expected performance, risk, and other metrics of interest for a new target policy, without actually deploying it online. This ensures that policies with undesirable outcomes are not deployed online.
A large amount of literature addresses the problem of off-policy evaluation in the non-slate setting (Dudík et al. 2014; Wang, Agarwal, and Dudík 2017; Thomas, Theocharous, and Ghavamzadeh 2015), where the majority of methods rely on some version of importance sampling (Horvitz and Thompson 1952). Applied to the slate setting, these methods result in very large importance weights that produce high-variance estimates due to the combinatorially large action space on which slate policies operate. Recent work addresses this deficiency by introducing methods that leverage the structure in slate actions and rewards to address the high variance. Swaminathan et al. (2017); Vlassis et al. (2021) leverage reward additivity across slot actions to propose estimators for the expected reward of target slate policies. McInerney et al. (2020) propose a Markov structure for the slate rewards for sequential interactions.

All the aforementioned methods provide solutions for estimating the expected reward of a target policy. However, in many scenarios where recommendation systems are used, such as those with large financial stakes and in healthcare applications, practitioners are concerned with evaluation metrics such as the behavior of the policy at extreme quantiles and the expected performance at a given risk tolerance (CVaR). These quantities need the full reward distribution for estimation, which renders prior work that estimates the expected reward inapplicable. A notable exception is Chandak et al. (2021), who provide a method for off-policy estimation of the target policy's cumulative reward distribution using ideas similar to importance sampling, allowing for the computation of various metrics of interest, but the estimator is intractable in the slate setting.

In this work, we propose slate universal off-policy evaluation (SUnO), a method that allows for off-policy estimation of the target reward distribution (Theorem 3) for slate recommendation policies. SUnO applies the core ideas from the universal off-policy evaluation (UnO) method (Chandak et al. 2021) to the slate setting, leveraging an additive decomposition of the conditional reward distribution. This makes it possible to perform off-policy estimation in structured high-dimensional action spaces without incurring prohibitively high estimation variance. We highlight how the estimator can readily be adapted to other generalized decompositions of reward while continuing to be unbiased. Finally, we provide an empirical evaluation comparing against UnO, where the proposed estimator shows significant variance reduction, improved sample efficiency, and robust performance even when the conditions for unbiasedness of the estimator are not met.

The main contributions of our work are:
• We propose an unbiased estimator for the off-policy reward distribution for slate recommendations under an additively decomposable reward distribution, generalizing prior results for slate off-policy evaluation to the distributional setting.
• We theoretically demonstrate how the estimator readily generalizes to slate rewards that do not decompose additively over slots.
• We empirically demonstrate the efficacy of the proposed estimator on slate simulators using synthetic as well as real-world data, on a range of slate reward structures.

Background and Notation

We first formulate the slate recommendation system as a contextual bandit with a combinatorial action space.
Each slate action has K dimensions, where each dimension is a slot-level action. The user-bandit interaction results in a random tuple $(X, A, R)$ at each step, where $X \sim d_X(\cdot)$ is the user context, $A$ is the slate action generated by the recommendation strategy, where $A = [A^k]_{k=1}^{K}$ is composed of K slot-level actions, and $R \sim d_R(\cdot \mid A, X)$ is the scalar slate-level reward. Since the rewards are observed only at the slate level and not at a per-slot level, we use reward and slate reward interchangeably. Each slot-level action can take up to N candidate values, leading to a combinatorially large action space of the order $\binom{N}{K}$. A logging policy $\mu(A \mid X) = \Pr(A \mid X)$, which recommends slate actions conditioned on the user context $X$, is deployed online to collect a dataset for offline evaluation. The offline dataset consists of n i.i.d. samples $D_n = \{(X_i, A_i, R_i)\}_{i=1}^{n}$, generated by the user-bandit interaction. We focus on the case where $\mu$ is a factored policy, that is,

$$\mu(A \mid X) = \prod_{k=1}^{K} \mu_k(A^k \mid X),$$

where K is the number of slots. Data collection with factored uniform logging policies is standard in practice (Swaminathan et al. 2017).

Off-policy evaluation is the task of utilizing data $D_n$ logged using a policy $\mu$ to evaluate a target policy $\pi$ by computing evaluation metrics from the target reward under $\pi$. Standard methods focus on the estimation of the expected reward under the target policy. In this work, our focus is on the estimation of quantities that go beyond just the expected target reward by estimating the reward distribution. Throughout the paper, the sample estimates of any quantity $y$ are denoted by $\hat{y}_n$, where the subscript indicates the number (n) of data points used for estimation. For instance, the cumulative reward distribution observed for a policy $\pi$ is denoted by $F^{\pi}(\nu)$. The sample estimate of the distribution will be denoted by $\hat{F}^{\pi}_n(\nu)$.
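As an illustration of this interaction protocol, the sketch below logs i.i.d. (X, A, R) tuples under a factored uniform logging policy over K slots with N candidates each; the context and reward samplers are placeholders, not part of the paper:

```python
import numpy as np

def log_data(n, K, N, sample_context, sample_reward, seed=0):
    """Returns D_n; under the uniform mu, each slot propensity is 1 / N."""
    rng = np.random.default_rng(seed)
    dataset = []
    for _ in range(n):
        x = sample_context(rng)              # X ~ d_X(.)
        a = rng.integers(N, size=K)          # one action per slot, uniform
        r = sample_reward(a, x, rng)         # R ~ d_R(. | A, X)
        dataset.append((x, a, r))
    return dataset
```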
Related Work on Off-Policy Evaluation

Importance sampling (IS) (Horvitz and Thompson 1952; Sutton and Barto 2018), also known as inverse propensity scoring (IPS) (Dudík et al. 2014), provides a technique for unbiased estimation of the expected target reward. However, it suffers from high variance in large action spaces. There are numerous extensions to IS for variance reduction (Dudík, Langford, and Li 2011; Thomas 2015; Kallus and Uehara 2019). The IS estimator and all methods derived from it rely on a standard common-support assumption. We use a weaker form of this assumption that requires support at the slot level instead of the entire slate.

Assumption 1 (Common Support). The set $D_n$ contains i.i.d. tuples generated using $\mu$, such that for some (unknown) $\varepsilon > 0$, $\mu_k(A^k \mid X) < \varepsilon \implies \pi(A^k \mid X) = 0, \; \forall k, X, A$.

Applied to slates, Swaminathan et al. (2017) assume additivity of slate rewards as a way to reduce the estimation variance of the expected reward of the target policy. Further variance reduction is obtained by using control variates (Vlassis et al. 2021). McInerney et al. (2020) assume a Markov structure for the slate rewards during sequential item interaction and propose an estimator for the expected target reward. We refer the reader to these papers for additional references on slate recommendations.

These off-policy estimators, along with most others in the literature, provide estimates of the expected reward of the target policy (Li et al. 2018). However, the expected value is usually not sufficient for comprehensive off-policy analysis, particularly in the case of risk assessment that is crucial for recommendation systems (Shani and Gunawardana 2011). Additional metrics of interest, often those computable from the whole reward distribution, are necessary in practice (Keramati et al. 2020; Altschuler, Brunel, and Malek 2019). For example, metrics like value at risk are used for risk analysis of a new recommendation strategy. To that end, work on universal off-policy estimation (UnO) (Chandak et al. 2021) uses ideas motivated by importance sampling to estimate the whole cumulative distribution of the reward under the target policy. However, in combinatorially large action spaces, as with the slate problems we consider here, the UnO estimator can incur prohibitive variance. Our proposed estimator utilizes possible structure in slate rewards to circumvent this issue.

Structure in Slate Rewards

The combinatorial action space of slates becomes a key challenge for most general methods like importance sampling (IS) for off-policy evaluation (Wang, Agarwal, and Dudík 2017). The generality of the approach results in them not fully leveraging the structure present in slate rewards (Sunehag et al. 2015), and thus general IS-based approaches frequently suffer from high variance. Prior work leverages user behavior patterns while interacting with slates, which are encoded into structured rewards (e.g., time spent, items clicked, etc.). Some examples of structure in slate rewards are a Markov structure for observed slot-level rewards (McInerney et al. 2020), the dependence of the slate reward only on the selected slot (Ie et al. 2019), and unobserved slot-level rewards with an additively decomposable slate reward (Swaminathan et al. 2017; Vlassis et al. 2021). The last one is of particular interest, where the additivity of expected reward (Cesa-Bianchi and Lugosi 2012) posits that the conditional mean slate-level reward decomposes additively as the sum of (arbitrary) slot-level latent functions, i.e., $\mathbb{E}[R \mid A, X] = \sum_{k=1}^{K} \phi_k(A^k, X)$. This has been leveraged to obtain a significant reduction in estimation variance for off-policy evaluation (Swaminathan et al. 2017; Vlassis et al. 2021). This decomposition captures the individual effects of each slot. It may readily be generalized to capture non-additive joint effects of more than one slot action; for example, to capture the effects of pairs of slot actions, one may consider the decomposition:

$$\mathbb{E}[R \mid A, X] = \sum_{k=1}^{K} \sum_{j=k}^{K} \phi_{jk}(A^k, A^j, X). \quad (1)$$

It may further be generalized to capture the combined effects of m slots. Note that in the most general case, for m = K, the reward does not permit any decomposition over slots. Analogous to the above structural conditions, we posit a condition that allows us to perform consistent and unbiased estimation of the target off-policy distribution.

Assumption 2 (Additive CDF). There exists an additive decomposition of the conditional cumulative density function (CDF) of the slate reward as the sum of (arbitrary) slot-level latent functions:

$$F_R(\nu) = \sum_{k=1}^{K} \psi_k(A^k, X, \nu), \; \forall \nu, \quad \text{where } F_R(\nu) := \Pr(R \leq \nu \mid A, X).$$

The slot-level rewards, if any, are unobserved. The condition just assumes that an additive decomposition exists and does not require knowledge of the constituent slot-level functions. We demonstrate empirically that this condition is often a close approximation for real-world data and that our estimator performs better than more general methods even when this condition happens to be an inexact approximation, i.e., when a perfect decomposition does not exist.
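One simple way to instantiate Assumption 2 (a sketch, not the paper's exact construction) is to give each slot a normalized, monotone non-decreasing function of ν, so that the slot-level terms sum to a valid conditional CDF:

```python
import numpy as np

def make_additive_cdf(K, N, grid_size=101, seed=0):
    """Returns Pr(R <= nu_j | A = slate) built from K monotone slot functions psi_k."""
    rng = np.random.default_rng(seed)
    # psi[k, a] is monotone non-decreasing on the reward grid, ending at 1 / K
    psi = np.sort(rng.random((K, N, grid_size)), axis=-1)
    psi = psi / psi[..., -1:] / K
    def cdf(j, slate):                 # j indexes the reward grid
        return sum(psi[k, slate[k], j] for k in range(K))
    return cdf
```

Summing the K terms yields a function that is non-decreasing in ν and reaches exactly 1 at the top of the grid, i.e., a proper CDF for every slate.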
This decomposition may also be generalized to capture the combined effects of m slots. In line with prior work (Wen, Kveton, and Ashkan 2015; Kale, Reyzin, and Schapire 2010; Kveton et al. 2015; Swaminathan et al. 2017; Vlassis et al. 2021; Ie et al. 2019), we focus our analysis on the case m = 1, which proves to be effective in practice, as corroborated by empirical analysis. We additionally provide derivations for how the estimator can readily generalize to cases where m > 1, along with a theoretical analysis of its properties. It is worth noting that an additively decomposable reward CDF always implies an additive expected reward by definition, and the former often serves as a close approximation when the latter holds. This is helpful since a commonly used metric for evaluating the performance of slates, the normalized discounted cumulative gain (nDCG) (Burges et al. 2005), is an additively decomposable metric, and it has been used in the past for defining the slate reward (Järvelin and Kekäläinen 2017; Swaminathan et al. 2017).

Slate Universal Off-Policy Evaluation

We will now turn to off-policy evaluation in slates as an off-policy reward distribution estimation task. The core idea builds upon the framework of Chandak et al. (2021), who use importance weights $\rho$ in an estimator of the reward CDF of a target policy from logged data $D_n \sim \mu$. In the case of slates, the importance weight $\rho$ comprises probability ratios over all slot actions. The most direct approach for defining $\rho$ in the case of a factored logging policy $\mu$ is to consider a formulation analogous to importance sampling by taking the product of the slot-level probabilities (Equation (2)). This approach will be plagued by high variance when the size of the slate K is large. To remedy this, our proposed algorithm SUnO utilizes the structure in slates provided by Assumption 2, wherein the CDF of the slate-level reward admits an additive decomposition. In place of $\rho$, we define an importance weight $G$ (Equation (2)) that is a sum of slot-density ratios:

$$\rho = \frac{\pi(A \mid X)}{\mu(A \mid X)} = \prod_{k=1}^{K} \frac{\pi(A^k \mid X)}{\mu_k(A^k \mid X)}, \qquad G = \sum_{k=1}^{K} \left( \frac{\pi(A^k \mid X)}{\mu_k(A^k \mid X)} - 1 \right) + 1. \quad (2)$$

The estimator for the target distribution counts the number of samples for which the reward is less than a threshold $\nu$ and reweighs that count with importance weights to be reflective of the counts under the target policy $\pi$. In expectation, the counts reflect the probabilities of obtaining a reward less than $\nu$, providing the value of the reward CDF at $\nu$, denoted by $F^{\pi}(\nu)$. Use of the importance weight $G$ in place of $\rho$ results in significantly lower variance in estimation and improved effective sample size, while keeping the estimator unbiased. It is easy to confirm that under a factored logging policy, $\mathbb{E}_{\mu}[G] = 1$.
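The contrast between the two weights is easy to see numerically; a small sketch:

```python
import numpy as np

def rho_and_g(pi_probs, mu_probs):
    """pi_probs[k], mu_probs[k]: slot-level probabilities of the logged action."""
    ratios = np.asarray(pi_probs) / np.asarray(mu_probs)
    rho = ratios.prod()               # product weight: grows multiplicatively in K
    g = (ratios - 1.0).sum() + 1.0    # Eq. (2): grows only additively in K
    return rho, g

# A target policy 3x more likely than mu on every logged slot action (K = 10):
print(rho_and_g([0.3] * 10, [0.1] * 10))  # rho = 3**10 = 59049, G = 21
```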
Below we prove that this importance weight allows for a change of distribution of the expected slate reward and can be used for estimating the target reward CDF. Our main result is the following:

Theorem 3. Let R be a real-valued random variable that denotes the slate reward and admits an additive decomposition of its conditional cumulative distribution $F_R(\nu)$ (Assumption 2). Under a factored $\mu$ and Assumption 2,

$$F^{\pi}(\nu) = \mathbb{E}_{\mu}[G \cdot \mathbb{1}\{R \leq \nu\}], \; \forall \nu.$$

Thus, a weighted expectation of the indicator function, with weights given by $G$, gives the target CDF. Based on this result, we propose the following sample estimator for $F^{\pi}(\nu)$ that uses data $D_n \sim \mu$:

$$\hat{F}^{\pi}_n(\nu) := \frac{1}{n} \sum_{i=1}^{n} G_i \, \mathbb{1}\{R_i \leq \nu\}, \; \forall \nu.$$

This estimator, called the slate universal off-policy estimator (SUnO), is outlined in Algorithm 1.

Algorithm 1: SUnO($\nu$)
Input: $\pi$, $\mu$, $\nu$, $\{(X_i, A_i, R_i)\}_{i=1}^{n} \sim D_n$
Output: $\hat{F}^{\pi}_n(\nu)$
1: $s_\nu = 0$ {Initialize counter and iterate over the logged dataset}
2: for $i = 1, 2, \dots, n$ do
3:   $G_i \leftarrow 1 - K + \sum_{k=1}^{K} \frac{\pi(A_i^k \mid X_i)}{\mu_k(A_i^k \mid X_i)}$ {Compute the importance weight (Equation (2))}
4:   $s_\nu \leftarrow s_\nu + \mathbb{1}\{R_i \leq \nu\} G_i$ {Add to counter if reward is less than $\nu$}
5: end for
6: return $\hat{F}^{\pi}_n(\nu) = s_\nu / n$ {Normalize and return counter}

In the following result, we establish that SUnO leverages the additive structure in Assumption 2 to obtain an unbiased and pointwise consistent estimate of the CDF of the target policy. The proofs of both results may be found in the Appendix.

Theorem 4. Under Assumption 2, $\hat{F}^{\pi}_n(\nu)$ is an unbiased and pointwise consistent estimator of $F^{\pi}(\nu)$.

It is important to note that, analogous to Swaminathan et al. (2017), our estimator does not require knowledge of the specific functions (the $\psi_k$'s) in the decomposition of the conditional CDF in Assumption 2; it only assumes the existence of a set of such latent functions, and a corresponding additive decomposition of the conditional CDF, to attain unbiased estimation. Even in cases where the assumption is not satisfied, our method (Algorithm 1) performs robustly, as we demonstrate in our experiments. The estimated target CDF can be used to compute metrics of interest as functions of the CDF (for example, mean, variance, VaR, CVaR, etc.). Some of these metrics are non-linear functions of the CDF (VaR, CVaR), and thus their sample estimates would be biased estimators. This is to be expected (Chandak et al. 2021). Metrics that are linear functions of the CDF, however, have unbiased sample estimators. Thus, an unbiased target CDF estimator serves as a "one-shot" solution for most metrics of interest, though unbiasedness holds only for certain metrics. We demonstrate the estimation of some of these metrics in our empirical analysis.

Properties of SUnO

Variance: The estimator enjoys significantly low variance for target estimation. The key factor is that the estimator uses importance weights that are a sum of slot-level density ratios as opposed to a product as used in UnO (Chandak et al. 2021). Particularly in the slate setting, the latter methods suffer from enormous variance and reduced effective sample size, as we demonstrate empirically. Consider the worst-case variance of the two estimators. From Assumption 1 we have $0 \leq \frac{\pi(A^k \mid X)}{\mu_k(A^k \mid X)} \leq \frac{1}{\varepsilon}$, which implies

$$\mathrm{Var}(\text{UnO}) = O\left(\frac{1}{\varepsilon^K}\right); \qquad \mathrm{Var}(\text{SUnO}) = O\left(\frac{K}{\varepsilon}\right).$$

Thus the worst-case variance of SUnO grows linearly with an increase in the size of the slate (K), while that of UnO grows exponentially.
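A direct Python rendering of Algorithm 1, vectorized over a grid of ν values, may look as follows. The slot-probability callables are assumptions about the interface; in practice one may additionally clip the estimate to [0, 1] and enforce monotonicity in ν:

```python
import numpy as np

def suno_cdf(data, pi, mu, nus):
    """data: iterable of (x, a, r); pi(a_k, k, x) and mu(a_k, k, x) return
    slot-level action probabilities under the target and logging policies."""
    nus = np.asarray(nus)
    counts = np.zeros(nus.shape)
    n = 0
    for x, a, r in data:
        K = len(a)
        g = 1.0 - K + sum(pi(a[k], k, x) / mu(a[k], k, x) for k in range(K))
        counts += g * (r <= nus)     # weighted indicator, lines 3-4
        n += 1
    return counts / n                # F_hat^pi_n on the grid, line 6
```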
For instance, consider the case where the conditional reward CDF decomposes into terms composed of m slot-actions:

$$F_R(\nu) := \sum_{1 \le k_1 < k_2 < \cdots < k_m \le K} \psi_{k_{1:m}}(A_{k_{1:m}}, X, \nu) \tag{3}$$

where k_{1:m} is used as shorthand to denote the indices k_1, k_2, ..., k_m. It can be seen that this decomposition consists of $\binom{K}{m}$ terms. The importance weight from Equation (2), with a minor change, provides an unbiased and consistent off-policy estimator for this reward structure. Define

$$G_m = \sum_{1 \le k_1 < k_2 < \cdots < k_m \le K} \left( \prod_{i=1}^{m} \frac{\pi(A_{k_i} \mid X)}{\mu_{k_i}(A_{k_i} \mid X)} - 1 \right) + 1 \tag{4}$$

to be the importance weight. With a derivation similar to Theorem 3, we show the following result.

Corollary 5. Let R be a random variable that denotes the slate reward, and permits a decomposition into (latent) functions of m slots as in Equation (3). Under a factored µ, we have

$$F^{\pi}(\nu) = \mathbb{E}_{\mu}\left[G_m \cdot \mathbb{1}\{R \le \nu\}\right], \quad \forall \nu$$

The proof of the result may be found in the Appendix. The derivation highlights how the result may be extended to other forms of decomposition of the slate reward. Note that when m = K, the sum in Equation (4) has a single term, and the importance weight reduces to $G_K = \prod_{k=1}^{K} \frac{\pi(A_k \mid X)}{\mu_k(A_k \mid X)}$, which is the same as ρ as used in UnO. This is because at m = K the reward permits no decomposition. The proposed estimator may then be interpreted as a generalization of UnO to various reward decompositions, where UnO forms a special case wherein the reward does not permit any decomposition. We now empirically study the case where m = 1, in line with prior work. The reward decomposition at m = 1 proves to be a sufficiently accurate approximation to real-world data, as we demonstrate in the empirical section that follows.

Figure 1: (left) MovieLens-20M: (top) MSE for the mean computed from the estimated target CDF for increasing sample sizes. SUnO performs significantly better in terms of bias and variance compared to UnO. The same follows for median estimation (bottom), where it demonstrates much better sample efficiency and lower estimation variance, as seen by the error bars. (right) Synthetic experiment: estimates of CVaR_0.3 (top) and VaR_0.3 (bottom) computed from the estimated target CDF. In this setting, where Assumption 2 is satisfied, SUnO performs better than UnO in terms of estimation variance, sample efficiency, and estimation accuracy, as expected.

Empirical Analysis

We investigate the following questions in the empirical analysis: RQ1: Does the estimator have low estimation variance and high sample efficiency when the slate rewards have an additive structure? RQ2: Does the method accurately estimate the off-policy distribution, and metrics from it? RQ3: Does the method apply to settings where the conditions for unbiasedness of the estimator do not hold? To that end, we evaluate the performance of SUnO in a range of slate recommendation settings, comparing its performance against UnO (Chandak et al. 2021), a general estimator that does not make any structural assumptions. Better performance is defined as lower estimation error and variance, along with improved sample efficiency. We begin by evaluating the estimators on synthetic data that follows the additive CDF structure, to corroborate the theoretical results (RQ1). We then proceed to real-world data experiments and use the additively decomposable metric nDCG (Swaminathan et al. 2017) as the slate reward (RQ2, RQ3). We test our estimator on a publicly available dataset, MovieLens-20M (Harper and Konstan 2015), and on a semi-synthetic slate simulator, the Open Bandit Pipeline (Saito et al. 2020).
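Before describing the simulator construction, a small sketch of the generalized importance weight G_m of Equation (4) is given for reference (again our own illustration, under the same assumptions as the `suno_cdf` sketch above):

```python
from itertools import combinations

import numpy as np

def importance_weight_m(pi_probs, mu_probs, m):
    """G_m of Equation (4) for a single logged slate.

    pi_probs, mu_probs : (K,) probabilities of the logged slot actions
    under the target policy and the factored logging policy.
    """
    ratios = pi_probs / mu_probs  # slot-level density ratios
    K = len(ratios)
    # Sum over all C(K, m) index subsets of (product of ratios - 1), plus 1
    return sum(float(np.prod(ratios[list(idx)])) - 1.0
               for idx in combinations(range(K), m)) + 1.0
```

At m = 1 this reduces to the weight G used above, and at m = K the single surviving term makes G_K equal the full product ρ, recovering UnO's weight.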
We develop a procedure to construct a slate simulator from ratings datasets like MovieLens. We evaluate SUnO in settings where the slate reward does not, by construction, satisfy either the additive CDF or the additive reward conditions, and see it demonstrate robust performance.

Implementational note: Algorithm 1 outlines the steps for estimating the target CDF at any reward value ν. The reward, in general, takes on continuous real values, and it is not practical to estimate the target CDF at all continuous values of ν. In practice, an empirical estimate of the CDF may be computed at discrete points over the range of rewards; in between those points, the value of the CDF is kept constant. Consequently, we compute the target CDF at evenly spaced points over the range of rewards for both estimators in the experiments that follow. The granularity of this discretization of the domain of the CDF is reflected in the granularity of the estimated CDF. To ensure accuracy and relative smoothness in the estimated CDF, we choose a very fine level of discretization relative to the range of rewards for each experiment.

(a) Synthetic experiment
Sample size | 0.5×10^3 | 1×10^3 | 5×10^3 | 10×10^3
SUnO | 0.131 | 0.102 | 0.059 | 0.049
UnO | 0.256 | 0.191 | 0.098 | 0.077

(b) MovieLens
Sample size | 0.5×10^6 | 1×10^6 | 5×10^6 | 10×10^6
SUnO | 0.169 | 0.173 | 0.181 | 0.184
UnO | 0.441 | 0.410 | 0.319 | 0.252

Table 1: The tables report the average Kolmogorov-Smirnov statistic for the estimated CDFs for (a) the synthetic experiment and (b) MovieLens. The results demonstrate that SUnO estimates the target CDF more accurately and with better sample efficiency.

Metrics: We define a few metrics that we utilize in our empirical analysis. Value at Risk (VaR_α) (Rockafellar, Uryasev et al. 2000; Wirch and Hardy 2001) denotes the value of the reward such that the probability of observing rewards greater than that value is 1 − α. Correspondingly, Conditional Value at Risk (CVaR_α) denotes the expected value of the rewards given that the observed reward is less than VaR_α. These metrics are used for risk assessment of policies, and we compute them from the estimated target reward distribution. To evaluate the estimation error in our estimated target distribution, we use the Kolmogorov-Smirnov statistic (Stephens 1974), which measures how well the estimated target distribution matches the ground-truth distribution by assessing the largest discrepancy between their cumulative probabilities. All the experiments have a factored uniform-random logging policy. The error bars denote one standard error. The code is available at: https://github.com/shreyasc-13/suno.

Synthetic Experiments

We begin by synthetically generating data where the slate reward permits the additive CDF structure (Assumption 2). Setup: We consider the non-contextual bandit setting for ease of analysis; the same may easily be extended to a contextual setting. To construct the data-generating reward distribution, each ψ_k is set to a monotonic non-decreasing function by assigning slices of a sigmoid function to the corresponding ψ_k(A_k). This manner of construction ensures that the resultant sum of the functions, the CDF, is again a monotonic non-decreasing function. The ψ_k's are appropriately normalized. For these experiments, we set the number of slots K = 3 and the number of actions in each slot N = 3.
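To complement the implementational note and the metric definitions above, here is a hedged sketch (ours, not the authors' code) of evaluating the estimated CDF on an evenly spaced grid and reading off the mean, VaR_α, and CVaR_α from it:

```python
import numpy as np

def suno_cdf_grid(rewards, G, num_points=1000):
    """SUnO CDF estimate at evenly spaced points over the reward range."""
    grid = np.linspace(rewards.min(), rewards.max(), num_points)
    cdf = np.array([np.mean(G * (rewards <= nu)) for nu in grid])
    # Weighted counts need not stay in [0, 1] at finite n; clip for readability.
    return grid, np.clip(cdf, 0.0, 1.0)

def metrics_from_cdf(grid, cdf, alpha=0.3):
    """Mean, VaR_alpha and CVaR_alpha from a discretized, (near-)monotonic CDF."""
    pmf = np.diff(cdf, prepend=0.0)        # probability mass between grid points
    mean = float(np.sum(grid * pmf))
    i = int(np.searchsorted(cdf, alpha))   # smallest grid point with F(nu) >= alpha
    i = min(i, len(grid) - 1)
    var_alpha = float(grid[i])
    tail = pmf[: i + 1]                    # mass at or below VaR_alpha
    cvar_alpha = float(np.sum(grid[: i + 1] * tail) / max(tail.sum(), 1e-12))
    return mean, var_alpha, cvar_alpha
```

Here `G` is the importance-weight vector from the `suno_cdf` sketch. The VaR/CVaR values are grid approximations, and, as noted above, their plug-in estimates are biased since both are non-linear functions of the CDF.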
The target policy is a deterministic policy that chooses one action per slot, where the action for each slot is assigned randomly at the start of the experiment and held constant for all experiments.

Experiment: We compare the performance of the estimators on two fronts: 1. Goodness-of-fit of CDF: We report the average Kolmogorov-Smirnov statistic of the estimated target CDFs. The ground-truth target CDF is computed by executing the target policy on the simulator. 2. Tail measures: We compute the CVaR_0.3 and VaR_0.3 from the target CDFs estimated by the two estimators. The experiments are run for different logged data sizes, and the results are averaged over 1000 trials.

Results [Table 1, Figure 1]: Since SUnO leverages the additive structure in rewards, it estimates the CDF and the tail measures with lower variance and estimation error, while being more sample efficient. If a single slot action has a zero probability of occurring under the target policy, the importance weight used by UnO for the entire slate goes to zero, since it comprises the product of slot-level density ratios. This is not the case for SUnO, which is thus able to utilize a larger effective sample size and be more sample efficient. Note that with an increase in sample size, both estimators tend to the ground-truth values, as they are consistent and unbiased. SUnO has a significantly lower variance in estimation, as seen by the error bars in Figure 1.

Real-World Data

In this section, we first introduce a procedure for converting a recommender-system ratings dataset (like MovieLens) into a slate recommendation simulator with additive rewards, and then evaluate our method on the simulator. The procedure for constructing the simulator follows.

Simulation Setup
1. Learn a user-item preference matrix B along with the user context embedding X. For m users and l items, B ∈ ℝ^{m×l} and X ∈ {0, 1}^m. We follow the steps outlined in (Steck 2019) to learn B from rating data. X is a binary vector that encodes the user-item interaction history. An alternative method for learning embeddings could be (Elahi and Chandrashekar 2020).
2. To limit our setup to approximately 10k unique users, we trim the set of users to those that have an interaction history of 10 to 15 items.
3. Compute the ground-truth preference scores for each user as the product of the user's context embedding with the preference matrix (x · B).
4. To make the simulator tractable, we trim the action set by retaining the top 20 preferred actions per user based on each user's ground-truth scores (N = 20).
5. For a slate action A, a ranking metric like nDCG can be set as the slate reward R; this works well in practice (Vlassis et al. 2021; Swaminathan et al. 2017).

Experiment: First, we set up a slate simulator as described above, using the MovieLens-20M dataset to estimate B and X. A uniform-random factored logging policy is used for creating the offline dataset for evaluating the estimators. We consider an ε-greedy target policy. For each user, it picks the top K preferred actions (one per slot) with probability 1 − Nε and a uniform-random action from the user's action set with probability ε. Here N = 20, K = 5, ε = 0.01, and results are averaged over 50 trials. We analyze:

Sample size | 0.5×10^5 | 1×10^6 | 5×10^6
SUnO | 0.253 | 0.257 | 0.269
UnO | 0.543 | 0.541 | 0.567

Table 2: The table reports the mean squared error for mean computation from the estimated CDFs.
Even in settings where the slate reward is not additive, our method continues to perform better than the structure-agnostic estimator.

1. Goodness-of-fit of CDF: We report the average Kolmogorov-Smirnov statistic of the estimated CDFs against the ground-truth CDF. The ground-truth CDF is computed by executing the target policy on the simulator.
2. Metrics computed from the CDF: We compute the mean and the 0.5-quantile (median) from the estimated CDF.

Results [Table 1, Figure 1]: The experiments demonstrate that although only the additive reward condition is met, and not Assumption 2, SUnO estimates the target CDF with fewer samples (Table 1) than UnO. Our estimator has a significantly lower estimation variance for metrics computed from the CDF, as seen by the error bars for the median computation and the mean squared error for the expected-value computation in Figure 1. Note that the mean squared error (MSE) captures both the bias and the variance in estimation.

Non-Additive Reward Structure

Finally, we evaluate the estimators in a setting where neither the additive reward nor the additive CDF conditions are satisfied. Simulator: We use the Open Bandit Pipeline (OBP) slate bandit simulator (Saito et al. 2020), which uses the synthetic slate reward model described in (Kiyohara et al. 2022). It models higher-order interactions among slot actions and thus does not trivially satisfy Assumption 2 or the additive slate reward structure. We use the cascade additive reward model defined in OBP for these experiments. Experiment: Similar to the MovieLens experiments, we observe the estimation error for the target mean computed from the estimated CDF. A uniform-random logging policy is used to generate the offline dataset, and the target policy defined here¹ is evaluated. We set K = 3, N = 10, and the results are averaged over 10 trials. Results [Table 2]: In this setting, we cannot expect unbiased estimates of the mean from SUnO, since the additive CDF condition is required for unbiased estimation of the target CDF. Nonetheless, SUnO continues to perform significantly better in terms of the mean squared error for the mean estimation compared to UnO, which does not make any structural assumptions and is an unbiased estimator in this setting. Here K is set to a relatively small value, and a larger gap in performance between the two estimators can be expected for larger K.

¹ https://github.com/st-tech/zr-obp/blob/master/examples/quickstart/synthetic_slate.ipynb

Discussion and Conclusion

We proposed an estimator (SUnO) for off-policy estimation of the target reward distribution in slate recommendations modeled as a bandit problem. Under an additively decomposable conditional CDF, the estimator is unbiased and consistent. The proposed estimator leads to a significant reduction in estimation variance and an increase in effective sample size compared to the estimator of Chandak et al. (2021) in the slate setting. We demonstrate estimation gains on synthetic as well as real-world data experiments. The estimator also readily extends to other reward decompositions that capture the joint effects of slot actions. In future work, variance reduction techniques can be applied for further variance gains. For instance, one may consider a self-normalized version of SUnO that incurs some bias but provides further variance reduction, akin to weighted importance sampling (Koller and Friedman 2009). Control variates can also be adopted for variance reduction. For example, by recapitulating the analysis of (Vlassis et al.
2021), one can derive an optimal control variate (w*) for SUnO at each ν. Further analysis and experiments with such methods are left to future work. One must note that while SUnO provides unbiased estimates for the target CDF and its linear functions under our assumptions, many metrics of interest, such as variance or CVaR, are not linear functions of the CDF. Another direction for future work would be to extend our results to the unbiased estimation of risk functionals like CVaR (Huang et al. 2021). Finally, it would be interesting to develop techniques for discovering the decomposition structure in rewards so that the appropriate unbiased off-policy estimators can be used.

Acknowledgements

The research was supported by and partially conducted at Adobe Research. We are also immensely grateful to the four anonymous reviewers who shared their insights and feedback.

References

Altschuler, J. M.; Brunel, V.-E.; and Malek, A. 2019. Best Arm Identification for Contaminated Bandits. J. Mach. Learn. Res., 20(91): 1–39. Bobadilla, J.; Ortega, F.; Hernando, A.; and Gutiérrez, A. 2013. Recommender systems survey. Knowledge-based systems, 46: 109–132. Burges, C.; Shaked, T.; Renshaw, E.; Lazier, A.; Deeds, M.; Hamilton, N.; and Hullender, G. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, 89–96. Cesa-Bianchi, N.; and Lugosi, G. 2012. Combinatorial bandits. Journal of Computer and System Sciences, 78(5): 1404–1422. Chandak, Y.; Niekum, S.; da Silva, B.; Learned-Miller, E.; Brunskill, E.; and Thomas, P. S. 2021. Universal off-policy evaluation. Advances in Neural Information Processing Systems, 34. Chandak, Y.; Theocharous, G.; Kostas, J.; Jordan, S.; and Thomas, P. 2019. Learning action representations for reinforcement learning. In International conference on machine learning, 941–950. PMLR. Dudík, M.; Erhan, D.; Langford, J.; and Li, L. 2014. Doubly robust policy evaluation and optimization. Statistical Science, 29(4): 485–511. Dudík, M.; Langford, J.; and Li, L. 2011. Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601. Elahi, E.; and Chandrashekar, A. 2020. Learning representations of hierarchical slates in collaborative filtering. In Proceedings of the 14th ACM Conference on Recommender Systems, 703–707. Feitelson, D. G.; Frachtenberg, E.; and Beck, K. L. 2013. Development and deployment at Facebook. IEEE Internet Computing, 17(4): 8–17. Gomez-Uribe, C. A.; and Hunt, N. 2015. The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4): 1–19. Harper, F. M.; and Konstan, J. A. 2015. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4): 1–19. Horvitz, D. G.; and Thompson, D. J. 1952. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47(260): 663–685. Huang, A.; Leqi, L.; Lipton, Z.; and Azizzadenesheli, K. 2021. Off-policy risk assessment in contextual bandits. Advances in Neural Information Processing Systems, 34: 23714–23726. Ie, E.; Jain, V.; Wang, J.; Narvekar, S.; Agarwal, R.; Wu, R.; Cheng, H.-T.; Chandra, T.; and Boutilier, C. 2019. SlateQ: A tractable decomposition for reinforcement learning with recommendation sets. Järvelin, K.; and Kekäläinen, J. 2017.
IR evaluation methods for retrieving highly relevant documents. In ACM SIGIR Forum, volume 51, 243–250. ACM New York, NY, USA. Kale, S.; Reyzin, L.; and Schapire, R. E. 2010. Nonstochastic bandit slate problems. Advances in Neural Information Processing Systems, 23. Kallus, N.; and Uehara, M. 2019. Intrinsically efficient, stable, and bounded off-policy evaluation for reinforcement learning. Advances in neural information processing systems, 32. Keramati, R.; Dann, C.; Tamkin, A.; and Brunskill, E. 2020. Being optimistic to be conservative: Quickly learning a cvar policy. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 4436–4443. Kiyohara, H.; Saito, Y.; Matsuhiro, T.; Narita, Y.; Shimizu, N.; and Yamamoto, Y. 2022. Doubly robust off-policy evaluation for ranking policies under the cascade behavior model. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 487–497. Kohavi, R.; and Longbotham, R. 2017. Online Controlled Experiments and A/B Testing. Encyclopedia of machine learning and data mining, 7(8): 922–929. Koller, D.; and Friedman, N. 2009. Probabilistic graphical models: principles and techniques. MIT press. Kveton, B.; Wen, Z.; Ashkan, A.; and Szepesvari, C. 2015. Tight regret bounds for stochastic combinatorial semibandits. In Artificial Intelligence and Statistics, 535–543. PMLR. Li, L.; Chu, W.; Langford, J.; and Schapire, R. E. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, 661–670. Li, S.; Abbasi-Yadkori, Y.; Kveton, B.; Muthukrishnan, S.; Vinay, V.; and Wen, Z. 2018. Offline evaluation of ranking policies with click models. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1685–1694. Lu, J.; Wu, D.; Mao, M.; Wang, W.; and Zhang, G. 2015. Recommender system application developments: a survey. Decision support systems, 74: 12–32. McInerney, J.; Brost, B.; Chandar, P.; Mehrotra, R.; and Carterette, B. 2020. Counterfactual evaluation of slate recommendations with sequential reward interactions. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1779–1788. Rockafellar, R. T.; Uryasev, S.; et al. 2000. Optimization of conditional value-at-risk. Journal of risk, 2: 21–42. Saito, Y.; Shunsuke, A.; Megumi, M.; and Yusuke, N. 2020. Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation. arXiv preprint arXiv:2008.07146. Sarwar, B.; Karypis, G.; Konstan, J.; and Riedl, J. 2000. Analysis of recommendation algorithms for e-commerce. In Proceedings of the 2nd ACM Conference on Electronic Commerce, 158–167. Sen, P. K.; and Singer, J. M. 1994. Large sample methods in statistics: an introduction with applications, volume 25. CRC press. Shani, G.; and Gunawardana, A. 2011. Evaluating recommendation systems. In Recommender systems handbook, 257–297. Springer. Steck, H. 2019. Embarrassingly shallow autoencoders for sparse data. In The World Wide Web Conference, 3251– 3257. Stephens, M. A. 1974. EDF statistics for goodness of fit and some comparisons. Journal of the American statistical Association, 69(347): 730–737. Sunehag, P.; Evans, R.; Dulac-Arnold, G.; Zwols, Y.; Visentin, D.; and Coppin, B. 2015. Deep reinforcement learning with attention for slate markov decision processes with high-dimensional states and actions. arXiv preprint arXiv:1512.01124. Sutton, R. S.; and Barto, A. G. 2018. 
Reinforcement learning: An introduction. MIT press. Swaminathan, A.; Krishnamurthy, A.; Agarwal, A.; Dudik, M.; Langford, J.; Jose, D.; and Zitouni, I. 2017. Off-policy evaluation for slate recommendation. Advances in Neural Information Processing Systems, 30. Thomas, P.; Theocharous, G.; and Ghavamzadeh, M. 2015. High-confidence off-policy evaluation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29. Thomas, P. S. 2015. Safe reinforcement learning. Vlassis, N.; Chandrashekar, A.; Amat, F.; and Kallus, N. 2021. Control variates for slate off-policy evaluation. Advances in Neural Information Processing Systems, 34. Wang, Y.-X.; Agarwal, A.; and Dudík, M. 2017. Optimal and adaptive off-policy evaluation in contextual bandits. In International Conference on Machine Learning, 3589–3597. PMLR. Wen, Z.; Kveton, B.; and Ashkan, A. 2015. Efficient learning in large-scale combinatorial semi-bandits. In International Conference on Machine Learning, 1113–1122. PMLR. Wirch, J. L.; and Hardy, M. R. 2001. Distortion risk measures: Coherence and stochastic dominance. In International congress on insurance: Mathematics and economics, 15–17.
Learning Content-Enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation

Qi Bi, Shaodi You, Theo Gevers
Computer Vision Research Group, University of Amsterdam, Netherlands
{q.bi, s.you, th.gevers}@uva.nl

Abstract

Domain-generalized urban-scene semantic segmentation (USSS) aims to learn generalized semantic predictions across diverse urban-scene styles. Unlike generic domain gap challenges, USSS is unique in that the semantic categories are often similar in different urban scenes, while the styles can vary significantly due to changes in urban landscapes, weather conditions, lighting, and other factors. Existing approaches typically rely on convolutional neural networks (CNNs) to learn the content of urban scenes. In this paper, we propose a Content-enhanced Mask TransFormer (CMFormer) for domain-generalized USSS. The main idea is to enhance the focus of the fundamental component, the mask attention mechanism, in Transformer segmentation models on content information. We have observed through empirical analysis that a mask representation effectively captures pixel segments, albeit with reduced robustness to style variations. Conversely, its lower-resolution counterpart exhibits greater ability to accommodate style variations, while being less proficient in representing pixel segments. To harness the synergistic attributes of these two approaches, we introduce a novel content-enhanced mask attention mechanism. It learns mask queries from both the image feature and its down-sampled counterpart, aiming to simultaneously encapsulate the content and address stylistic variations. These features are fused into a Transformer decoder and integrated into a multi-resolution content-enhanced mask attention learning scheme. Extensive experiments conducted on various domain-generalized urban-scene segmentation datasets demonstrate that the proposed CMFormer significantly outperforms existing CNN-based methods by up to 14.0% mIoU and the contemporary HGFormer by up to 1.7% mIoU. The source code is publicly available at https://github.com/BiQiWHU/CMFormer.

Introduction

Urban-scene semantic segmentation (USSS) is a challenging problem because of the large scene variations due to changing landscape, weather, and lighting conditions (Sakaridis, Dai, and Van Gool 2021; Mirza et al. 2022; Bi, You, and Gevers 2023; Chen et al. 2022). Unreliable USSS can pose a significant risk to road users. Nevertheless, a segmentation model trained on a specific dataset cannot encompass all urban scenes across the globe. As a result, the segmentation model is prone to encountering unfamiliar urban scenes during the inference stage. Hence, domain generalization is essential for robust USSS (Pan et al. 2018; Huang et al. 2019a; Choi et al. 2021), where a segmentation model can effectively extrapolate its performance to urban scenes that it hasn't encountered before (Fig. 1).

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: (a) In domain generalized USSS, the domain gap is mainly from the extremely varied styles. (b) A segmentation model is supposed to show good generalization on unseen target domains.
In contrast to common domain generalization, domain-generalized USSS requires special attention because the domain gap is mainly caused by large style variations, whereas changes in semantics largely remain consistent (example in Fig. 2). Existing approaches can be divided into two groups. One group focuses on style de-coupling. This is usually achieved by a normalization (Pan et al. 2018; Huang et al. 2019a; Peng et al. 2022) or whitening (Pan et al. 2019; Choi et al. 2021; Xu et al. 2022; Peng et al. 2022) transformation. However, the de-coupling methodology falls short, as the content is not learnt in a robust way. The other group is based on adversarial domain training (Zhao et al. 2022; Lee et al. 2022; Zhong et al. 2022). However, these methods usually do not particularly focus on urban styles, and therefore their performance is limited.

Figure 2: Domain-generalized USSS demonstrates a distinctive feature of consistent content with diverse styles. An example is given for BDD100K and GTA5.

Recent work has shown that the mask-level segmentation Transformer (e.g., Mask2Former) (Ding et al. 2023) is a scalable learner for domain-generalized semantic segmentation. However, based on our empirical observations, a high-resolution mask-level representation excels at capturing content down to pixel semantics but is more susceptible to style variations. Conversely, its down-sampled counterpart is less proficient in representing content down to pixel semantics but exhibits greater resilience to style variations. A novel content-enhanced mask attention (CMA) mechanism is proposed. It jointly leverages both the mask representation and its down-sampled counterpart, which show complementary properties in representing content and handling style variation. Jointly using both features helps the styles to be uniformly distributed while the content is stabilized in a certain cluster. The proposed CMA takes the original image feature together with its down-sampled counterpart as input. Both features are fused to learn more robust content from their complementary properties. The proposed content-enhanced mask attention (CMA) mechanism can be integrated into existing mask-level segmentation Transformers in a learnable fashion. It consists of three key steps, namely, exploiting high-resolution properties, exploiting low-resolution properties, and content-enhanced fusion. Besides, it can also be seamlessly adapted to multi-resolution features. On this basis, a novel Content-enhanced Mask TransFormer (CMFormer) is proposed for domain-generalized USSS. Large-scale experiments are conducted with various domain-generalized USSS settings, i.e., trained on one dataset from (Richter et al. 2016; Ros et al. 2016; Cordts et al. 2016; Neuhold et al. 2017; Yu et al. 2018) as the source domain, and validated on the remaining four datasets as the unseen target domains. All the datasets contain the same 19 semantic categories as the content, but vary in terms of scene styles. The experiments show that the proposed CMFormer achieves up to 14.00% mIoU improvement compared to state-of-the-art CNN-based methods (e.g., SAW (Peng et al. 2022), WildNet (Lee et al. 2022)). Furthermore, it demonstrates an mIoU improvement of up to 1.7% compared to the contemporary HGFormer model (Ding et al.
2023). It also shows state-of-the-art performance on synthetic-to-real and clear-to-adverse generalization.

Our contribution is summarized as follows:
• A content-enhanced mask attention (CMA) mechanism is proposed to leverage the complementary content and style properties of the mask-level representation and its down-sampled counterpart.
• On top of CMA, a Content-enhanced Mask TransFormer (CMFormer) is proposed for domain-generalized urban-scene semantic segmentation.
• Extensive experiments show a large performance improvement over the existing SOTA by up to 14.0% mIoU, and over HGFormer by up to 1.7% mIoU.

Related Work

Domain Generalization has been studied in non-task-specific scenarios in the fields of both machine learning and computer vision. Hu et al. (Hu and Lee 2022) proposed a framework for image retrieval in an unsupervised setting. Zhou et al. (Zhou et al. 2020) proposed a framework to generalize to new homogeneous domains. Qiao et al. (Qiao, Zhao, and Peng 2020) and Peng et al. (Peng, Qiao, and Zhao 2022) proposed to learn domain generalization from a single source domain. Many other methods have also been proposed (Zhao et al. 2020; Mahajan, Tople, and Sharma 2021; Wang et al. 2020; Chattopadhyay, Balaji, and Hoffman 2020; Segu, Tonioni, and Tombari 2023).

Domain Generalized Semantic Segmentation is more practical than conventional semantic segmentation (Pan et al. 2022; Ji et al. 2021; Li et al. 2021; Ji et al. 2022; Zhou, Yi, and Bi 2021; Ye et al. 2021), as it focuses on the generalization of a segmentation model to unseen target domains. Existing methods focus on the generalization of in-the-wild (Piva, de Geus, and Dubbelman 2023), scribble (Tjio et al. 2022) and multi-source images (Kim et al. 2022; Lambert et al. 2020), where substantial alterations can occur in both the content and style.

Domain Generalized USSS focuses on the generalization of driving scenes (Cordts et al. 2016; Yu et al. 2018; Neuhold et al. 2017; Ros et al. 2016; Richter et al. 2016). These methods apply either a normalization transformation (e.g., IBN (Pan et al. 2018), IN (Huang et al. 2019a), SAN (Peng et al. 2022)) or a whitening transformation (e.g., IW (Pan et al. 2019), ISW (Choi et al. 2021), DIRL (Xu et al. 2022), SAW (Peng et al. 2022)) on the training domain, to enable the model to generalize better to the target domains. Other advanced methods for domain generalization in segmentation typically rely on external images to incorporate more diverse styles (Lee et al. 2022; Zhao et al. 2022; Zhong et al. 2022; Li et al. 2023), and leverage content consistency across multi-scale features (Yue et al. 2019). To the best of our knowledge, all of these methods are based on CNNs.

Mask Transformer for Semantic Segmentation uses the queries in the Transformer decoder to learn the masks, e.g., Segmenter (Strudel et al. 2021) and MaskFormer (Cheng, Schwing, and Kirillov 2021). More recently, Mask2Former (Cheng et al. 2022) further simplifies the pipeline of MaskFormer and achieves better performance.

Preliminary

Problem Definition. Domain generalization can be formulated as a worst-case problem (Li, Namkoong, and Xia 2021; Zhong et al. 2022; Volpi et al. 2018).
Given a source domain S, a set of unseen target domains T1, T2, · · · , and a model parameterized by θ with the task-specific loss L_task, the generic domain generalization task can be formulated as a worst-case problem, given by

$$\min_{\theta} \sup_{T:\, D(S; T_1, T_2, \cdots) \le \rho} \mathbb{E}_{T}\left[\mathcal{L}_{task}(\theta; T_1, T_2, \cdots)\right] \tag{1}$$

where θ denotes the model parameters, D(S; T1, T2, · · · ) corresponds to the distance between the source S and the target domains T, and ρ denotes the constraint threshold.

Figure 3: (a) In the domain-generalized USSS setting, within the content-style space, samples from various domains tend to cluster closely along the content dimension while displaying dispersion along the style dimension. (b) An optimal generalized semantic segmentation scenario would involve uniform distribution of styles while maintaining content stability (as indicated by the brown bounding box).

Content-style Feature Space. Here we analyze the feature space. Figure 3a illustrates that in the context of domain-generalized USSS, samples from distinct domains might exhibit analogous patterns and cluster tightly along the content dimension. Conversely, samples from diverse domains may segregate into separate clusters along the style dimension. An optimal and adaptable segmentation representation should achieve content stability while simultaneously exhibiting resilience in the face of significant style variations. Illustrated in Figure 3b, our objective is to cultivate a content-style space wherein: 1) samples from diverse domains can occupy analogous positions along the content dimension, and 2) samples can be uniformly dispersed across the style dimension. Both learning objectives allow us to minimize the domain gap.

Overall Idea. Recent work has shown that the mask-level segmentation Transformer (e.g., Mask2Former) (Ding et al. 2023) is a scalable learner for domain-generalized semantic segmentation. However, we empirically observe that a mask-level representation is better at representing content but more sensitive to style variations (similar to Fig. 3a); its low-resolution counterpart, on the contrary, is less capable of representing content but more robust to style variations (similar to the style dimension in Fig. 3b). Overall, the mask representation and its down-sampled counterpart show complementary properties when handling samples from different domains. Thus, it is natural to jointly leverage both the mask representation and its down-sampled counterpart, so as to simultaneously stabilize the content and remain insensitive to style variation.

Difference between Existing Pipelines. Existing methods usually focus on decoupling the styles from urban scenes, so that along the style dimension the samples from different domains are more uniformly distributed. In contrast, the proposed method intends to leverage the content representation ability of mask-level features and the style handling ability of their down-sampled counterparts, so as to realize the aforementioned learning objective.

Methodology

Recap on Mask Attention. Recent studies show that mask-level pipelines (Strudel et al. 2021; Cheng, Schwing, and Kirillov 2021; Cheng et al. 2022) have stronger representation ability than conventional pixel-wise pipelines for semantic segmentation, which can be attributed to the mask attention mechanism.
It learns the query features as the segmentation masks by introducing a mask attention matrix based on the self-attention mechanism. Let F_l and X_l denote the image features from the image decoder and the features of the l-th layer in a Transformer decoder, respectively. When l = 0, X_0 refers to the input query features of the Transformer decoder. The key K_l and value V_l on F_{l−1} are computed by linear transformations f_K and f_V, respectively. Similarly, the query Q_l on X_{l−1} is computed by the linear transformation f_Q. Then, the query feature X_l is computed by

$$X_l = \mathrm{softmax}(\mathcal{M}_{l-1} + Q_l K_l^{T}) V_l + X_{l-1} \tag{2}$$

where M_{l−1} ∈ {0, 1}^{N×H_lW_l} is a binary mask attention matrix from the resized mask prediction of the previous (l−1)-th layer, with a threshold of 0.5. M_0 is binarized and resized from X_0. It filters the foreground regions of an image, given by

$$\mathcal{M}_{l-1}(x, y) = \begin{cases} 0 & \text{if } M_{l-1}(x, y) = 1 \\ -\infty & \text{otherwise} \end{cases} \tag{3}$$

Exploiting High-Resolution Properties. Highlighted within the green block in Figure 4, our empirical observations reveal that the high-resolution mask representation exhibits the following characteristics: 1) greater proficiency in content representation, and 2) reduced robustness to style variation; achieving uniform mixing of samples from the four domains presents a challenge. To leverage the properties of high-resolution mask representations, we use the self-attention mechanism to exploit the amplified content representation from X_l. Let Q_{X_l}, V_{X_l} and K_{X_l} denote its query, value and key, and let d_k denote their dimension. Then, the self-attention is computed as

$$\mathrm{Attention}(Q_{X_l}, K_{X_l}, V_{X_l}) = \mathrm{Softmax}\!\left(\frac{Q_{X_l} K_{X_l}^{T}}{\sqrt{d_k}}\right) V_{X_l} \tag{4}$$

where Softmax denotes the softmax normalization function, and the final output is denoted as X̃_l.

Figure 4: (a) The proposed Content-enhanced Mask Attention (CMA) consists of three key steps, namely, exploiting high-resolution properties (in green), exploiting low-resolution properties (in brown), and content-enhanced fusion (in gray). (b) Framework overview (in yellow) of the proposed Content-enhanced Mask TransFormer (CMFormer) for domain generalized semantic segmentation. The image decoder is directly inherited from Mask2Former (Cheng et al. 2022).

Exploiting Low-Resolution Properties. As shown in the brown block of Fig. 4, the low-resolution mask-level representation has the following properties: 1) it is less qualified to represent the content; 2) it is more capable of handling style variation. In the feature space, samples from different domains are more uniformly distributed. We propose to build a low-resolution mask representation derived from its high-resolution counterpart. This approach capitalizes on the attributes of the low-resolution representation to effectively address domain variations. The low-resolution counterpart F_l^d is computed by average pooling (avgpool) from the original image feature F_l by

$$F_l^{d} = \mathrm{avgpool}(F_l) \tag{5}$$

where the width and height of F_l are both twice the width and height of F_l^d.
Similarly, the key and value from F_l^d are computed by linear transformations and are denoted as K_l^d and V_l^d, respectively. The query from X_{l−1}^d is also computed by a linear transformation and is denoted as Q_l^d. The mask attention on the low-resolution feature X_l^d is computed as

$$X_l^{d} = \mathrm{softmax}(\mathcal{M}_{l-1}^{d} + Q_l^{d} (K_l^{d})^{T}) V_l^{d} + X_{l-1}^{d} \tag{6}$$

To exploit the properties of the low-resolution mask representation X_l^d, we use the self-attention mechanism. Let Q_{X_l^d}, V_{X_l^d} and K_{X_l^d} denote its query, value and key. Then, the self-attention is computed by

$$\mathrm{Attention}(Q_{X_l^d}, K_{X_l^d}, V_{X_l^d}) = \mathrm{Softmax}\!\left(\frac{Q_{X_l^d} K_{X_l^d}^{T}}{\sqrt{d_k}}\right) V_{X_l^d} \tag{7}$$

The final output is denoted as X̃_l^d. It inherits the characteristics of the low-resolution mask representation, which is adept at accommodating style variations while being less resilient in capturing pixel-level intricacies.

Content-enhanced Fusion. Our idea is to leverage the complementary properties of the mask-level representation and its down-sampled counterpart, so as to enhance both the pixel-wise representation and the handling of style variation (shown in the gray box of Fig. 4). The joint use of both representations aids the segmentation masks in concentrating on scene content while reducing sensitivity to style variations. To this end, we fuse both representations X̃_l and X̃_l^d in a simple and straightforward way. The fused feature X_l^final serves as the final output of the l-th Transformer decoder layer, and it is computed as

$$X_l^{final} = h_l([\tilde{X}_l, \tilde{X}_l^{d}]) \tag{8}$$

where [·, ·] represents the concatenation operation, and h_l(·) refers to a linear layer.

Network Architecture and Implementation Details. The overall framework is shown in the yellow box of Fig. 4. The Swin Transformer (Liu et al. 2021) is used as the backbone, initialized with weights pre-trained on ImageNet (Deng et al. 2009). The image decoder from (Cheng et al. 2022) uses the off-the-shelf multi-scale deformable attention Transformer (MSDeformAttn) (Zhu et al. 2021) with the default setting in (Zhu et al. 2021; Cheng et al. 2022). Taking the image features from the Swin-based encoder as input, every six MSDeformAttn layers progressively up-sample the image features to the ×32, ×16, ×8, and ×4 scales, respectively. The 1/4-resolution feature map is fused with the features from the Transformer decoder for dense prediction. The Transformer decoder is also directly inherited from Mask2Former (Cheng et al. 2022), which has 9 self-attention layers in the Transformer decoder to handle the ×32, ×16 and ×8 image features, respectively.
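To make the three CMA steps concrete, below is a minimal PyTorch sketch of one CMA block. It is our own illustration under simplifying assumptions (single-head attention, a square feature map, one shared self-attention module for both branches, and max-pooling as a stand-in for resizing the mask predictions); it is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEnhancedMaskAttention(nn.Module):
    """Sketch of CMA: high-res branch, low-res branch, content-enhanced fusion."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)   # f_Q
        self.to_k = nn.Linear(dim, dim)   # f_K
        self.to_v = nn.Linear(dim, dim)   # f_V
        self.self_attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)  # h_l in Eq. (8)

    def masked_attn(self, queries, feat, mask_logits):
        # Eqs. (2)/(6): attention biased by the binarized previous mask.
        # Eq. (3) assigns 0 to foreground and -inf elsewhere; a large negative
        # value is used here instead to avoid NaNs when a mask is empty.
        q, k, v = self.to_q(queries), self.to_k(feat), self.to_v(feat)
        bias = torch.zeros_like(mask_logits)
        bias = bias.masked_fill(mask_logits.sigmoid() <= 0.5, -1e9)
        attn = torch.softmax(q @ k.transpose(-2, -1) + bias, dim=-1)
        return attn @ v + queries

    def forward(self, queries, feat, mask_logits):
        # queries: (B, N, C), feat: (B, H*W, C), mask_logits: (B, N, H*W)
        B, HW, C = feat.shape
        H = W = int(round(HW ** 0.5))  # assume a square feature map
        grid = feat.transpose(1, 2).reshape(B, C, H, W)
        feat_d = F.avg_pool2d(grid, 2).flatten(2).transpose(1, 2)        # Eq. (5)
        mask_d = F.max_pool2d(mask_logits.reshape(B, -1, H, W), 2).flatten(2)

        x_hi = self.masked_attn(queries, feat, mask_logits)              # Eq. (2)
        x_lo = self.masked_attn(queries, feat_d, mask_d)                 # Eq. (6)
        x_hi, _ = self.self_attn(x_hi, x_hi, x_hi)                       # Eq. (4)
        x_lo, _ = self.self_attn(x_lo, x_lo, x_lo)                       # Eq. (7)
        return self.fuse(torch.cat([x_hi, x_lo], dim=-1))                # Eq. (8)
```

In the full CMFormer, a block of this kind replaces the masked-attention step in each Mask2Former decoder layer and is applied at the ×32, ×16, and ×8 feature resolutions.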
These driving scenes are captured in German cities at a resolution of 2048×1024. BDD100K (Yu et al. 2018) also provides diverse urban driving scenes at a resolution of 1280×720; 7,000 and 1,000 fine-annotated samples are provided for training and validation of semantic segmentation, respectively. Mapillary (Neuhold et al. 2017) is also a real-scene large-scale semantic segmentation dataset with 25,000 samples. SYNTHIA (Ros et al. 2016) is a large-scale synthetic dataset and provides 9,400 images at a resolution of 1280×760. GTA5 (Richter et al. 2016) is a synthetic semantic segmentation dataset rendered by the GTAV game engine; it provides 24,966 simulated urban-street samples at a resolution of 1914×1052. We use C, B, M, S and G to denote these five datasets.

Following prior domain-generalized USSS works (Pan et al. 2018, 2019; Choi et al. 2021; Peng et al. 2022), the segmentation model is trained on one dataset as the source domain and validated on the remaining four datasets as the target domains. The three settings are: 1) G to C, B, M, S; 2) S to C, B, M, G; and 3) C to B, M, G, S. mIoU (in percentage %) is used as the validation metric. All of our experiments are performed three times and averaged for fair comparison. All the reported performance is directly cited from prior works under the ResNet-50 backbone (Pan et al. 2018, 2019; Choi et al. 2021; Peng et al. 2022). Existing domain-generalized USSS methods are included for comparison, namely, IBN (Pan et al. 2018), IW (Pan et al. 2019), Iternorm (Huang et al. 2019b), DRPC (Yue et al. 2019), ISW (Choi et al. 2021), GTR (Peng et al. 2021), DIRL (Xu et al. 2022), SHADE (Zhao et al. 2022), SAW (Peng et al. 2022), WildNet (Lee et al. 2022), AdvStyle (Zhong et al. 2022), SPC (Huang et al. 2023), and HGFormer (Ding et al. 2023).

Comparison with State-of-the-art

GTA5 Source Domain. Table 1 reports the performance on the target domains C, B, M and S, respectively. The proposed CMFormer shows a performance improvement of 10.66%, 9.45%, 14.00% and 12.46% compared to existing state-of-the-art CNN-based methods on each target domain, respectively. These outcomes demonstrate the feature generalization ability of the proposed CMFormer. Notice that the source domain GTA5 is a synthetic dataset, while the target domains are real images. This further validates the performance of the proposed method.

Method | →C | →B | →M | →S
IBN | 33.85 | 32.30 | 37.75 | 27.90
IW | 29.91 | 27.48 | 29.71 | 27.61
Iternorm | 31.81 | 32.70 | 33.88 | 27.07
DRPC | 37.42 | 32.14 | 34.12 | 28.06
ISW | 36.58 | 35.20 | 40.33 | 28.30
GTR | 37.53 | 33.75 | 34.52 | 28.17
DIRL | 41.04 | 39.15 | 41.60 | -
SHADE | 44.65 | 39.28 | 43.34 | -
SAW | 39.75 | 37.34 | 41.86 | 30.79
WildNet | 44.62 | 38.42 | 46.09 | 31.34
AdvStyle | 39.62 | 35.54 | 37.00 | -
SPC | 44.10 | 40.46 | 45.51 | -
CMFormer (Ours) | 55.31 | 49.91 | 60.09 | 43.80

Table 1: G →{C, B, M, S} setting (trained on GTA5). Performance comparison between the proposed CMFormer and existing domain-generalized USSS methods. '-': the metric is either not reported or the official source code is not available. Evaluation metric mIoU is given in (%).

SYNTHIA Source Domain. Table 2 reports the performance. The proposed CMFormer shows a 5.67%, 8.73% and 11.49% mIoU performance gain against the best CNN-based methods, respectively. However, on the BDD100K (B) dataset, the semantic-aware whitening (SAW) method (Peng et al. 2022) outperforms the proposed CMFormer by 1.80% mIoU. Nevertheless, the proposed CMFormer still outperforms the remaining methods.
The performance gain of the proposed CMFormer when trained on the SYNTHIA dataset is not as significant as when it is trained on the CityScapes or GTA5 dataset. The explanation may be that the SYNTHIA dataset has far fewer samples than the GTA5 dataset, i.e., 9,400 vs. 24,966, and a transformer may be under-trained.

Method | →C | →B | →M | →G
IBN | 32.04 | 30.57 | 32.16 | 26.90
IW | 28.16 | 27.12 | 26.31 | 26.51
DRPC | 35.65 | 31.53 | 32.74 | 28.75
ISW | 35.83 | 31.62 | 30.84 | 27.68
GTR | 36.84 | 32.02 | 32.89 | 28.02
SAW | 38.92 | 35.24 | 34.52 | 29.16
AdvStyle | 37.59 | 27.45 | 31.76 | -
CMFormer (Ours) | 44.59 | 33.44 | 43.25 | 40.65

Table 2: S →{C, B, M, G} setting (trained on SYNTHIA). Performance comparison between the proposed CMFormer and existing domain-generalized USSS methods. '-': the metric is either not reported or the official source code is not available. Evaluation metric mIoU is given in (%).

CityScapes Source Domain. Table 3 reports the performance. As HGFormer reports only one-decimal results (Ding et al. 2023), we also report one-decimal results when comparing with it. The proposed CMFormer (with the Swin-Base backbone) shows a performance gain of 6.32%, 10.43%, 9.50% and 12.11% mIoU on the B, M, G and S datasets against the state-of-the-art CNN-based methods. As the BDD100K dataset contains many night-time urban-street images, it is particularly challenging for existing domain-generalized USSS methods. Still, a performance gain of 6.32% is observed for the proposed CMFormer. On the other hand, when comparing ours with the contemporary HGFormer under the Swin-Large backbone, it shows an mIoU improvement of 1.1%, 1.5%, 1.3% and 1.7% on the B, M, G and S target domains, respectively.

Method | Backbone | →B | →M | →G | →S
IBN | Res50 | 48.56 | 57.04 | 45.06 | 26.14
IW | Res50 | 48.49 | 55.82 | 44.87 | 26.10
Iternorm | Res50 | 49.23 | 56.26 | 45.73 | 25.98
DRPC | Res50 | 49.86 | 56.34 | 45.62 | 26.58
ISW | Res50 | 50.73 | 58.64 | 45.00 | 26.20
GTR | Res50 | 50.75 | 57.16 | 45.79 | 26.47
DIRL | Res50 | 51.80 | - | 46.52 | 26.50
SHADE | Res50 | 50.95 | 60.67 | 48.61 | 27.62
SAW | Res50 | 52.95 | 59.81 | 47.28 | 28.32
WildNet | Res50 | 50.94 | 58.79 | 47.01 | 27.95
HGFormer† | Swin-T | 53.4 | 66.9 | 51.3 | 33.6
Ours | Swin-B | 59.27 | 71.10 | 58.11 | 40.43
HGFormer† | Swin-L | 61.5 | 72.1 | 59.4 | 41.3
Ours | Swin-L | 62.6 | 73.6 | 60.7 | 43.0

Table 3: C →{B, M, G, S} setting (trained on Cityscapes). Performance comparison between the proposed CMFormer and existing domain-generalized USSS methods. '-': the metric is either not reported or the official source code is not available. Evaluation metric mIoU is given in (%). †: HGFormer only reports one-decimal results (Ding et al. 2023).

From Synthetic Domain to Real Domain. We also test the generalization ability of the CMFormer when trained on the synthetic domains (G+S) and validated on the three real-world domains B, C and M, respectively. The results are shown in Table 4. The proposed CMFormer significantly outperforms the instance normalization based (IBN (Pan et al. 2018)), whitening transformation based (ISW (Choi et al. 2021)) and adversarial domain training based (SHADE (Zhao et al. 2022), AdvStyle (Zhong et al. 2022)) methods by >10% mIoU.

From Clear to Adverse Conditions. We further validate the proposed CMFormer's performance on the Adverse Conditions Dataset with Correspondences (ACDC) (Sakaridis, Dai, and Van Gool 2021). We set fog, night, rain and snow as four different unseen domains, and directly use the model pre-trained on CityScapes for inference. The results are shown in Table 5. It significantly outperforms existing domain-generalized segmentation methods (Pan et al.
2018; Huang et al. 2019a; Pan et al. 2019; Choi et al. 2021; Li et al. 2023) by up to 10.3%, 0.5%, 11.6% and 11.1% on the fog, night, rain and snow domains, respectively.

Method (trained on G+S) | →Citys | →BDD | →MAP | mean
Res50 | 35.46 | 25.09 | 31.94 | 30.83
IBN | 35.55 | 32.18 | 38.09 | 35.27
ISW | 37.69 | 34.09 | 38.49 | 36.75
SHADE | 47.43 | 40.30 | 47.60 | 45.11
AdvStyle | 39.29 | 39.26 | 41.14 | 39.90
SPC | 46.36 | 43.18 | 48.23 | 45.92
Ours | 59.70 | 53.36 | 61.61 | 58.22

Table 4: Generalization of the proposed CMFormer when trained on the two synthetic domains (G+S) and generalized to real domains. Evaluation metric mIoU is presented in (%).

Method (trained on C) | →Fog | →Night | →Rain | →Snow
IBN | 63.8 | 21.2 | 50.4 | 49.6
IW | 62.4 | 21.8 | 52.4 | 47.6
ISW | 64.3 | 24.3 | 56.0 | 49.8
ISSA | 67.5 | 33.2 | 55.9 | 53.2
Ours | 77.8 | 33.7 | 67.6 | 64.3

Table 5: Generalization of the proposed CMFormer (trained on Cityscapes) to the adverse-condition domains (rain, fog, night and snow) of the ACDC dataset (Sakaridis, Dai, and Van Gool 2021).

Ablation Studies

On Content-enhancement of Each Resolution. Table 6 reports the performance of the proposed CMFormer when the ×32, ×16 and ×8 image features are or are not implemented with content enhancement. Content enhancement on a feature of a certain resolution enables the exploitation of its low-resolution properties.

×32 | ×16 | ×8 | →B | →M | →G | →S
 |  |  | 55.43 | 66.12 | 55.05 | 38.19
✓ |  |  | 56.17 | 67.55 | 55.42 | 38.83
✓ | ✓ |  | 58.10 | 69.72 | 55.54 | 39.41
✓ | ✓ | ✓ | 59.27 | 71.10 | 58.11 | 40.43

Table 6: Ablation studies on each component of the proposed CMFormer (trained on CityScapes). ×32, ×16 and ×8 denote the image features at ×32, ×16 and ×8 resolution. ✓ indicates that content enhancement is implemented. Evaluation metric mIoU.

When no image features are implemented with content enhancement, CMFormer degrades into a Mask2Former (Cheng et al. 2022), which only includes the high-resolution properties. When only implementing content enhancement on the ×32 image feature, the down-sampled ×128 image feature may propagate little content information to the segmentation mask, and only a performance gain of 0.74%, 1.43%, 0.37% and 0.64% on the B, M, G and S target domains is observed. When further implementing content enhancement on the ×16 image feature, the enhanced content information begins to play a role, and an additional performance gain of 1.93%, 2.17%, 0.12% and 0.58% is observed. Then, the content enhancement on the ×8 image feature also demonstrates a significant impact on the generalization ability.

Figure 5: Unseen-domain segmentation predictions of existing CNN-based domain generalized semantic segmentation methods (IBN (Pan et al. 2018), IW (Pan et al. 2019), ISW (Choi et al. 2021), SAW (Peng et al. 2022)) and the proposed CMFormer under the C →B, M, G, S setting.

Figure 6: Unseen-domain segmentation predictions of existing CNN-based domain generalized semantic segmentation methods (IBN (Pan et al. 2018), IW (Pan et al. 2019), ISW (Choi et al. 2021), SAW (Peng et al. 2022)) and the proposed CMFormer under the C →adverse domain setting.

Qualitative Segmentation Results. Some segmentation results on the C →B, M, G, S setting and the C →adverse domain setting are visualized in Figs. 5 and 6. Compared with the CNN-based methods, the proposed CMFormer shows better segmentation predictions, especially in terms of the completeness of objects.
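Since mIoU is the evaluation metric used throughout these experiments, a small hedged sketch of how it is typically computed from a confusion matrix is given below (our own illustration, assuming the 19 shared semantic categories and a standard ignore label; the reported numbers come from the respective benchmarks' standard toolkits, not this code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes=19, ignore_index=255):
    """Mean intersection-over-union from flattened integer label maps."""
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]
    # Confusion matrix: rows = ground truth, columns = prediction
    cm = np.bincount(gt * num_classes + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)   # guard against division by zero
    return float(np.mean(iou[union > 0]))  # average over classes present
```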
Conclusion

In this paper, we explored the feasibility of adapting the mask Transformer for domain-generalized urban-scene semantic segmentation (USSS). To address the challenges of style variation and robust content representation, we proposed a content-enhanced mask attention (CMA) mechanism. This mechanism is designed to capture more resilient content features while being less sensitive to style variations. Furthermore, we integrate it into a novel framework called the Content-enhanced Mask TransFormer (CMFormer). Extensive experiments on multiple settings demonstrated the superior performance of CMFormer compared to existing domain-generalized USSS methods.

Broader Social Impact. The proposed method has the potential to enhance the accuracy and reliability of semantic segmentation models, thereby contributing to safer and more efficient autonomous systems. Overall, the proposed content-enhanced mask attention mechanism offers promising advancements in domain-generalized USSS.

References

Bi, Q.; You, S.; and Gevers, T. 2023. Interactive Learning of Intrinsic and Extrinsic Properties for All-Day Semantic Segmentation. IEEE Transactions on Image Processing, 32: 3821–3835. Chattopadhyay, P.; Balaji, Y.; and Hoffman, J. 2020. Learning to balance specificity and invariance for in and out of domain generalization. In European Conference on Computer Vision, 301–318. Springer. Chen, W.-T.; Huang, Z.-K.; Tsai, C.-C.; Yang, H.-H.; Ding, J.-J.; and Kuo, S.-Y. 2022. Learning Multiple Adverse Weather Removal via Two-Stage Knowledge Learning and Multi-Contrastive Regularization: Toward a Unified Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 17653–17662. Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1290–1299. Cheng, B.; Schwing, A.; and Kirillov, A. 2021. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34: 17864–17875. Choi, S.; Jung, S.; Yun, H.; Kim, J.; Kim, S.; and Choo, J. 2021. RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11580–11590. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3213–3223. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Ding, J.; Xue, N.; Xia, G.-S.; Schiele, B.; and Dai, D. 2023. HGFormer: Hierarchical Grouping Transformer for Domain Generalized Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15413–15423. Hu, C.; and Lee, G. H. 2022. Feature Representation Learning for Unsupervised Cross-Domain Image Retrieval. In European Conference on Computer Vision, 529–544. Springer. Huang, L.; Zhou, Y.; Zhu, F.; Liu, L.; and Shao, L. 2019a. Iterative Normalization: Beyond Standardization towards Efficient Whitening.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4874–4883. Huang, L.; Zhou, Y.; Zhu, F.; Liu, L.; and Shao, L. 2019b. Iterative normalization: Beyond standardization towards efficient whitening. In Proceedings of the ieee/cvf conference on computer vision and pattern recognition, 4874–4883. Huang, W.; Chen, C.; Li, Y.; Li, J.; Li, C.; Song, F.; Yan, Y.; and Xiong, Z. 2023. Style Projected Clustering for Domain Generalized Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3061–3071. Ji, W.; Li, J.; Bi, Q.; Liu, J.; Cheng, L.; et al. 2022. Promoting Saliency From Depth: Deep Unsupervised RGB-D Saliency Detection. In International Conference on Learning Representations. Ji, W.; Yu, S.; Wu, J.; Ma, K.; Bian, C.; Bi, Q.; Li, J.; Liu, H.; Cheng, L.; and Zheng, Y. 2021. Learning calibrated medical image segmentation via multi-rater agreement modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12341–12351. Kim, J.; Lee, J.; Park, J.; Min, D.; and Sohn, K. 2022. Pin the memory: Learning to generalize semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4350–4360. Lambert, J.; Liu, Z.; Sener, O.; Hays, J.; and Koltun, V. 2020. MSeg: A composite dataset for multi-domain semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2879–2888. Lee, S.; Seong, H.; Lee, S.; and Kim, E. 2022. WildNet: Learning Domain Generalized Semantic Segmentation from the Wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9936–9946. Li, J.; Ji, W.; Bi, Q.; Yan, C.; Zhang, M.; Piao, Y.; Lu, H.; et al. 2021. Joint semantic mining for weakly supervised RGB-D salient object detection. Advances in Neural Information Processing Systems, 34: 11945–11959. Li, M.; Namkoong, H.; and Xia, S. 2021. Evaluating model performance under worst-case subpopulations. Advances in Neural Information Processing Systems, 34: 17325–17334. Li, Y.; Zhang, D.; Keuper, M.; and Khoreva, A. 2023. IntraSource Style Augmentation for Improved Domain Generalization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 509–519. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (CVPR), 10012–10022. Mahajan, D.; Tople, S.; and Sharma, A. 2021. Domain generalization using causal matching. In International Conference on Machine Learning, 7313–7324. PMLR. Mirza, M. J.; Masana, M.; Possegger, H.; and Bischof, H. 2022. An Efficient Domain-Incremental Learning Approach to Drive in All Weather Conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3001–3011. Neuhold, G.; Ollmann, T.; Rota Bulo, S.; and Kontschieder, P. 2017. The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE international conference on computer vision, 4990–4999. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 826 Pan, J.; Bi, Q.; Yang, Y.; Zhu, P.; and Bian, C. 2022. Labelefficient hybrid-supervised learning for medical image segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2026–2034. Pan, X.; Luo, P.; Shi, J.; and Tang, X. 2018. 
Two at Once: Enhancing Learning and Generalization Capacities via IBNNet. In Proceedings of the European Conference on Computer Vision (ECCV), 464–479. Pan, X.; Zhan, X.; Shi, J.; Tang, X.; and Luo, P. 2019. Switchable Whitening for Deep Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1863–1871. Peng, D.; Lei, Y.; Hayat, M.; Guo, Y.; and Li, W. 2022. Semantic-aware domain generalized segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2594–2605. Peng, D.; Lei, Y.; Liu, L.; Zhang, P.; and Liu, J. 2021. Global and local texture randomization for synthetic-to-real semantic segmentation. IEEE Transactions on Image Processing, 30: 6594–6608. Peng, X.; Qiao, F.; and Zhao, L. 2022. Out-of-Domain Generalization From a Single Source: An Uncertainty Quantification Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence. Piva, F. J.; de Geus, D.; and Dubbelman, G. 2023. Empirical Generalization Study: Unsupervised Domain Adaptation vs. Domain Generalization Methods for Semantic Segmentation in the Wild. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 499–508. Qiao, F.; Zhao, L.; and Peng, X. 2020. Learning to learn single domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12556–12565. Richter, S. R.; Vineet, V.; Roth, S.; and Koltun, V. 2016. Playing for data: Ground truth from computer games. In European conference on computer vision, 102–118. Springer. Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; and Lopez, A. M. 2016. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3234–3243. Sakaridis, C.; Dai, D.; and Van Gool, L. 2021. ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 10765–10775. Segu, M.; Tonioni, A.; and Tombari, F. 2023. Batch normalization embeddings for deep domain generalization. Pattern Recognition, 135: 109115. Strudel, R.; Garcia, R.; Laptev, I.; and Schmid, C. 2021. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, 7262–7272. Tjio, G.; Liu, P.; Zhou, J. T.; and Goh, R. S. M. 2022. Adversarial semantic hallucination for domain generalized semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 318–327. Volpi, R.; Namkoong, H.; Sener, O.; Duchi, J. C.; Murino, V.; and Savarese, S. 2018. Generalizing to unseen domains via adversarial data augmentation. Advances in neural information processing systems, 31. Wang, S.; Yu, L.; Li, C.; Fu, C.-W.; and Heng, P.-A. 2020. Learning from extrinsic and intrinsic supervisions for domain generalization. In European Conference on Computer Vision, 159–176. Springer. Xu, Q.; Yao, L.; Jiang, Z.; Jiang, G.; Chu, W.; Han, W.; Zhang, W.; Wang, C.; and Tai, Y. 2022. DIRL: Domaininvariant representation learning for generalizable semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2884–2892. Ye, Q.; Shen, X.; Gao, Y.; Wang, Z.; Bi, Q.; Li, P.; and Yang, G. 2021. Temporal cue guided video highlight detection with low-rank audio-visual fusion. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7950–7959. Yu, F.; Xian, W.; Chen, Y.; Liu, F.; Liao, M.; Madhavan, V.; and Darrell, T. 2018. Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 2(5): 6. Yue, X.; Zhang, Y.; Zhao, S.; Sangiovanni-Vincentelli, A.; Keutzer, K.; and Gong, B. 2019. Domain Randomization and Pyramid Consistency: Simulation-to-Real Generalization Without Accessing Target Domain Data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2100–2110. Zhao, S.; Gong, M.; Liu, T.; Fu, H.; and Tao, D. 2020. Domain generalization via entropy regularization. Advances in Neural Information Processing Systems, 33: 16096–16107. Zhao, Y.; Zhong, Z.; Zhao, N.; Sebe, N.; and Lee, G. H. 2022. Style-hallucinated dual consistency learning for domain generalized semantic segmentation. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII, 535– 552. Springer. Zhong, Z.; Zhao, Y.; Lee, G. H.; and Sebe, N. 2022. Adversarial Style Augmentation for Domain Generalized UrbanScene Segmentation. In Advances in Neural Information Processing Systems. Zhou, B.; Yi, J.; and Bi, Q. 2021. Differential convolution feature guided deep multi-scale multiple instance learning for aerial scene classification. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing, 4595–4599. Zhou, K.; Yang, Y.; Hospedales, T.; and Xiang, T. 2020. Learning to generate novel domains for domain generalization. In European conference on computer vision, 561–578. Springer. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2021. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In International Conference on Learning Representations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 827
2024
92
18,762
Uncertainty-Aware Yield Prediction with Multimodal Molecular Features Jiayuan Chen1, Kehan Guo2, Zhen Liu3, Olexandr Isayev3, Xiangliang Zhang2* 1The Ohio State University 2 Department of Computer Science and Engineering, University of Notre Dame 3Department of Chemistry, Carnegie Mellon University [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Predicting chemical reaction yields is pivotal for efficient chemical synthesis, an area that focuses on the creation of novel compounds for diverse uses. Yield prediction demands accurate representations of reactions for forecasting practical transformation rates. Yet, the uncertainty issues broadcasting in real-world situations prohibit current models to excel in this task owing to the high sensitivity of yield activities and the uncertainty in yield measurements. Existing models often utilize single-modal feature representations, such as molecular fingerprints, SMILES sequences, or molecular graphs, which is not sufficient to capture the complex interactions and dynamic behavior of molecules in reactions. In this paper, we present an advanced Uncertainty-Aware Multimodal model (UAM) to tackle these challenges. Our approach seamlessly integrates data sources from multiple modalities by encompassing sequence representations, molecular graphs, and expert-defined chemical reaction features for a comprehensive representation of reactions. Additionally, we address both the model and data-based uncertainty, refining the model’s predictive capability. Extensive experiments on three datasets, including two high throughput experiment (HTE) datasets and one chemist-constructed Amide coupling reaction dataset, demonstrate that UAM outperforms the stateof-the-art methods. The code and used datasets are available at https://github.com/jychen229/Multimodal-reaction-yieldprediction. Introduction Computer-Assisted Synthesis Prediction (CASP) has emerged as a key area of focus in the intersection of artificial intelligence in scientific domains. The goal of CASP revolves around tackling a diverse array of chemical challenges, including the prediction of reaction products (Coley et al. 2017) and the intricacies of retro-synthesis (Ishida et al. 2019). Yield prediction, among the spectrum of CASP tasks, is particularly crucial. The target of yield prediction is to accurately estimate the practical conversion rates in chemical reactions, illustrating the transition from reactants to products. In this context, yield prediction lays the foundation for reaction-related predictions, thereby supporting the advancements in CASP (Ahneman et al. 2018). *The corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. When conceptualized as a machine learning problem, yield prediction is essentially a regression task. The development of an effective yield prediction model depends critically on obtaining high-quality representations of the reactants and products involved in chemical reactions. Early, molecular fingerprints were employed to depict chemical structures, yet their efficacy in handling complex structures was limited. Deep learning-based methods can automatically learn intricate patterns and features from data. For instance, (Schwaller et al. 2020) employ BERT (Devlin et al. 
2018), a bidirectional transformer language model, for learning the representation of molecules involved in chemical reactions based on their sequential SMILES expressions. This learned representation is then utilized in a subsequent regression model to predict yields. Similarly, (Kwon et al. 2022) employ molecular graphs to represent molecules within chemical reactions and utilize graph neural networks to learn useful features for yield prediction. These current yield prediction models exhibit strong performance on specially curated reaction datasets, such as the HighThroughput (HTE) datasets (Ahneman et al. 2018; Perera et al. 2018). However, when applied to real-world tasks, their efficacy diminishes significantly (Saebi et al. 2023). One primary reason for this decline is the pervasive issue of uncertainty in real-world yield prediction datasets, manifesting in two major aspects. High sensitivity of yield. In chemical reactions, structural isomers—compounds with identical molecular formulas but different arrangements of atoms—can significantly impact the yield. Even minor structural variations within the reactants themselves can lead to pronounced discrepancies in the resulting yields. For example, the addition of a methoxy group that is far from the reaction center can lower the reaction center by as much as 55% (Schierle et al. 2020). This highlights how real-world reactions can be extremely sensitive to slight variations in the reactants and products involved. Existing models, as referenced by (Schwaller et al. 2021), primarily utilize single-modal data such as graphs or sequences, and thus may not adequately capture the subtle structural variations in molecules. These subtle yet critical variations include minor differences in stereochemistry and the presence of specific functional groups, both of which can have a significant impact on reaction pathways and yields. Uncertainty in the yield measurement. The yield from The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8274 the reaction process depends on many factors in the reaction cycle, including the properties of the molecules, the environmental condition, and human operations. As a result, the same reaction can exhibit significant yield variations. For example, (Liu, Moroz, and Isayev 2023) pointed out that the yield standardized deviation can be as large as 23.7% when the same reaction was reported by different research groups. Although (Kwon et al. 2022) considered yield prediction uncertainty and introduced an uncertainty-related loss for training the prediction model, the inherent intricacies of data uncertainty hinder a precise prediction. To address the aforementioned challenges, we propose an advanced Uncertainty-Aware Multimodal model (UAM) for yield prediction by taking into account multi-modal features to combat the prediction uncertainty. Specifically, we introduce a multi-modal feature extractor that integrates sequence features, graph structural features, and humandefined reaction condition features to acquire a more comprehensive representation of reactants and products. Moreover, aided by cross-modal contrastive learning, we facilitate modal fusion to capture the shared information and distinctive features across modalities to alleviate discrepancies induced by the high sensitivity of yield. Additionally, we incorporate a Mixture-of-Experts (MoE) module to enhance model expressiveness without additional computational costs. 
This facilitates a dynamic equilibrium between the model’s sensitivity to variations and its ability to discern reaction types. Last, we introduce an uncertainty quantification module, which mitigates the inherent training uncertainty of the model while focusing on quantifying the uncertainty presented in the data itself, thereby enhancing predictive accuracy. Our contributions in this work are summarized as follows: • We study the reaction yield prediction problem and proposed a novel model called UAM to tackle the uncertainty issue by fusing multi-modal molecular features; • We explore an innovative and effective way to utilize cross-modal contrastive learning and an additional MoE module is added to enhance the reaction representation; • Experimental results on three real-world datasets demonstrate the effectiveness of UAM in comparison to the state-of-the-art approaches. Related Work Molecular Representation Learning Molecular representation learning is a crucial link between machine learning and chemistry and is gaining rising awareness in computational chemistry. Early techniques manually compute chemical descriptors like Morgan fingerprints (Pattanaik and Coley 2020; Sandfort et al. 2020) or Density functional theory (DFT) descriptors (Hu et al. 2003) to obtain numerical vector representations of molecules. Lately, deep learning is gaining attention with two main categories: sequence-based and graph-based methods. The first category builds upon the practice that molecules are often represented as SMILES string (Weininger, Weininger, and Weininger 1989). These methods leverage sequence deep neural network models such as Recurrent Neural Network (Segler et al. 2018) and Transformer (Schwaller et al. 2019, 2021) to effectively encode molecular information. The second category, graph-based methods, concentrates on the atom-atom connection patterns within molecules (Guo et al. 2023c). This approach stems from the understanding that a molecule’s activity and properties are often closely linked to its structural information. Although SMILES string captures sequential details, they can lose global context in cases of lengthy SMILES sequences. In contrast, graph-based molecular representation (Hu et al. 2019; Guo et al. 2021; Wang et al. 2021; Li, Zhao, and Zeng 2022) preserves structural information by naturally mapping molecules into graphs with atoms as nodes and bonds as edges. However, molecular representations that rely on a single modality have inherent limitations. Graph-based models may not inherently represent the stereochemistry of molecules, such as the R/S configuration in chiral centers or E/Z configuration in double bonds. SMILES, however, can be extended to include stereochemical information by using or symbols. While human-defined features incorporate abundant domain knowledge, they require complex pre-computation and may not produce the most task-relevant and generalizable molecular features. In this paper, we propose a multi-modal molecular representation encoding followed by a late fusion, so it effectively captures the inherent characteristics of chemical reactions. Reaction Yield Prediction Chemical reaction yield prediction is a crucial application in machine learning for chemical synthesis. The reaction yield is typically a certain percentage of the theoretical chemical conversion. Therefore, in evaluating the reaction yield, the representation learning of both reactants and products plays an important role. Earlier, (Ahneman et al. 
2018) utilizes molecular descriptors with off-the-shelf machine learning models such as Random Forest to predict cross-coupling reactions. However, such methods are limited to specific reaction categories and require expert intuition to select the appropriate chemical fingerprints. Deep learning has enabled the utilization of sequence-based and graph-based models for general reaction yield prediction (Guo et al. 2021, 2023a,b). For instance, YieldBert (Schwaller et al. 2020, 2021) employs transformers to encode reaction SMILES for context-dependent molecular information. Meanwhile, other approaches (Gilmer et al. 2017; Kwon et al. 2022) leverage GNNs to predict yields using graph-based molecular representations. However, due to the inherent limitations of learning representations from single-modal data, these models exhibit suboptimal performance on real-world datasets. They fail to account for the uncertainty arising from factors such as reaction conditions (temperature, time), side reactions, reactant degradation, and other influences. (Kwon et al. 2022) is the work most related to ours in that it considers uncertainty in yield prediction; however, it merely predicts an additional variance for auxiliary training, without an intricate and comprehensive analysis of the uncertainty inherent in chemical reactions. In this paper, we analyze the sources of uncertainty and employ uncertainty quantification techniques to enhance the performance of yield predictions.

Figure 1: The framework of our approach UAM, which consists of three encoders: graph encoder, SMILES encoder, and human-defined feature encoder. The top part shows the contrastive pre-training for combining the representations from the SMILES and graph encoders. The lower part depicts the encoding process for human-defined features, structured as a densely connected layer, followed by the Mixture of Experts (MoE) module, and then another series of dense layers. The late fusion module is designed with either voting fusion, feature concatenation, or self-attention weighted fusion for predicting the yields. The SMILES and graph encoders are initially pre-trained through contrastive learning, and then, along with the dense layers, the MoE, and the fusion modules, they undergo end-to-end fine-tuning.

Methodology
In this section, we first define the multi-modal yield prediction problem and then present the details of our model.

Problem Definition
Let $\mathcal{R} = \{R_1, \dots, R_N\}$ be a set of chemical reactions and $\mathcal{Y} = \{y_1, \dots, y_N\}$ be the reaction yields representing the percentage conversion of reactants into products, where $N$ is the number of reactions. Given a reaction $R_i \in \mathcal{R}$, our model's input comprises the molecular graphs $\{G^i_{r_1}, \dots, G^i_{r_n}, G^i_{p_1}, \dots, G^i_{p_m}\}$, the SMILES sequence $S_i$, and the human-defined features $H_i$ (e.g., molecular fingerprints, reaction conditions), where $r$ denotes reactants, $p$ denotes products, and $n$ and $m$ are their respective quantities. Typically, most reactions involve $n=2$ reactants and $m=1$ or $2$ products. The yield $y_i$ of a reaction is a real value between 0 and 1.
The goal of yield prediction is to develop a mapping function $f_\Theta : \mathcal{R} \to \mathcal{Y}$. This function involves encoding $R_i$ into representation vectors and subsequently associating these vectors with the prediction target $y_i$.

Model Architecture
The architecture of our approach is shown in Figure 1. The model consists of four components: a graph encoder, a SMILES encoder, a human-defined feature encoder, and multi-modal fusion. The SMILES and graph encoders are pre-trained with a contrastive learning strategy. Subsequently, these encoders, in conjunction with the dense layers, the MoE, and the fusion modules, are subjected to end-to-end fine-tuning. The embedding vectors for the reactant-product SMILES sequences are represented as $f_S$, while those for the reactant-product molecular graphs are denoted as $f_G$. The human-defined reaction features, after being processed through a mixture-of-experts feature encoder, are represented as low-dimensional features $f_H$. These features, derived from the three modalities, are then fed into a perceiver for late fusion. Finally, we introduce an uncertainty quantification module to enhance the model's performance. The following sections detail each component of the model.

Graph Encoder. For a reaction $R_i$, the graph encoder encodes the reactants and products separately and concatenates them as the output embedding $f^i_G$:

$$f^i_G = \mathrm{Concat}\left(\mathrm{Enc}(G^i_{r_1}), \dots, \mathrm{Enc}(G^i_{p_m})\right). \qquad (1)$$

As shown in Figure 2, the graph encoder includes a node information propagation module and a graph-level global pooling module. The node information propagation module has two components: feature mapping for nodes and edges, and feature aggregation. Considering the atom heterogeneity and bonding affinity in molecules, we design a high-frequency information capture layer to enrich the features of the nodes. The graph-level pooling part can be a simple permutation-invariant function such as Max or Mean, or a more sophisticated algorithm like GlobalAttention.

Figure 2: Graph Encoder, including atom and bond feature propagation, as well as graph-level pooling.

SMILES Encoder. Similar to YieldBert (Schwaller et al. 2020, 2021), the SMILES encoder is constructed by stacking multiple transformer encoders (Vaswani et al. 2017). It can capture long-range dependencies of elements in reactions and obtain the embedding vector of the reaction SMILES sequence:

$$f^i_S = \mathrm{Enc}(S_i). \qquad (2)$$

For a detailed introduction to the encoders, please refer to the implementation at https://github.com/jychen229/Multimodal-reaction-yield-prediction.
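As a rough illustration of Eqs. (1) and (2), the sketch below wires up two toy encoders: per-molecule graph embeddings are concatenated into $f_G$, and a small transformer encoder pools a tokenized reaction SMILES into $f_S$. This is a simplified stand-in under assumed dimensions, not the released implementation, which additionally uses edge features, the high-frequency capture layer, and attention-based pooling.

```python
import torch
import torch.nn as nn

class TinyGraphEncoder(nn.Module):
    """Mean-field message passing over one molecular graph (sketch only)."""
    def __init__(self, atom_dim: int, hid: int):
        super().__init__()
        self.proj = nn.Linear(atom_dim, hid)
        self.msg = nn.Linear(hid, hid)

    def forward(self, x, adj):
        # x: (num_atoms, atom_dim), adj: (num_atoms, num_atoms) adjacency
        h = torch.relu(self.proj(x))
        h = torch.relu(h + adj @ self.msg(h))   # one propagation step
        return h.mean(dim=0)                    # graph-level mean pooling -> Enc(G)

class TinySmilesEncoder(nn.Module):
    """Stacked transformer encoder over tokenized reaction SMILES (sketch only)."""
    def __init__(self, vocab: int, hid: int):
        super().__init__()
        self.emb = nn.Embedding(vocab, hid)
        layer = nn.TransformerEncoderLayer(hid, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):                  # tokens: (B, L) token ids
        h = self.enc(self.emb(tokens))
        return h.mean(dim=1)                    # pooled f_S

# Eq. (1): concatenate per-molecule embeddings of reactants and products
graph_enc = TinyGraphEncoder(atom_dim=16, hid=64)
mols = [(torch.randn(5, 16), torch.eye(5)) for _ in range(3)]  # 2 reactants + 1 product
f_G = torch.cat([graph_enc(x, a) for x, a in mols])            # shape (3*64,)

# Eq. (2): encode the whole reaction SMILES in one pass
smiles_enc = TinySmilesEncoder(vocab=100, hid=64)
f_S = smiles_enc(torch.randint(0, 100, (1, 40)))[0]            # shape (64,)
print(f_G.shape, f_S.shape)
```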
Multi-Modal Contrastive Learning
To integrate the long-range dependencies identified in SMILES sequences with the spatial and structural information derived from molecular graphs, we employ a multi-modal contrastive learning strategy. Our approach is built on the idea that the encoding vectors derived from SMILES sequences and those from molecular graphs should be similar if they correspond to the same reaction, and distinct if they refer to different reactions. Specifically, we consider $(f^j_S, f^j_G)$ a positive pair, as the two vectors represent the same reaction $R_j$ through the sequence and molecular-graph modalities. Conversely, pairs such as $(f^j_S, f^k_G)$ and $(f^k_S, f^j_G)$, where $k \neq j$, are considered negative pairs, since these SMILES sequences and molecular graphs correspond to different reactions. To ensure that positive pairs have closely aligned encoding vectors and negative pairs have divergent ones, we minimize the following contrastive training loss, with a learnable temperature $\tau \in \mathbb{R}^+$:

$$\mathcal{L}_c = -\frac{1}{2}\log\frac{e^{\langle f^j_G,\, f^j_S\rangle/\tau}}{\sum_{k=1}^{N} e^{\langle f^j_G,\, f^k_S\rangle/\tau}} \;-\; \frac{1}{2}\log\frac{e^{\langle f^j_G,\, f^j_S\rangle/\tau}}{\sum_{k=1}^{N} e^{\langle f^k_G,\, f^j_S\rangle/\tau}},$$

where $\langle\cdot,\cdot\rangle$ ensures dimension flexibility by transforming the multi-modal encoded vectors through a nonlinear projection into fixed-dimensional vectors for contrastive learning (Zhang et al. 2022). In the pre-training stage, the SMILES encoder and graph encoder are trained using this contrastive learning loss on the input dataset. These pre-trained encoders are fine-tuned later together with the other modules.

Mixture-of-Experts Feature Encoder. The human-defined features include Morgan fingerprints, Mordred features, and QM descriptors (Liu, Moroz, and Isayev 2023). Due to the complexity of reactions, these features are often represented as high-dimensional sparse vectors. In order to extract and compress the most relevant information from these high-dimensional inputs, we employ a sparse MoE model, which is designed to uncover the shared subspaces common to subsets of reactions. Each expert can specialize in different aspects found within the high-dimensional data and characterize the common features shared by specific subsets of reactions. The router automates expert assignment for each reaction's feature extraction, and the fact that only a subset of experts is activated per input significantly reduces the computational load. Specifically, for the input features $H$, we first process them through a dense layer and then feed the obtained $x_H$ into the MoE layers. The router, a gate function with trainable weights $G(x_H) = \mathrm{Softmax}(W_g \cdot x_H)$, assigns each input reaction to $t$ out of $k$ experts, $E = \{E_1, \dots, E_k\}$. Each expert $E_i$ is a feed-forward network (FFN). One MoE layer produces the output

$$\mathrm{MoE}(x_H) = \sum_{i=1}^{t} G(x_H)_i \cdot E_i(x_H), \qquad (3)$$

which is a linear combination of the outputs of the $t$ selected FFNs. If required, $\mathrm{MoE}(x_H)$ can be passed through another MoE layer with the same functional design. Following (Shazeer et al. 2017), we introduce an auxiliary loss $\mathcal{L}_a$ to encourage balanced routing across all experts. The output of the MoE is transformed into $f_H$ by further dense layers for integration with $f_G$ and $f_S$.
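The routing of Eq. (3) can be sketched compactly as follows, assuming a top-1-of-6 configuration (as reported later in the implementation details) and omitting the load-balancing loss $\mathcal{L}_a$ for brevity; module names and dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """One sparse MoE layer: route each input to its top-t experts (Eq. 3)."""
    def __init__(self, dim: int, num_experts: int = 6, top_t: int = 1):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts, bias=False)  # router weights W_g
        self.top_t = top_t

    def forward(self, x_h):                                   # x_h: (B, dim)
        scores = F.softmax(self.gate(x_h), dim=-1)            # G(x_H)
        topv, topi = scores.topk(self.top_t, dim=-1)          # keep top-t experts
        out = torch.zeros_like(x_h)
        for slot in range(self.top_t):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e                     # inputs routed to expert e
                if mask.any():
                    out[mask] += topv[mask, slot, None] * expert(x_h[mask])
        return out

moe = SparseMoE(dim=128)
print(moe(torch.randn(4, 128)).shape)   # torch.Size([4, 128])
```

Because only the selected experts run for a given input, the per-example compute stays close to a single FFN even as the number of experts grows.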
Late Fusion and Prediction. The multi-modal reaction representations $f_G$, $f_S$, and $f_H$ can be combined with various strategies, such as voting fusion, feature concatenation, or self-attention weighted fusion, all aimed at effectively predicting the corresponding yield. The final prediction is denoted as $\hat{y}$. We next introduce our prediction loss with uncertainty quantification.

Uncertainty Quantification
Uncertainty is commonly categorized into aleatoric and epistemic uncertainty. In reaction yield prediction, we further attribute it to model uncertainty and data uncertainty. Our model aims to minimize model uncertainty while employing the Bayesian learning framework (Kendall and Gal 2017) to model data uncertainty, which enhances prediction performance and assists users in better evaluating reactions. Molecules in chemical reactions often contain conformers of differing energy levels, which can result in different yields being reported for the same reaction. Therefore, we consider the reaction yield $\hat{y}$ a random variable to account for the data uncertainty. By learning a probability distribution conditioned on the features $x = \{f_G, f_S, f_H\}$, we sample from the distribution to obtain the final yield prediction. Taking the normal distribution as an example, we learn the mean $\mu(x)$ and standard deviation $\sigma(x)$ of the distribution and obtain the final prediction through the reparameterization trick (Kingma and Welling 2013):

$$\hat{y} = \mu(x) + \epsilon \cdot \sigma(x), \qquad (4)$$

where $\epsilon$ is an input-independent variable with $p(\epsilon) \sim \mathcal{N}(0, 1)$. The reparameterization enables the model to consider uncertainty while maintaining differentiability, ensuring end-to-end training. Based on the above uncertainty quantification, the prediction loss function is defined as follows:

$$\mathcal{L}_u = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{\sigma(x_i)^2}\,\|y_i - \mu(x_i)\|^2 + \log \sigma(x_i)^2\right]. \qquad (5)$$

To reduce the model uncertainty, we employ the regularization method proposed in (Wu et al. 2021), where an additional KL-divergence loss $\mathcal{L}_r$ is introduced. During the end-to-end training process, the overall loss function $\mathcal{L}$ is defined by combining the prediction loss with uncertainty quantification $\mathcal{L}_u$, the aforementioned auxiliary loss $\mathcal{L}_a$ for the MoE, and the regularized dropout loss $\mathcal{L}_r$:

$$\mathcal{L} = \alpha\mathcal{L}_u + \beta\mathcal{L}_a + \gamma\mathcal{L}_r, \qquad (6)$$

where $\alpha$, $\beta$, and $\gamma$ are hyper-parameters. More details of the loss functions and the implementation code are available at https://github.com/jychen229/Multimodal-reaction-yield-prediction.
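A minimal sketch of the reparameterized head and the heteroscedastic loss of Eqs. (4)-(5) follows. It predicts $\log\sigma^2$ rather than $\sigma$ for numerical stability, a common implementation choice assumed here rather than a detail taken from the paper.

```python
import torch
import torch.nn as nn

class UncertainYieldHead(nn.Module):
    """Predict mean and (log-)variance of the yield and sample via Eq. (4)."""
    def __init__(self, dim: int):
        super().__init__()
        self.mu = nn.Linear(dim, 1)
        self.log_var = nn.Linear(dim, 1)   # predicting log sigma^2 is more stable

    def forward(self, x):                  # x: fused features, shape (B, dim)
        mu = self.mu(x).squeeze(-1)
        log_var = self.log_var(x).squeeze(-1)
        eps = torch.randn_like(mu)         # eps ~ N(0, 1)
        y_hat = mu + eps * torch.exp(0.5 * log_var)   # Eq. (4) reparameterization
        return y_hat, mu, log_var

def uncertainty_loss(y, mu, log_var):
    """Eq. (5): residuals are down-weighted where predicted variance is large,
    while the log-variance term penalizes claiming uncertainty everywhere."""
    return (torch.exp(-log_var) * (y - mu) ** 2 + log_var).mean()

head = UncertainYieldHead(dim=128)
x = torch.randn(8, 128)                    # stand-in for fused {f_G, f_S, f_H}
y = torch.rand(8)                          # yields in [0, 1]
y_hat, mu, log_var = head(x)
print(uncertainty_loss(y, mu, log_var))
```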
Experiment
Experimental Setup

Datasets. We use three evaluation datasets (see Table 1); two are widely used high-throughput experiment (HTE) datasets, and the third is constructed from patent literature by expert chemists.
• High-throughput (HTE) datasets. We use the Buchwald-Hartwig dataset (Ahneman et al. 2018) and the Suzuki-Miyaura dataset (Perera et al. 2018), which respectively involve high-throughput experiments on Pd-catalyzed Buchwald-Hartwig C-N cross-coupling reactions and Suzuki-Miyaura cross-coupling reactions.
• Amide coupling reaction (ACR) dataset (available at https://github.com/isayevlab/amide_reaction_data). This is a recently released large literature dataset containing 41,239 amide coupling reactions extracted from Reaxys (Reaxys 2020). It is considerably more complex than the two HTE datasets. In addition to the SMILES representations of reactants and products, it furnishes contextual information about the reactions, including time, temperature, reagents, conditions, and solvent, which are important for yield prediction.

Dataset                      No. reactions
Buchwald-Hartwig reaction    3,955
Suzuki-Miyaura reaction      5,760
Amide coupling reaction      41,239

Table 1: The statistics of the experimental datasets.

Baselines. We evaluate the proposed method against three types of baselines: sequence models, graph-based models, and multi-modal models:
• One-hot (Chuang and Keiser 2018) represents a chemical reaction as one-hot vectors of reactants and products, indicating the presence or absence of each component.
• YieldBert (Schwaller et al. 2020, 2021) takes reaction SMILES as input, applies the large-scale sequence model BERT for yield prediction, and is fine-tuned on each dataset from the rxnfp pre-trained model.
• MPNN (Kwon et al. 2022), a graph-based model, represents a reaction as a set of molecular graphs and utilizes graph neural networks for prediction.
• YieldGNN (Saebi et al. 2023) makes predictions by combining molecular graphs with chemical features such as Morgan substructure fingerprints computed by RDKit (Landrum et al. 2019) and canonical MDS using the Tanimoto similarity metric.

Implementation Details. Our model is implemented in PyTorch and optimized with the Adam optimizer and a cosine learning-rate scheduler with warm-up. For the graph-level pooling module, the model utilizes a transformer decoder. The expert assignment in the MoE is configured with t=1 and k=6. For the HTE datasets, we adopt the experimental settings of (Kwon et al. 2022) to ensure a fair comparison. In the experiments on the ACR dataset, the late fusion module is implemented with feature concatenation, and the MoE is structured with two stacked layers. We adopt a train/valid/test split of 6/2/2 and employ early stopping to avoid overfitting. Regarding the baseline models, for YieldBert we use the model with augmented data; for YieldGNN, the human-defined features used as inputs are identical to those employed in our model. To ensure the robustness of the evaluation, we perform 10 random shuffles of each dataset and report both the mean and the standard deviation of the results. All experiments are executed on a single NVIDIA RTX 3090 GPU. Additional details of the model architecture and specific experimental settings can be found at the shared GitHub link.

Results on the ACR Dataset
The performance of UAM and the baselines on the ACR dataset is reported in Table 2, where the best results are highlighted in bold and the second-best baseline scores are underlined. It is observed that UAM achieves the best performance among all baselines. Other observations are as follows. Notably, all models exhibit suboptimal predictive performance on this dataset, with R2 consistently below 0.5. This phenomenon stems from the inherent complexity of the ACR dataset and the presence of numerous incongruous reaction yields. In contrast, our UAM results significantly surpass those of the baseline models in terms of three key metrics: R2, mean absolute error (MAE), and root mean squared error (RMSE). Compared with the best baseline, our approach achieves an improvement of nearly 25% in R2. This underscores the substantial efficacy of our model's enhancements in addressing uncertainty in real-world datasets; it is indeed the uncertainty within the dataset that hinders the accurate predictions of the baselines. Furthermore, UAM not only demonstrates the highest predictive accuracy but also exhibits smaller standard deviations, showcasing the model's stability.

Model      MAE ↓          RMSE ↓         R2 ↑
Mordred    15.99 ± 0.14   21.08 ± 0.16   0.168 ± 0.010
YieldBert  16.52 ± 0.20   21.12 ± 0.13   0.172 ± 0.016
YieldGNN   15.27 ± 0.18   19.82 ± 0.08   0.216 ± 0.013
MPNN       16.31 ± 0.22   20.86 ± 0.27   0.188 ± 0.021
Ours       14.76 ± 0.15   19.33 ± 0.10   0.262 ± 0.009

Table 2: Results on the Amide coupling reaction dataset.
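For reference, the three metrics reported in the result tables can be computed as in this plain NumPy sketch; the authors' exact evaluation script may differ, e.g., in how yields are scaled.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, and R2 as reported in Tables 2-5 (yields in percent)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.abs(y_true - y_pred).mean()
    rmse = np.sqrt(((y_true - y_pred) ** 2).mean())
    ss_res = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

print(regression_metrics([62.0, 15.0, 88.0], [58.5, 22.0, 90.0]))
```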
We can also find that YieldGNN outperforms MPNN on the ACR dataset. This can be attributed to YieldGNN's incorporation of human-defined features, enabling more accurate predictions than MPNN. However, YieldBert and MPNN, which solely utilize sequence or graph structural information, yield less favorable results. Our model not only leverages information from three modalities but also employs enhanced feature extractors, resulting in superior performance on the large-scale real-world dataset.

Results on Two HTE Datasets
The performance of UAM and the baseline models on the two HTE datasets is reported in Tables 3 and 4. The results of the baseline models are taken from (Kwon et al. 2022).

Model      MAE ↓         RMSE ↓        R2 ↑
One-hot    6.08 ± 0.08   9.02 ± 0.16   0.890 ± 0.005
YieldBert  3.09 ± 0.12   4.80 ± 0.26   0.969 ± 0.004
YieldGNN   3.89 ± 0.14   6.01 ± 0.21   0.953 ± 0.003
MPNN       2.92 ± 0.06   4.43 ± 0.09   0.974 ± 0.001
Ours       2.89 ± 0.06   4.36 ± 0.10   0.976 ± 0.001

Table 3: Results on the Buchwald–Hartwig reactions dataset.

Model      MAE ↓         RMSE ↓         R2 ↑
One-hot    8.55 ± 0.08   12.27 ± 0.15   0.809 ± 0.023
YieldBert  6.60 ± 0.27   10.52 ± 0.48   0.859 ± 0.012
YieldGNN   6.96 ± 0.25   11.00 ± 0.37   0.845 ± 0.011
MPNN       6.12 ± 0.22   9.47 ± 0.46    0.886 ± 0.010
Ours       6.04 ± 0.18   9.23 ± 0.40    0.888 ± 0.009

Table 4: Results on the Suzuki–Miyaura reactions dataset.

One can observe that most of the models achieve R2 values exceeding 0.95 or 0.85 on these two datasets. This can be attributed to the relatively homogeneous reaction types within the HTE datasets, rendering the intrinsic features of reactions easier to extract. Building upon this foundation, our model achieves noticeable enhancements, affirming the superiority of our model's encoders. Furthermore, while YieldGNN, MPNN, and our model all incorporate GNN modules, YieldGNN's performance lags slightly behind. This discrepancy arises from the adoption of the encoder-decoder pooling architecture in both our model and MPNN, which inherently outperforms the graph convolution utilized in YieldGNN. Notably, our model's performance improvement on the ACR dataset surpasses that on the HTE datasets by a significant margin. This phenomenon can be attributed to the characteristic of the HTE datasets, which consist of reactions carefully curated by chemists, resulting in a relatively straightforward linkage between yields and reactions. Consequently, nearly all baseline models achieve R2 values above 0.95 or 0.85. In contrast, the ACR dataset represents a large-scale real-world dataset, as mentioned earlier, and the inherent uncertainty within the dataset poses challenges for baseline models to make accurate predictions. The model design of UAM effectively addresses these challenges, leading to substantial performance enhancements.

Figure 3: Label-efficient learning performance on the Buchwald–Hartwig reactions dataset (R2 versus training-set size for Ours, Ours-LP, MPNN, and YieldBert).

Performance of Label Efficient Learning
We conducted further analysis of the model's performance within the context of label-efficient learning. Here, we additionally implement a variant of our model with a linear probe (Ours-LP). In this setting, the parameters of both the graph encoder and the SMILES encoder are held constant, while the human-defined feature encoder is omitted from the configuration; training is exclusively conducted for the regressor component of the model. The results in Figure 3 show that our models demonstrate superior performance compared to the baseline models when trained on a limited number of samples (2.5% and 5% of the original training set). In particular, Ours-LP attains the best performance. This achievement can be attributed to the benefits of contrastive pre-training, which effectively captures the shared and complementary information among different modalities. This underscores the substantial potential of our model in scenarios where limited literature-recorded data are available for specific reaction categories.
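Conceptually, the Ours-LP variant amounts to freezing the pre-trained encoders and optimizing only the regression head, roughly as below; the helper name and stand-in modules are hypothetical.

```python
import torch

def build_linear_probe_optimizer(graph_enc, smiles_enc, regressor, lr=1e-3):
    """Freeze both pre-trained encoders; train only the regressor head."""
    for module in (graph_enc, smiles_enc):
        for p in module.parameters():
            p.requires_grad = False   # exclude encoder weights from the gradient step
        module.eval()                 # disable dropout / batch-norm updates
    return torch.optim.Adam(regressor.parameters(), lr=lr)

# toy usage with stand-in modules
g = torch.nn.Linear(16, 64)
s = torch.nn.Linear(32, 64)
head = torch.nn.Linear(128, 1)
opt = build_linear_probe_optimizer(g, s, head)
print(sum(p.requires_grad for p in g.parameters()))   # 0 -> encoder frozen
```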
Ablation Studies
In this section, we study the influence of different components of our model, including the uncertainty quantification loss $\mathcal{L}_u$, the regularized dropout loss $\mathcal{L}_r$, the features from the three modalities, and the MoE module. We report the main results in Table 5.

Model      MAE ↓          RMSE ↓         R2 ↑
Ours       14.76 ± 0.15   19.33 ± 0.10   0.262 ± 0.009
w/o UQ     15.08 ± 0.13   19.63 ± 0.09   0.249 ± 0.009
w/o Lr     14.80 ± 0.16   19.51 ± 0.10   0.261 ± 0.010
w/o MoE    15.12 ± 0.18   20.03 ± 0.13   0.230 ± 0.012
w/o Seq.   14.97 ± 0.16   19.55 ± 0.11   0.261 ± 0.010
w/o Graph  15.06 ± 0.15   19.59 ± 0.10   0.260 ± 0.009
w/o H.     15.83 ± 0.20   20.46 ± 0.18   0.212 ± 0.016

Table 5: Results of the ablation study on the ACR dataset. UQ represents uncertainty quantification, Lr is the regularized dropout loss, Seq. represents the SMILES sequence, H. denotes the human-defined features, and w/o stands for the ablated model variant without a specific design element.

Impact of the Uncertainty Quantification Loss Lu. To study the impact of the uncertainty quantification loss $\mathcal{L}_u$, we switch the loss function back to the standard L2 loss. The experimental results demonstrate a noticeable decrease in accuracy, which highlights the crucial role of uncertainty assessment in real-world datasets. Meanwhile, there is no significant difference in the standard deviations of the results when changing the loss functions, suggesting that the uncertainty quantification does not adversely affect the robustness of the model.

Impact of the Regularized Dropout Loss Lr. We conduct ablation experiments on the regularized dropout loss $\mathcal{L}_r$ to evaluate its effectiveness in mitigating the model's intrinsic uncertainty. The results without $\mathcal{L}_r$ indicate that the model's training-time uncertainty does indeed impact its performance to a certain extent.

Impact of Mixture-of-Experts. Another key design of UAM is the introduction of Mixture-of-Experts layers. The MoE module allocates reactions to specific experts, enabling each FFN to handle particular reaction types. In the ablation study, we substitute the MoE module with an FFN of equal depth. From Table 5, we observe that the model without MoE exhibits a performance decrease of approximately 10%. This highlights the effectiveness of MoE in extracting and compressing human-defined features compared to a plain FFN. To gain deeper insight into the expert selection process, we visualize the distribution of expert selections in both the first and second MoE layers during the testing phase of the ACR experiments, as shown in Fig. 4. On the left side of the figure, it is evident that in the first layer each expert is assigned a varying number of reactions. In contrast, the distribution of expert selections in the second layer is considerably more balanced than in the first. This allocation in the MoE layers significantly boosts the model's ability to expressively handle high-dimensional yet low-rank molecular descriptors and reaction-condition information for predictive analysis. Moreover, this data allocation partitions the overall dataset uncertainty into sub-modules, leading to heightened prediction stability.

Impact of Multi-Modal Features. We also investigate the importance of multi-modal features for prediction. From the results in Table 5, it can be observed that both the sequence and graph representations have an impact on yield prediction, but the impact is not significant.
In comparison, humandefined features play a vital role in the prediction outcome. Figure 4: The distribution of expert selection in the first (left) and second (right) MoE layer. This phenomenon can be attributed to two reasons: firstly, the human-defined features include molecular descriptors like fingerprints, which cover partial sequence and graph structural information. Secondly, by incorporating the rich reaction context such as temperature, time, reagents, and conditions, these features provide a crucial supplement for yield prediction. Additionally, removing sequence and graph data has a limited impact on model performance, validating the partial redundancy in the information contained within SMILES and graph representations. It is worth mentioning that while the contribution of each modality varies with specific datasets, it is evident that the integration of multi-modal features positively enhances prediction performance. Conclusion and Broader Impact In this paper, we address the uncertainty inherent in predicting yields within real-world chemical reaction datasets. We introduce an uncertainty-aware multi-modal yield prediction model that synthesizes multi-modal molecular representation and incorporates a dedicated uncertainty quantification loss, thereby elevating predictive accuracy. Our experimental results reveal notable performance enhancements relative to existing yield prediction models. While our model has achieved significant improvement over baselines on the ACR dataset, there is still room for further enhancement. A promising direction could be the incorporation of additional modality, particularly those designed to handle 3D graph data (Sch¨utt et al. 2017; Liu et al. 2021, 2022). This integration could potentially increase the model’s performance by providing a more comprehensive understanding of molecular structures. As our model consists of multiple integrated modules, another future work will delve into the relationships between these components with the aim of refining model interpretability. Acknowledgments This work was supported by the National Science Foundation (CHE–2202693) through the NSF Center for Computer Assisted Synthesis (C-CAS, https://ccas.nd.edu/). O.I acknowledges CHE200122 allocation award from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by NSF grants #2138259, #2138286, #2138307, #2137603, and #2138296. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8280 References Ahneman, D. T.; Estrada, J. G.; Lin, S.; Dreher, S. D.; and Doyle, A. G. 2018. Predicting reaction performance in C–N cross-coupling using machine learning. Science, 360(6385): 186–190. Chuang, K. V.; and Keiser, M. J. 2018. Comment on “Predicting reaction performance in C–N cross-coupling using machine learning”. Science, 362. Coley, C. W.; Barzilay, R.; Jaakkola, T. S.; Green, W. H.; and Jensen, K. F. 2017. Prediction of organic reaction outcomes using machine learning. ACS central science, 3(5): 434–443. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural message passing for quantum chemistry. In ICML, 1263–1272. Guo, T.; Guo, K.; Nan, B.; Liang, Z.; Guo, Z.; Chawla, N. V.; Wiest, O.; and Zhang, X. 2023a. What can Large Language Models do in chemistry? 
A comprehensive benchmark on eight tasks. In NeurIPS. Guo, T.; Ma, C.; Chen, X.; Nan, B.; Guo, K.; Pei, S.; Chawla, N. V.; Wiest, O.; and Zhang, X. 2023b. Modeling non-uniform uncertainty in Reaction Prediction via Boosting and Dropout. arXiv preprint arXiv:2310.04674. Guo, Z.; Guo, K.; Nan, B.; Tian, Y.; Iyer, R. G.; Ma, Y.; Wiest, O.; Zhang, X.; Wang, W.; Zhang, C.; and Chawla, N. V. 2023c. Graph-based Molecular Representation Learning. In IJCAI, 6638–6646. Guo, Z.; Zhang, C.; Yu, W.; Herr, J.; Wiest, O.; Jiang, M.; and Chawla, N. V. 2021. Few-shot graph learning for molecular property prediction. In Proceedings of the Web Conference, 2559–2567. Hu, L.; Wang, X.; Wong, L.; and Chen, G. 2003. Combined first-principles calculation and neural-network correction approach for heat of formation. The Journal of Chemical Physics, 119(22): 11501–11507. Hu, W.; Liu, B.; Gomes, J.; Zitnik, M.; Liang, P.; Pande, V.; and Leskovec, J. 2019. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265. Ishida, S.; Terayama, K.; Kojima, R.; Takasu, K.; and Okuno, Y. 2019. Prediction and interpretable visualization of retrosynthetic reactions using graph convolutional networks. Journal of chemical information and modeling, 59(12): 5026–5033. Kendall, A.; and Gal, Y. 2017. What uncertainties do we need in Bayesian deep learning for computer vision? NeurIPS. Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Kwon, Y.; Lee, D.; Choi, Y.-S.; and Kang, S. 2022. Uncertainty-aware prediction of chemical reaction yields with graph neural networks. Journal of Cheminformatics, 14: 2. Landrum, G.; Tosco, P.; Kelley, B.; sriniker; gedeck; NadineSchneider; Vianello, R.; Dalke, A.; Ric; Cole, B.; AlexanderSavelyev; Turk, S.; Swain, M.; Vaucher, A.; N, D.; W´ojcikowski, M.; Pahl, A.; JP; Berenger, F.; strets123; JLVarjo; O’Boyle, N.; Cosgrove, D.; Fuller, P.; Jensen, J. H.; Sforna, G.; DoliathGavid; Leswing, K.; Leung, S.; and van Santen, J. 2019. rdkit/rdkit: 2019 03 4 (Q1 2019) Release. Li, H.; Zhao, D.; and Zeng, J. 2022. KPGT: knowledgeguided pre-training of graph transformer for molecular property prediction. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 857–867. Liu, Y.; Wang, L.; Liu, M.; Zhang, X.; Oztekin, B.; and Ji, S. 2021. Spherical message passing for 3d graph networks. arXiv preprint arXiv:2102.05013. Liu, Z.; Moroz, Y. S.; and Isayev, O. 2023. The challenge of balancing model sensitivity and robustness in predicting yields: a benchmarking study of amide coupling reactions. Chemical Science, 14(39): 10835–10846. Liu, Z.; Zubatiuk, T.; Roitberg, A.; and Isayev, O. 2022. Auto3d: Automatic generation of the low-energy 3d structures with ANI neural network potentials. Journal of Chemical Information and Modeling, 62(22): 5373–5382. Pattanaik, L.; and Coley, C. W. 2020. Molecular Representation: Going Long on Fingerprints. Chem, 6(6): 1204–1207. Perera, D.; Tucker, J. W.; Brahmbhatt, S.; Helal, C. J.; Chong, A.; Farrell, W.; Richardson, P.; and Sach, N. W. 2018. A platform for automated nanomole-scale reaction screening and micromole-scale synthesis in flow. Science, 359(6374): 429–434. Reaxys. 2020. Reaxys Database. Accessed: Feb 10, 2020. Saebi, M.; Nan, B.; Herr, J. E.; Wahlers, J.; Guo, Z.; Zura´nski, A. M.; Kogej, T.; Norrby, P.-O.; Doyle, A. G.; Chawla, N. V.; et al. 2023. On the use of real-world datasets for reaction yield prediction. Chemical Science, 14(19): 4997–5005. 
Sandfort, F.; Strieth-Kalthoff, F.; K¨uhnemund, M.; Beecks, C.; and Glorius, F. 2020. A structure-based platform for predicting chemical reactivity. Chem, 6(6): 1379–1390. Schierle, S.; Helmst¨adter, M.; Schmidt, J.; Hartmann, M.; Horz, M.; Kaiser, A.; Weizel, L.; Heitel, P.; Proschak, A.; Hernandez-Olmos, V.; et al. 2020. Dual farnesoid X receptor/soluble epoxide hydrolase modulators derived from Zafirlukast. ChemMedChem, 15(1): 50–67. Sch¨utt, K. T.; Kindermans, P.-J.; Sauceda, H. E.; Chmiela, S.; Tkatchenko, A.; and M¨uller, K.-R. 2017. SchNet: A Continuous-Filter Convolutional Neural Network for Modeling Quantum Interactions. In NeurIPS, 992–1002. Schwaller, P.; Laino, T.; Gaudin, T.; Bolgar, P.; Hunter, C. A.; Bekas, C.; and Lee, A. A. 2019. Molecular transformer: a model for uncertainty-calibrated chemical reaction prediction. ACS central science, 5(9): 1572–1583. Schwaller, P.; Vaucher, A. C.; Laino, T.; and Reymond, J.L. 2020. Data augmentation strategies to improve reaction yield predictions and estimate uncertainty. Proceedings of NeurIPS 2020 Machine Learning for Molecules Workshop. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8281 Schwaller, P.; Vaucher, A. C.; Laino, T.; and Reymond, J.L. 2021. Prediction of chemical reaction yields using deep learning. Machine learning: science and technology, 2(1): 015016. Segler, M. H.; Kogej, T.; Tyrchan, C.; and Waller, M. P. 2018. Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS central science, 4(1): 120–131. Shazeer, N.; Mirhoseini, A.; Maziarz, K.; Davis, A.; Le, Q.; Hinton, G.; and Dean, J. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention Is All You Need. NeurIPS. Wang, H.; Li, W.; Jin, X.; Cho, K.; Ji, H.; Han, J.; and Burke, M. D. 2021. Chemical-reaction-aware molecule representation learning. arXiv preprint arXiv:2109.09888. Weininger, D.; Weininger, A.; and Weininger, J. L. 1989. SMILES. 2. Algorithm for generation of unique SMILES notation. Journal of chemical information and computer sciences, 29(2): 97–101. Wu, L.; Li, J.; Wang, Y.; Meng, Q.; Qin, T.; Chen, W.; Zhang, M.; Liu, T.-Y.; et al. 2021. R-drop: Regularized dropout for neural networks. NeurIPS, 10890–10905. Zhang, Y.; Jiang, H.; Miura, Y.; Manning, C. D.; and Langlotz, C. P. 2022. Contrastive learning of medical visual representations from paired images and text. In Machine Learning for Healthcare Conference, 2–25. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8282
2024
920
18,763
Sparse Enhanced Network: An Adversarial Generation Method for Robust Augmentation in Sequential Recommendation

Junyang Chen*1, Guoxuan Zou*1, Pan Zhou2, Wu Yirui3, Zhenghan Chen4, Houcheng Su5, Huan Wang†6, Zhiguo Gong5
1College of Computer Science and Software Engineering, Shenzhen University, China
2School of Cyber Science and Engineering, Huazhong University of Science and Technology, China
3College of Computer and Information, Hohai University, China
4Microsoft (China) Co., Ltd, China
5State Key Laboratory of Internet of Things for Smart City, University of Macau, China
6College of Informatics, Huazhong Agricultural University, China
[email protected]

Abstract
Sequential recommendation plays a significant role in daily recommendation systems, such as e-commerce platforms like Amazon and Taobao. However, even with the advent of large models, these platforms often face sparsity issues in the historical browsing records of individual users due to new users joining or the introduction of new products. As a result, existing sequential recommendation algorithms may not perform well. To address this, sequence-based data augmentation methods have garnered attention. Existing sequence enhancement methods typically rely on augmenting existing data, employing techniques like cropping, masking prediction, random reordering, and random replacement of the original sequence. While these methods have shown improvements, they often overlook the exploration of the deep embedding space of the sequence. To tackle these challenges, we propose a Sparse Enhanced Network (SparseEnNet), which is a robust adversarial generation method. SparseEnNet aims to fully explore the hidden space in sequential recommendation, generating more robust enhanced items. Additionally, we adopt an adversarial generation method, allowing the model to differentiate between data augmentation categories and achieve better prediction performance for the next item in the sequence. Experiments demonstrate that our method achieves a remarkable 4-14% improvement over existing methods when evaluated on real-world datasets. (https://github.com/junyachen/SparseEnNet)

Introduction
The aim of sequential recommendation (SR) is to comprehend users' evolving preferences based on their historical behaviors, facilitating precise predictions of their forthcoming item preferences (Chen et al. 2022; Liu et al. 2021; Xie et al. 2022; Hjelm et al. 2018). SR has garnered significant interest for its effectiveness in predicting user interests. For example, FPMC (Rendle, Freudenthaler, and Schmidt-Thieme 2010) synergizes matrix factorization and Markov chain methodologies to enhance recommender systems by deriving personalized transition matrices for users from limited observations, outperforming other models on sequential basket data. Additionally, ICL (Chen et al. 2022) develops users' intent distribution functions from unlabeled behavior sequences and optimizes SR models with contrastive self-supervised learning by incorporating the learned intents.

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Examples of data augmentation techniques: (a) crop, (b) mask, (c) reorder, and (d) the proposed adversarial generation with SparseEnNet. In each subgraph, the top row is the augmented sequence and the bottom row is the original sequence.
Nonetheless, these methods struggle to effectively address next-item prediction tasks in scenarios where user historical sequences are short and sparse. To tackle this challenge, recent studies have introduced innovative approaches. CoSeRec (Liu et al. 2021) explores the use of contrastive self-supervised learning, employing informative augmentation operators to create refined views, effectively addressing concerns like data sparsity and noisy data. In addition, CL4SRec (Xie et al. 2022) introduces three data augmentation techniques (crop/mask/reorder) to project user interaction sequences into diverse perspectives, enhancing the learning of superior user representations. Nevertheless, previous methodologies mainly focus on enhancing the surface layers of the original sequence, overlooking the interconnectedness within the hidden space of the sequence.

To illustrate this issue, we provide an example in Fig. 1, where the top row in each subgraph represents an additional sequence generated using data augmentation methods, while the corresponding original sequence is shown in the bottom row. The figure demonstrates the following scenarios: (a) random selection of a continuous sub-sequence from the original sequence starting at position v2; (b) random masking of several items within a sequence; (c) random shuffling of a sub-sequence. Furthermore, there are alternative approaches, such as substituting or inserting various items into a sequence. Collectively, we term the aforementioned approaches shallow data augmentation.

To comprehensively explore the latent space in sequential recommendation, we integrate the previously mentioned data augmentation techniques and introduce a novel adversarial generation approach named SparseEnNet, as depicted in Fig. 1 (d). In this method, we process items v3, v4, and v5 through a specially designed encoder to produce virtual item embeddings, thus enhancing the original sequence. Nevertheless, we observed that excessive use of data augmentation methods can hamper the predictive accuracy of the original model, especially for longer sequences, which inherently contain rich historical information; such over-augmentation can lead to suboptimal results. Consequently, to enhance the robustness of sequential recommendation, we design four main components in the proposed SparseEnNet. More concretely, we first exploit an augmentation discriminator to differentiate between data augmentation categories and achieve better prediction performance for the next item in the sequence. Then, we adopt a stability discriminator to stabilize the generated item embeddings from the same augmentation operation. After that, we employ a negative sample learning approach to maximize the mutual information between two positive pairs while effectively increasing the distance from negative items. Without loss of generality, we utilize a Transformer (Vaswani et al. 2017) as the sequence encoder to predict the next item. Finally, we use self-training enhanced learning, which encodes all original sequences with the sequence encoder and then aggregates the sequence representations into a set of representations, to capture consistency among similar sequences. Note that all mentioned encoders serve as parts of the generator.
The contributions of our work can be summarized as follows: (1) We propose a robust adversarial generation method, called SparseEnNet, that can fully explore the hidden space in sequential recommendation by generating more robust enhanced items. (2) We design four main components: an augmentation discriminator, a stability discriminator, a negative sample learning module, and a self-training enhanced learning module, to achieve the above purpose. Extensive experiments are conducted on three widely used datasets to demonstrate the superiority of our method over several baselines.

Related Work

Contrastive Learning in SR

Early research in sequential recommendation (SR) often relied on Markov chains (MC) to capture the sequential correlations among items (Zimdars, Chickering, and Meek 2001). As deep learning advanced, various deep learning architectures were incorporated into sequential recommendation tasks. For instance, (Hidasi et al. 2015) integrated the Gated Recurrent Unit (GRU) into sequence recommendation. Moreover, (Kang and McAuley 2018; Ji et al. 2020; Li, Wang, and McAuley 2020) harnessed attention mechanisms to extract contextual information, leading to more promising outcomes.

Following that, contrastive learning (CL), a self-supervised task, has garnered significant attention across various domains, including computer vision (CV) (Chen et al. 2020; He et al. 2020) and natural language processing (NLP) (Fang et al. 2020). More recently, contrastive learning has also found utility in the recommendation field, enhancing the performance of sequential recommendation models. For instance, (Yao et al. 2021) proposed a collaborative filtering-based recommendation method that employs DNN-based contrastive learning to enhance item features. Furthermore, several studies have showcased the potential of using contrastive learning in graph neural networks to enhance recommendation performance (Wu et al. 2021; Xia et al. 2021). In the realm of sequential recommendation, S3-Rec (Zhou et al. 2020) employed contrastive learning for pre-training to improve item representation. In contrast, CL4SRec (Xie et al. 2022) incorporated CL into a multi-task learning framework, synergizing contrastive learning with sequential recommendation tasks for performance improvement. Besides, ICLRec (Chen et al. 2022) introduced intent contrastive self-supervised learning to capture user intents.

The sequential recommendation methods mentioned above typically depend on extensive historical user-item interactions (Chen et al. 2023). Although some earlier approaches (Xie et al. 2022; Chen et al. 2022) have integrated data augmentation techniques, they often lack an in-depth exploration of the underlying embedding space of sequences. As such, they may not effectively generate more resilient item embeddings.

Adversarial Learning in SR

Adversarial learning, originally introduced by Generative Adversarial Nets (GAN) (Goodfellow et al. 2014), enhances model performance by engaging in a minimax game between a generator and a discriminator. This approach has found wide application in domains such as domain adaptation (Ganin et al. 2016) and anomaly detection (Akcay, Atapour-Abarghouei, and Breckon 2019). In the realm of recommendation, adversarial methods have been employed to enhance the performance of base models. For instance, Adversarial Personalized Ranking (APR) (He et al. 2018) employs adversarial training to bolster the robustness of Bayesian Personalized Ranking (BPR).
In sequential recommendation, MFGAN (Ren et al. 2020) employs multiple factor discriminators to assess recommendations generated by the encoder, effectively disentangling various recommendation factors. In comparison to previous models, our approach incorporates adversarial generation into data augmentation for sequential recommendation. This integration enables the model to distinguish between data augmentation categories, ultimately leading to improved prediction performance for subsequent items in the sequence.

Proposed Model

Problem Definition

We define sequential recommendation as follows. Given a user-item interaction sequence $s_u = [v^u_1, \ldots, v^u_t, \ldots, v^u_{|s_u|}]$, where $u \in \mathcal{U}$ represents the user set and $v \in \mathcal{V}$ represents the item set, the sequential recommendation task for a user $u$ is to predict the most likely item interaction at the next step $|s_u| + 1$. This can be formulated as:
$$\arg\max_{v \in \mathcal{V}} P(v^u_{|s_u|+1} = v \mid s_u). \quad (1)$$

Formulated Data Augmentation

Shallow data augmentation: Without sacrificing generality, we adopt three data augmentation methods for sequence augmentation, following the approach outlined in previous work (Xie et al. 2022). The augmentations can be formulated as follows.

Crop (C): A random subsequence of length $L_C$ is selected from the original sequence $s_u$. The subsequence is cropped from position $c$, and this process can be defined by:
$$s^C_u = [v^u_c, v^u_{c+1}, \ldots, v^u_{c+L_C-1}]. \quad (2)$$

Mask (M): Items from the original sequence $s_u$ are randomly chosen for masking. The mask operation can be described as follows:
$$s^M_u = [\hat{v}^u_1, \hat{v}^u_2, \ldots, \hat{v}^u_{|s_u|}], \quad (3)$$
where $\hat{v}^u_t$ denotes a randomly masked item.

Reorder (R): A continuous subsequence of a given length $L_R$ starting from position $r$ is randomly shuffled within the original sequence $s_u$. The resulting reordered sequence can be formulated as:
$$s^R_u = [v^u_1, \ldots, \hat{v}^u_r, \ldots, \hat{v}^u_{r+L_R-1}, \ldots, v^u_{|s_u|}]. \quad (4)$$

Deep data augmentation: We have observed that excessive utilization of data augmentation methods might negatively impact the predictive accuracy of the original model, particularly for longer sequences that inherently hold substantial historical information; such over-augmentation can lead to suboptimal results. To address this, and to bolster the robustness of sequential recommendation, we introduce the following operator, which thoroughly explores the latent space in sequential recommendation and thereby generates more robust enhanced items.

Pooling (P): We use a pooling operator to explore the hidden information of items. The operation is described as follows:
$$s^P_u = [v^u_1, \ldots, v^u_{n-1}, p, v^u_{n+L_P}, \ldots, v^u_{|s_u|}], \quad (5)$$
where $p = \mathrm{mean}(v^u_n, \ldots, v^u_{n+L_P-1})$ is the generated item, $n$ is the starting position of the pooled subsequence, and $L_P$ is the number of items used for pooling.

Sparse Enhanced Network

The complete structure of SparseEnNet is depicted in Fig. 2; the specifics of each component are elucidated below. Although the aforementioned data augmentation methods can alleviate the data sparsity issue in short sequences, the generated items might adversely affect the original sequences if there is a substantial disparity between the distributions of the generated items and the original ones. To reduce the impact of distribution differences on model performance, we design an adversarial generation method, allowing the model to differentiate between data augmentation categories and achieve better prediction performance for the next item in the sequence.
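To make the four operators concrete, the following is a minimal Python sketch of Eqs. (2)-(5). The function names, the ratio-style hyperparameters (eta, gamma, beta), the reserved mask token, and the embedding-level pooling interface are illustrative assumptions rather than the released SparseEnNet code.

```python
import random
import torch

MASK_TOKEN = 0  # assumed item id reserved for masked positions

def crop(seq, eta=0.6):
    """Crop, Eq. (2): keep a random contiguous subsequence of length L_C."""
    lc = max(1, int(len(seq) * eta))
    c = random.randint(0, len(seq) - lc)
    return seq[c:c + lc]

def mask(seq, gamma=0.3):
    """Mask, Eq. (3): randomly replace items with the mask token."""
    idx = set(random.sample(range(len(seq)), int(len(seq) * gamma)))
    return [MASK_TOKEN if i in idx else v for i, v in enumerate(seq)]

def reorder(seq, beta=0.6):
    """Reorder, Eq. (4): shuffle a random contiguous subsequence of length L_R."""
    lr = max(1, int(len(seq) * beta))
    r = random.randint(0, len(seq) - lr)
    sub = seq[r:r + lr]
    random.shuffle(sub)
    return seq[:r] + sub + seq[r + lr:]

def pool(emb, lp=2):
    """Pooling, Eq. (5): replace L_P consecutive item *embeddings* by their
    mean, producing the virtual item p. `emb` is a (seq_len, dim) tensor."""
    n = random.randint(0, emb.size(0) - lp)
    p = emb[n:n + lp].mean(dim=0, keepdim=True)
    return torch.cat([emb[:n], p, emb[n + lp:]], dim=0)
```

Note that crop/mask/reorder can act directly on item ids, whereas pooling must operate in the embedding space, since the generated virtual item p corresponds to no real item id.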
The details are as follows.

Augmentation Discriminator: To solve the above problem, we exploit an augmentation discriminator in a minimax game. To strengthen the encoder's ability to capture latent invariance across different augmentations, the augmentation discriminator's purpose is to identify the source augmentation method of a sequence representation. In this context, a single augmented sequence representation is input into the discriminator to determine which augmentation operation from the augmentation set was employed. Denoted as a non-linear function, the augmentation discriminator is represented by $f_{AD}(\cdot)$. The loss function for this module employs cross entropy and is defined as follows:
$$\mathcal{L}_{AD} = -\sum_{u \in \mathcal{U}} \sum_{a_j \in \mathcal{A}} \log\big(f_{AD}(h^u_{a_j})\big)_j, \quad (6)$$
where $a_j$ is the label in the augmentation operation set $\mathcal{A}$ (as mentioned in the "Formulated Data Augmentation" section), and $h^u_{a_j}$ represents the sequence representation derived from the generator $f_E(\cdot)$.¹ The parameter $\theta_{AD}$ of the discriminator is optimized by:
$$\hat{\theta}_{AD} = \arg\min_{\theta_{AD}} \mathcal{L}_{AD}(\theta_E, \theta_{AD}), \quad (7)$$
where $\theta_E$ denotes the parameters of the encoder. Correspondingly, we optimize the encoder as follows:
$$\hat{\theta}_E = \arg\max_{\theta_E} \mathcal{L}_{AD}(\theta_E, \theta_{AD}). \quad (8)$$
The classification loss of Eq. (6) on the augmentation operations indirectly assesses the encoder's capacity to capture latent invariance across various augmentations. A larger value of the loss $\mathcal{L}_{AD}$ indicates an improved ability of the encoder to extract invariance among different augmentation methods. Conversely, a smaller loss signifies effective discrimination by the discriminator across data augmentation categories, resulting in enhanced predictive performance for the subsequent item in the sequence.

Stability Discriminator: In this part, we aim to stabilize the generated item embeddings from the same augmentation operation. We design the stability discriminator as a non-linear function $f_{SD}(\cdot)$ and define the classification loss of the discriminator by cross entropy as follows:
$$\mathcal{L}_{SD} = -\sum_{u \in \mathcal{U}} \sum_{a_j, a'_j \in \mathcal{A}} \log\big(f_{SD}(h^u_{a_j} \,\|\, h^u_{a'_j})\big), \quad (9)$$
where $a'_j$ denotes the same data augmentation type as $a_j$, and $\|$ denotes concatenation. We repeat the same enhancement operation twice on the identical sequence, concurrently reducing the discriminator loss during optimization. This strategy ensures stable learning for the encoder. Similarly, we update the parameter $\theta_{SD}$ of the discriminator as follows:
$$\hat{\theta}_{SD} = \arg\min_{\theta_{SD}} \mathcal{L}_{SD}(\theta_E, \theta_{SD}). \quad (10)$$

¹In this paper, all mentioned encoders serve as parts of the generator.

[Figure 2: Overall structure of SparseEnNet. Examining the structure from bottom to top, two augmentation operators, $a_i$ and $a_j$, are randomly chosen from the augmentation set $\mathcal{A} = \{C, M, R, P\}$; the encoder then processes both the augmented sequences and the original sequence, generating sequence representations used for calculating the losses.]
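Both discriminator losses reduce to cross-entropy over augmentation categories. The following is a minimal sketch under assumed shapes (a batch of sequence representations of dimension h); the two-layer MLP architecture and the function names are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """A two-layer MLP standing in for f_AD / f_SD: it predicts which
    operator in A = {C, M, R, P} produced the input representation."""
    def __init__(self, in_dim, n_aug=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, in_dim), nn.ReLU(),
                                 nn.Linear(in_dim, n_aug))

    def forward(self, h):
        return self.net(h)  # logits over augmentation categories

def loss_ad(f_ad, h_aug, labels):
    """Eq. (6): cross-entropy of the augmentation discriminator."""
    return F.cross_entropy(f_ad(h_aug), labels)

def loss_sd(f_sd, h_a, h_a2, labels):
    """Eq. (9): the stability discriminator classifies the concatenation
    h_a || h_a2 of two views produced by the *same* operator; build it
    with f_sd = Discriminator(2 * hidden_dim)."""
    return F.cross_entropy(f_sd(torch.cat([h_a, h_a2], dim=-1)), labels)
```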
Negative Sample Learning: Lately, self-supervised contrastive methods (Xie et al. 2022; Liu et al. 2021) have garnered attention in the recommendation field due to their remarkable success in enhancing negative sample learning during batch training. We employ this negative sample learning approach to maximize the mutual information between two positive pairs while effectively increasing the distance from negative items. The InfoNCE loss is utilized as the loss function, as shown below:
$$\mathcal{L}_{NSL} = \sum_{u \in \mathcal{U}} \sum_{a_i, a_j \in \mathcal{A}} -\log \frac{\exp(\mathrm{sim}(h^u_{a_i}, h^u_{a_j})/\tau)}{\sum_{\bar{u} \in \mathrm{neg}} \exp(\mathrm{sim}(h^u_{a_i}, h^{\bar{u}}_{a_j})/\tau)}, \quad (11)$$
where $\mathrm{sim}(\cdot)$ is the dot product, $\tau$ is a scale factor, and $h^u_{a_i}$ and $h^u_{a_j}$ denote the sequence representations of $s^u_{a_i}$ and $s^u_{a_j}$, which have been independently augmented by two distinct augmentation operators $a_i$ and $a_j$. The pair formed by $h^u_{a_i}$ and $h^u_{a_j}$ is considered a positive pair, while $h^{\bar{u}}_{a_j}$ represents an augmented sequence that does not belong to user $u$ but resides within the same training batch; it is treated as a negative.

Next Item Prediction: Without loss of generality, we utilize a Transformer (Vaswani et al. 2017) as the sequence encoder to extract sequential information for predicting the next item. We employ a log-likelihood loss function to optimize the prediction at time step $t$:
$$\mathcal{L}_{NIP}(s_u, t) = -\log\big(\sigma(h^u_t \cdot e_{v_{t+1}})\big) - \sum_{v_j \notin s_u} \log\big(1 - \sigma(h^u_t \cdot e_{v_j})\big), \quad (12)$$
where $h^u_t$ represents the output of the Transformer encoder at position $t$, $e_{v_{t+1}}$ stands for the embedding of the actual next item at time step $t$, $\sigma$ denotes the sigmoid function, and $v_j$ corresponds to a randomly selected negative item not present in sequence $s_u$ and drawn from the batch (Chen et al. 2022).

Self-training Enhanced Learning: As mentioned in the "Augmentation Discriminator" section, we consider two different augmentations of the same sequence as positives and treat augmentations of different sequences as negatives, aiming to bring positives closer together and push negatives farther apart. However, even if two different sequences are somewhat similar, the discriminator might still treat them as negatives and attempt to separate them, which could hinder the feature encoder's ability to capture consistency among similar sequences. Inspired by (Chen et al. 2022), we encode all original sequences using a sequence encoder and then aggregate the sequence representations to form a set of sequence representations. We then apply k-means clustering to obtain pseudo clusters of embedding representations for different categories. Clustering allows us to group feature-similar sequences into the same category; we call this self-training enhanced learning. We utilize the mean of each category as the category feature representation and apply the sequences within a category to the following loss:
$$\mathcal{L}_{SEL} = \sum_{g \in \mathcal{G}} \sum_{u \in \mathcal{U}_g} -\log \frac{\exp(\mathrm{sim}(h^u_{a}, g)/\tau)}{\sum_{\bar{g} \in \mathcal{G}} \exp(\mathrm{sim}(h^u_{a}, \bar{g})/\tau)}, \quad (13)$$
where $\mathcal{U}_g$ is the set of users belonging to cluster $g$, and $g$ is the cluster feature representation.
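Eq. (11) (and, with cluster centroids in place of the second view, Eq. (13)) is the standard InfoNCE objective with in-batch negatives. A minimal sketch, assuming dot-product similarity as stated above:

```python
import torch
import torch.nn.functional as F

def info_nce(h_i, h_j, tau=0.5):
    """Eq. (11) with in-batch negatives: h_i and h_j are (B, d) encodings of
    the same B user sequences under augmentations a_i and a_j; for row u,
    the matching row of h_j is the positive and all other rows are negatives."""
    logits = h_i @ h_j.t() / tau                      # (B, B) dot-product similarities
    labels = torch.arange(h_i.size(0), device=h_i.device)
    return F.cross_entropy(logits, labels)            # positives lie on the diagonal
```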
Unified Training Loss

In essence, our goal is to minimize the recommendation loss for next-item prediction, as indicated in Eq. (12). Additionally, we employ negative sample learning to bolster the recommendation task and self-training enhanced learning to mitigate the issue of false negatives, as expressed through Eq. (11) and Eq. (13). Furthermore, an adversarial mechanism is integrated to enhance the semantic consistency between augmented sequences, realized through Eq. (9) and Eq. (6). Thus, the joint loss function of SparseEnNet is defined as follows:
$$\mathcal{L} = \mathcal{L}_{NIP} + \lambda_1 \mathcal{L}_{NSL} + \lambda_2 \mathcal{L}_{SEL} + \lambda_3 (\mathcal{L}_{SD} + \mathcal{L}_{AD}), \quad (14)$$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ serve as balancing weights to regulate the emphasis on the multiple tasks.

GRL for Adversarial Training: To facilitate end-to-end training of the adversarial components, we incorporate a gradient reversal layer (GRL) (Ganin et al. 2016) between the encoder and the discriminators. The GRL reverses the gradient during backpropagation, so the parameters before the GRL are optimized to increase the loss, while the gradient direction for the parameters after the GRL remains unchanged, optimizing to decrease the loss. The optimization objectives of the parameters before and after the GRL are thus opposed, achieving adversarial learning. The parameter update is
$$\theta = \theta - \mu \frac{\partial \mathcal{L}}{\partial \theta}, \quad (15)$$
where $\theta \in \{\theta_E, \theta_I, \theta_T\}$ represents the parameters within the encoder and discriminators, and $\mu$ denotes the learning rate.
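The GRL admits a compact implementation as a custom autograd function; a minimal PyTorch sketch follows. The scaling factor lam is an assumption (Eq. (15) corresponds to lam = 1).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer (Ganin et al. 2016): identity on the forward
    pass, negated gradient on the backward pass, so parameters before and
    after the layer optimize opposing objectives."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and optionally scale) the gradient; lam gets no gradient
        return -ctx.lam * grad_output, None

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)
```

In a hypothetical training step, the discriminator branches would receive GRL-wrapped representations, e.g. logits = f_ad(grl(h_aug)), so a single backward pass of the joint loss in Eq. (14) implements the min-max game end to end.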
Experiment

In this section, we conduct comprehensive experiments on SparseEnNet to address the following questions. RQ1: Does the proposed method outperform the baseline models? RQ2: How do various augmentation methods impact the model's performance? RQ3: How do different components influence the performance of SparseEnNet? RQ4: Does SparseEnNet mitigate the cold-start problem caused by data sparsity? RQ5: How do various hyper-parameters influence the performance of our model? (These results are included in the supplementary material.)

Datasets: We perform experiments on three publicly available datasets sourced from real-world data. The Beauty and Toys subsets are extracted from the Amazon reviews dataset (McAuley et al. 2015), derived from one of the world's largest e-commerce platforms. Additionally, we utilize the Yelp dataset², a frequently employed resource in recommendation tasks that originates from a business platform. We adhere to the established convention (Kang and McAuley 2018; Xie et al. 2022) for dataset processing: we organize each user's reviews in chronological order and transform them into sequences of user-item interactions suitable for the recommendation models.

Table 1: Dataset statistics
Dataset | #Users | #Items | #Actions | Avg. length | Sparsity
Beauty | 22,363 | 12,101 | 198,502 | 8.9 | 99.93%
Toys | 19,412 | 11,924 | 167,597 | 8.6 | 99.93%
Yelp | 30,431 | 20,033 | 316,354 | 10.4 | 99.95%

Evaluation Metrics: We employ the leave-one-out strategy (Kang and McAuley 2018; Zhou et al. 2020; Xie et al. 2022), a widely employed approach in sequence recommendation, to partition the datasets. In this strategy, for each user-item interaction sequence, we treat the last item as the test data and the item immediately preceding it as the validation data. The remaining items are utilized for training the model. To comprehensively assess all models, we employ the Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG) as evaluation metrics for recommendation performance. HR@top-k quantifies the proportion of the target item in the top-k recommended items, whereas NDCG@top-k also takes into consideration the rank of the target item among the top-k recommendations.

Baseline Methods: The selected baseline methods encompass the following. GRU4Rec (Hidasi et al. 2015) integrates the GRU architecture into session-based recommendation tasks, augmenting model performance through novel loss function design and efficient sampling strategies. SASRec (Kang and McAuley 2018) leverages a Transformer-based one-way attention mechanism to capture sequential patterns and make next-item recommendations. BERT4Rec (Sun et al. 2019) adapts the BERT technique (Devlin et al. 2018), originally successful in NLP, to sequence recommendation; it employs a two-way self-attention mechanism and a mask operation for enhanced performance. CL4SRec (Xie et al. 2022) employs contrastive learning within sequence recommendation, introducing crop, mask, and reorder augmentation techniques for the contrastive learning framework. ICLRec (Chen et al. 2022) performs unsupervised modeling of user intentions and incorporates these intentions into contrastive sequential recommendation tasks.

Implementation Details: We evaluate the above baselines by either using RecStudio³ (a widely used general recommendation library) or adopting the code released with the papers. We follow the common implementation practice and set the maximum sequence length to 50, the embedding dimension of the model to 64, and the batch size to 256. For SparseEnNet, we set the number of attention heads of the encoder to 2 and tune the number of self-attention layers within {1, 2, 3, 4}. We test the hyper-parameters λ1, λ2, λ3 among {0.01, 0.02, 0.05, 0.1, 0.2, 0.5} and set the cluster number K to 256. We apply dropout in our model with the dropout ratio chosen from {0.1, 0.3, 0.5, 0.7}. We use the Adam optimizer (Kingma and Ba 2014) to optimize the trainable model parameters with a learning rate of 0.001, β1 = 0.9, and β2 = 1.

²https://www.yelp.com/dataset
³https://github.com/ustcml/RecStudio

Table 2: Performance comparison of various methods on top-N recommendation. The best score for each metric is indicated in bold and the second-best score is underlined in the original typesetting. The final row of each block displays the improvement over the best baseline on each dataset.
Dataset | Method | Hit@5 | NDCG@5 | Hit@10 | NDCG@10 | Hit@20 | NDCG@20
Beauty | GRU4Rec | 0.0256 | 0.0164 | 0.0426 | 0.0218 | 0.0690 | 0.0285
Beauty | SASRec | 0.0338 | 0.0222 | 0.0532 | 0.0285 | 0.0828 | 0.0359
Beauty | BERT4Rec | 0.0293 | 0.0183 | 0.0477 | 0.0242 | 0.0688 | 0.0295
Beauty | CL4SRec | 0.0427 | 0.0278 | 0.0648 | 0.0349 | 0.0957 | 0.0427
Beauty | ICLRec | 0.0461 | 0.0304 | 0.0728 | 0.0389 | 0.1054 | 0.0471
Beauty | SparseEnNet | 0.0516 | 0.0348 | 0.0762 | 0.0426 | 0.1103 | 0.0512
Beauty | Improv. | 11.93% | 14.47% | 4.67% | 9.51% | 4.65% | 8.70%
Toys | GRU4Rec | 0.0211 | 0.0145 | 0.0337 | 0.0186 | 0.0536 | 0.0236
Toys | SASRec | 0.0399 | 0.0264 | 0.0584 | 0.0324 | 0.0832 | 0.0387
Toys | BERT4Rec | 0.0304 | 0.0199 | 0.0461 | 0.0248 | 0.0689 | 0.0305
Toys | CL4SRec | 0.0541 | 0.0449 | 0.0772 | 0.0374 | 0.1063 | 0.0522
Toys | ICLRec | 0.0579 | 0.0395 | 0.0820 | 0.0472 | 0.1131 | 0.0550
Toys | SparseEnNet | 0.0619 | 0.0423 | 0.0855 | 0.0499 | 0.1162 | 0.0576
Toys | Improv. | 6.91% | 7.09% | 4.27% | 5.72% | 2.74% | 4.73%
Yelp | GRU4Rec | 0.0152 | 0.0091 | 0.0248 | 0.0124 | 0.0371 | 0.0145
Yelp | SASRec | 0.0160 | 0.0101 | 0.0260 | 0.0133 | 0.0443 | 0.0179
Yelp | BERT4Rec | 0.0196 | 0.0121 | 0.0339 | 0.0167 | 0.0564 | 0.0223
Yelp | CL4SRec | 0.0227 | 0.0143 | 0.0384 | 0.0194 | 0.0623 | 0.0254
Yelp | ICLRec | 0.0234 | 0.0145 | 0.0401 | 0.0199 | 0.0645 | 0.0260
Yelp | SparseEnNet | 0.0244 | 0.0154 | 0.0414 | 0.0209 | 0.0678 | 0.0275
Yelp | Improv. | 4.27% | 6.21% | 3.24% | 5.03% | 5.12% | 5.77%

Performance Comparison (RQ1)

Table 2 presents a comprehensive performance comparison of all methods. From the table, we observe the following. (1) The proposed SparseEnNet outperforms all other baselines across all datasets, underscoring the effectiveness of our proposed model.
Notably, SparseEnNet demonstrates remarkable performance improvements, showcasing an 11.93%, 6.91%, and 4.27% enhancement over the second-best model in terms of Hit@5 on the Beauty, Toys, and Yelp datasets, respectively. Moreover, SparseEnNet demonstrates substantial improvements over the second-best performing model across all datasets in terms of NDCG@5, with gains of 14.47%, 7.09%, and 6.21% observed on the Beauty, Toys, and Yelp datasets, respectively. Comparable enhancements are also evident across other evaluation metrics. (2) BERT4Rec consistently outperforms GRU4Rec, affirming the efficacy of the self-attention mechanism in extracting sequence features for sequential recommendation tasks. Furthermore, SASRec exhibits superior performance compared to BERT4Rec. This divergence could potentially be attributed to the fact that the masking operation in sparse datasets leads to a more pronounced loss of contextual information. (3) CL4SRec and ICLRec, tailored for sequential data augmentation, outperform the other baseline models. Notably, ICLRec achieves the second-best performance across all datasets, underscoring the significance and effectiveness of data augmentation techniques in the realm of sequential recommendation.

Different Data Augmentation Analysis (RQ2)

To investigate the impact of different data augmentation techniques on model performance, we present the comparative results in Fig. 3. Generally, the significance of the various augmentation methods ranks as follows: Reorder > Pooling > Mask ≈ Crop. SparseEnNet achieves scores of 0.0516 and 0.0348 on Hit@5 and NDCG@5, respectively. In contrast, the performance of the model without reorder data augmentation is the lowest, with corresponding scores of 0.0471 and 0.0304. Notably, SparseEnNet consistently outperforms these individual variants, underscoring the effectiveness of its holistic approach that considers a range of enhancement techniques.

[Figure 3: Performance of different data augmentation variants (SparseEnNet, w/o Pooling, w/o Crop, w/o Mask, w/o Reorder) in terms of Hit@5 and NDCG@5 on Beauty.]

Ablation Study (RQ3)

We investigated the impact of the various modules on the overall performance of the model. The variants of SparseEnNet encompass the following configurations: (1) w/o D: removing all discriminators of the model and leaving the NSL and SEL parts; (2) w/o SD: removing the stability discriminator; (3) w/o AD: removing the augmentation discriminator; (4) w/o SEL: removing self-training enhanced learning; (5) w/o NSL: removing the negative sample learning module.

Table 3: Ablation study for SparseEnNet
Variant | Beauty Hit@10 | Beauty NDCG@10 | Toys Hit@10 | Toys NDCG@10
SparseEnNet | 0.0762 | 0.0426 | 0.0855 | 0.0499
w/o D | 0.0724 | 0.0406 | 0.0865 | 0.0497
w/o SD | 0.0737 | 0.0407 | 0.0847 | 0.0490
w/o AD | 0.0738 | 0.0415 | 0.0870 | 0.0500
w/o SEL | 0.0713 | 0.0401 | 0.0805 | 0.0451
w/o NSL | 0.0649 | 0.0347 | 0.0775 | 0.0427

Table 3 presents a performance comparison between SparseEnNet and its five variants on the Beauty and Toys datasets. Importantly, it is evident that the NSL component plays a critical role within SparseEnNet, yielding significant improvements in terms of Hit@10 and NDCG@10 scores. Specifically, on the Beauty dataset, removing NSL reduces Hit@10 from 0.0762 to 0.0649 and NDCG@10 from 0.0426 to 0.0347.
Similarly, on the Toys dataset, removing NSL reduces Hit@10 from 0.0855 to 0.0775 and NDCG@10 from 0.0499 to 0.0427. Overall, this table underscores the efficacy of all the designed components within our model.

Cold-start Problem by Data Sparsity (RQ4)

For the cold-start performance assessment, we chose ICLRec, the second-best baseline. Initially, we randomly selected 2000 items that appear in user-item interactions with a count of less than or equal to 10; this was done to simulate cold-start item embeddings. Figure 4a shows that SparseEnNet is able to encode items into a more condensed embedding space than ICLRec. Additionally, we conducted a comparative study by selecting 2000 items that occur in user-item interactions with a count of 30 or more to simulate embeddings for popular items. The findings from Figure 4b indicate that our model performs at a similar level to ICLRec in this scenario. These outcomes underscore the effectiveness of our proposed model in addressing the cold-start issue.

[Figure 4: Cold-start performance on sparse data. (a) Simulated cold-start item embeddings (user-item interactions <= 10) for SparseEnNet and ICLRec. (b) Simulated popular item embeddings (user-item interactions >= 30) for SparseEnNet and ICLRec.]

Conclusion

In this paper, we present SparseEnNet, an innovative and robust adversarial generation method designed to thoroughly explore the latent space in sequential recommendation by generating enhanced items that are more robust. The SparseEnNet framework consists of four essential components: an augmentation discriminator, a stability discriminator, a negative sample learning module, and a self-training enhanced learning module. Through extensive experiments conducted on three well-known datasets, we establish the efficacy of our model. Additionally, our ablation study further validates the contribution of each individual component. Furthermore, a case study focusing on the cold-start problem confirms the ability of SparseEnNet to produce distinct item embeddings in sparse datasets.

Acknowledgments

This work was supported by NSFC (62102265), Shenzhen University Stable Support Program Project (20231120161634002), Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ, GML-KF-22-29), Natural Science Foundation of Guangdong Province (2022A1515011474), GDST (2020B1212030003).

References

Akcay, S.; Atapour-Abarghouei, A.; and Breckon, T. P. 2019. Ganomaly: Semi-supervised anomaly detection via adversarial training. In Computer Vision - ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2-6, 2018, Revised Selected Papers, Part III 14, 622–637. Springer.
Chen, J.; Wang, J.; Dai, Z.; Wu, H.; Wang, M.; Zhang, Q.; and Wang, H. 2023. Zero-shot micro-video classification with neural variational inference in graph prototype network. In Proceedings of the 31st ACM International Conference on Multimedia, 966–974.
Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 1597–1607. PMLR.
Chen, Y.; Liu, Z.; Li, J.; McAuley, J.; and Xiong, C. 2022. Intent contrastive learning for sequential recommendation. In Proceedings of the ACM Web Conference 2022, 2172–2182.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Fang, H.; Wang, S.; Zhou, M.; Ding, J.; and Xie, P. 2020. CERT: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766.
Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1): 2096–2030.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729–9738.
He, X.; He, Z.; Du, X.; and Chua, T.-S. 2018. Adversarial personalized ranking for recommendation. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 355–364.
Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939.
Hjelm, R. D.; Fedorov, A.; Lavoie-Marchildon, S.; Grewal, K.; Bachman, P.; Trischler, A.; and Bengio, Y. 2018. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670.
Ji, W.; Wang, K.; Wang, X.; Chen, T.; and Cristea, A. 2020. Sequential recommender via time-aware attentive memory network. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 565–574.
Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), 197–206. IEEE.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Li, J.; Wang, Y.; and McAuley, J. 2020. Time interval aware self-attention for sequential recommendation. In Proceedings of the 13th International Conference on Web Search and Data Mining, 322–330.
Liu, Z.; Chen, Y.; Li, J.; Yu, P. S.; McAuley, J.; and Xiong, C. 2021. Contrastive self-supervised sequential recommendation with robust augmentation. arXiv preprint arXiv:2108.06479.
McAuley, J.; Targett, C.; Shi, Q.; and Van Den Hengel, A. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, 43–52.
Ren, R.; Liu, Z.; Li, Y.; Zhao, W. X.; Wang, H.; Ding, B.; and Wen, J.-R. 2020. Sequential recommendation with self-attentive multi-adversarial network. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 89–98.
Rendle, S.; Freudenthaler, C.; and Schmidt-Thieme, L. 2010. Factorizing personalized Markov chains for next-basket recommendation. In Proceedings of the 19th International Conference on World Wide Web, 811–820.
Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1441–1450.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; and Xie, X. 2021. Self-supervised graph learning for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 726–735.
Xia, X.; Yin, H.; Yu, J.; Wang, Q.; Cui, L.; and Zhang, X. 2021. Self-supervised hypergraph convolutional networks for session-based recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4503–4511.
Xie, X.; Sun, F.; Liu, Z.; Wu, S.; Gao, J.; Zhang, J.; Ding, B.; and Cui, B. 2022. Contrastive learning for sequential recommendation. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 1259–1273. IEEE.
Yao, T.; Yi, X.; Cheng, D. Z.; Yu, F.; Chen, T.; Menon, A.; Hong, L.; Chi, E. H.; Tjoa, S.; Kang, J.; et al. 2021. Self-supervised learning for large-scale item recommendations. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 4321–4330.
Zhou, K.; Wang, H.; Zhao, W. X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; and Wen, J.-R. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 1893–1902.
Zimdars, A.; Chickering, D. M.; and Meek, C. 2001. Using temporal data for making recommendations. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 580–588.
2024
921
18,764
Signed Graph Neural Ordinary Differential Equation for Modeling Continuous-Time Dynamics

Lanlan Chen1, Kai Wu2*, Jian Lou3, Jing Liu1
1Guangzhou Institute of Technology, Xidian University
2School of Artificial Intelligence, Xidian University
3Zhejiang University
[email protected], [email protected], [email protected], [email protected]

Abstract

Modeling continuous-time dynamics constitutes a foundational challenge, and uncovering inter-component correlations within complex systems holds promise for enhancing the efficacy of dynamic modeling. The prevailing approach of integrating graph neural networks with ordinary differential equations has demonstrated promising performance. However, these methods disregard the crucial signed information potentially present on graphs, impeding their capacity to accurately capture real-world phenomena and leading to subpar outcomes. In response, we introduce a novel approach: a signed graph neural ordinary differential equation that adeptly addresses the limitations of miscapturing signed information. Our proposed solution boasts both flexibility and efficiency. To substantiate its effectiveness, we seamlessly integrate our devised strategies into three preeminent graph-based dynamic modeling frameworks: graph neural ordinary differential equations, graph neural controlled differential equations, and graph recurrent neural networks. Rigorous assessments encompass three intricate dynamic scenarios from physics and biology, as well as scrutiny across four authentic real-world traffic datasets. Remarkably outperforming the trio of baselines, empirical results underscore the substantial performance enhancements facilitated by our proposed approach. Our code can be found at https://github.com/beautyonce/SGODE.

Introduction

Complex systems prevalent in the real world, such as gene regulation (Marbach et al. 2012), social networks (Wasserman, Faust et al. 1994), climate models (Hwang et al. 2021), and traffic systems (Zhao et al. 2019), often find representation as complex networks governed by nonlinear dynamics (Lieberman, Hauert, and Nowak 2005). In contrast to deterministic and easily obtainable graphs (Wu et al. 2020a; Feng et al. 2023), the graph structures of these complex networks are challenging to explicitly articulate. Despite the extensive exploration of nonlinear dynamical systems, a significant number of complex networks continue to evade a clear understanding of their underlying dynamics.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

In recent years, a noteworthy trend has emerged, involving the fusion of ordinary differential equations (ODEs) with neural networks to acquire insights into continuous-time dynamics (Chen et al. 2018; Rubanova, Chen, and Duvenaud 2019; Kidger et al. 2020; Jhin et al. 2021a; Hwang et al. 2021; Huang, Sun, and Wang 2021; Fang et al. 2021; Choi et al. 2022; Jhin et al. 2022). These hybrids, encompassing both ODEs and graph neural networks (GNNs), have demonstrated promising performance across a variety of tasks, including climate modeling (Hwang et al. 2021; Jhin et al. 2021a), traffic flow prediction (Poli et al. 2019; Fang et al. 2021; Choi et al. 2022), node classification (Zang and Wang 2020; Xhonneux, Qu, and Tang 2020), and dynamic interactions (Huang, Sun, and Wang 2021). However, a substantial portion of complex networks retains enigmatic dynamics yet to be fully unraveled.
The majority of existing methodologies predominantly focus on either inferring or utilizing unsigned graphs, wherein only the presence or absence of dependencies between nodes is taken into account, while the types of edges are disregarded. As shown in Table 1, current GNN-ODE methods are unable to capture and use the signed information of dynamics, which makes this problem challenging. It is noteworthy that within unsigned graphs, nodes face challenges in shifting towards the opposing trend (rise or decline) when they and their neighboring nodes align in a similar variation trend (decline or rise). This constraint inherently restricts the capacity to represent dynamic processes effectively. A remedy for this limitation is the introduction of signed connections, which can markedly enhance the situation. Intriguingly, signed graphs find applicability across a multitude of complex systems (Shi, Altafini, and Baras 2019). Notable instances include predator-prey relationships within ecosystems, activation and repression dynamics in gene regulation networks (Karlebach and Shamir 2008), as well as cooperation and antagonism observed within social and economic networks (Derr, Ma, and Tang 2018). To offer a concrete illustration, we examine a three-way intersection within traffic systems, thereby illustrating the presence of signed graphs (refer to Appendix: https://arxiv.org/abs/2312.11198).

In the context of an unknown underlying graph, one approach involves inferring the graph structure, which is inherently tied to the realm of graph structure learning. However, current graph structure learning methods fall short in handling signed information. To address this challenge, various strategies have been explored. One approach entails capturing the edge distribution (Kipf et al. 2018; Franceschi et al. 2019; Shang, Chen, and Bi 2021), followed by sampling the graph's adjacency matrix from this distribution. Yet, complexity arises when assuming a bipartite adjacency matrix and attempting separate learning of its positive and negative edges. This introduces intricate optimization issues due to the inherent uncertainty tied to the sampled graph. Additionally, distinct optimization processes for positive and negative graphs exacerbate uncertainties, impeding the achievement of convergence.

Table 1: ODE-GNN approaches. $A \in \mathbb{R}^{n\times n}$ is the adjacency matrix of the network, and $D \in \mathbb{R}^{n\times n}$ is the corresponding degree matrix. $\phi$ is a softmax activation, and $\sigma$ is a rectified linear unit. $I \in \mathbb{R}^{n\times n}$ is the identity matrix. $E \in \mathbb{R}^{n\times d}$ represents the node embedding matrix, where $d$ is the embedding dimension, much smaller than $n$. $H(t) \in \mathbb{R}^{n\times h}$ represents the state feature in the hidden space at time $t$ with dimension $h$. In this context, $\theta$ refers to the other parameters. While the other methods utilize the normalized adjacency matrix $\tilde{A} \in \mathbb{R}^{n\times n}$ of a homogeneous graph, this work constructs a signed coefficient matrix $K \in \mathbb{R}^{n\times n}$ composed of positive and negative coefficient matrices, which is crucial for capturing different types of information.
Method | Formula | Graph information | Signed?
NDCN (Zang and Wang 2020) | $\frac{dH(t)}{dt} = f(\tilde{A}, H(t), \theta)$ | $\tilde{A} = D^{-\frac{1}{2}}(D - A)D^{-\frac{1}{2}}$ | ✗
STGODE (Fang et al. 2021) | $\frac{dH(t)}{dt} = f(\tilde{A}, H(t), H(0), \theta)$ | $\tilde{A} = \frac{\alpha}{2}(I + D^{-\frac{1}{2}} A D^{-\frac{1}{2}})$ | ✗
STG-NCDE (Choi et al. 2022) | $\frac{dH(t)}{dt} = f(\tilde{A}, H(t), \theta)$ | $\tilde{A} = I + \phi(\sigma(E E^T))$ | ✗
SGODE (this work) | $\frac{dH(t)}{dt} = f(K, H(t), B(t), \theta)$ | $K = \sigma(E_1 E_2^T) - \sigma(E_3 E_4^T)$ | ✓
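For concreteness, the following is a minimal sketch of the signed coefficient matrix K from the last row of Table 1; the class name and initialization scale are assumptions.

```python
import torch
import torch.nn as nn

class SignedCoefficient(nn.Module):
    """Signed coefficient matrix K = sigma(E1 E2^T) - sigma(E3 E4^T) with
    sigma = ReLU: the first term contributes non-negative (positive-edge)
    weights, the second non-positive (negative-edge) weights. Shapes follow
    the paper's notation: E_* in R^{n x d} with d << n."""
    def __init__(self, n_nodes, dim):
        super().__init__()
        self.E = nn.ParameterList(
            [nn.Parameter(torch.randn(n_nodes, dim) * 0.1) for _ in range(4)])

    def forward(self):
        e1, e2, e3, e4 = self.E
        return torch.relu(e1 @ e2.t()) - torch.relu(e3 @ e4.t())
```

Factoring K through low-dimensional embeddings keeps the parameter count at 4nd rather than n^2, which matters for the ODE solvers discussed next.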
Moreover, approximating the edge distribution through neural networks imposes substantial computational and storage demands on a high-precision ODE solver. Alternatively, another line of work directly embeds the graph into a matrix of learnable parameters within a GNN (Wu et al. 2019; Bai et al. 2020; Wu et al. 2020b; Deng and Hooi 2021; Choi et al. 2022; Jin et al. 2023; Jiang et al. 2023), enabling simultaneous optimization alongside the other parameters. While methods employing learnable parameter matrices are simple to optimize, they often neglect signed information and remain susceptible to overfitting. Hence, there is a compelling need for a GNN-ODE method that strikes a balance between ease of optimization, consideration of signed information, preservation of stability, and mitigation of overfitting.

We propose a Signed Graph Neural Ordinary Differential Equation, called SGODE, to capture and use positive and negative information of nodes during continuous dynamics. Our contributions are summarized as follows: 1) The significance of signed information in real-world scenarios is undeniable. In response, we propose a straightforward yet impactful solution that effectively addresses the limitations of current GNN-ODE methodologies by capturing and leveraging this crucial information. 2) SGODE offers a high degree of flexibility and seamlessly integrates into various frameworks for graph-based dynamic modeling, including graph neural ODEs, graph neural controlled differential equations (NCDEs), and graph recurrent neural networks (RNNs). Empirical results across multiple dynamic modeling datasets and traffic flow datasets substantiate the effectiveness of SGODE.

Background

Notations. We denote the training data containing $n$ time series as $X$. $X_i$ represents the features of the $i$-th time series, and $X(t)$ encompasses the features of all nodes at time $t$. The remaining notations are described in Table 1. Specifically, if the relationship between node $i$ and node $j$ is positive, we have $K_{ij} > 0$, while a negative relationship corresponds to $K_{ij} < 0$. An edge is absent when $K_{ij} = 0$. For synthetic dynamics, there are a total of $S$ time steps for training, written as $X_S = \{X_{s_1}, X_{s_2}, \ldots, X_{s_s}\}$, and $P$ steps for forecasting, written as $X_P = \{X_{p_1}, X_{p_2}, \ldots, X_{p_p}\}$. If the time-step intervals in $S$ and $P$ are fixed, the sampling is considered equal-interval sampling; otherwise, it is referred to as irregular sampling. Given $S$, $X_S$, and a set of times $P$, the model needs to predict $X_P$. Using $\theta$ to represent the network parameters other than $K$, the model is denoted as $\hat{X}_P = M(K, \theta, X_S)$. For traffic flow data, we utilize a $T$-step window to forecast the subsequent $\tau$ steps. If $t+1$ is the initial time step of a window, the model can be denoted as $\hat{X}_{t+T+1:t+T+\tau} = M(K, \theta, X_{t+1:t+T})$. Let $L$ denote the loss function between the predicted values and the ground truth. Our goal is then
$$\arg\min_{K,\theta} \sum L\big(M(K, \theta, X_{z_1}), X_{z_2}\big).$$
For synthetic dynamics, we set $z_1 = S$ and $z_2 = P$; for traffic flow prediction, we set $z_1 = t+1:t+T$ and $z_2 = t+T+1:t+T+\tau$ for all training examples partitioned by the window.
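The windowed traffic objective above can be realized with a simple sliding-window construction; a sketch, assuming X is stored as a (time, nodes, features) tensor:

```python
import torch

def make_windows(X, T, tau):
    """Builds (input, target) pairs for the windowed objective above: each
    example maps X_{t+1:t+T} to X_{t+T+1:t+T+tau}.
    X: (time, nodes, features) tensor; returns two stacked tensors of shape
    (num_windows, T, nodes, features) and (num_windows, tau, nodes, features)."""
    inputs, targets = [], []
    for t in range(X.size(0) - T - tau + 1):
        inputs.append(X[t:t + T])          # observed window
        targets.append(X[t + T:t + T + tau])  # steps to forecast
    return torch.stack(inputs), torch.stack(targets)
```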
In the following, we review the relevant models, including NDCN (Zang and Wang 2020), STG-NCDE (Choi et al. 2022), and DCRNN (Li et al. 2018).

NDCN. NDCN (Zang and Wang 2020) can be considered either a continuous-time GNN or a graph neural ODE model. NDCN utilizes an encoding function to transform $X(t)$ into a hidden space, employs a continuous model to regulate the dynamics on the graph within the hidden space, and subsequently decodes the hidden state back into the original space. For irregular and equal-interval sampling tasks, NDCN uses the L1 loss. The ODE layer is
$$\frac{dH(t)}{dt} = \sigma\big(\tilde{A} H(t) W_h + b_h\big), \quad (1)$$
where $W_h \in \mathbb{R}^{n\times h}$ and $b_h \in \mathbb{R}^{h}$ are the parameters of a fully connected layer FC. NDCN adopts the linear diffusion operator $\tilde{A}$ (shown in Table 1).

STG-NCDE. STG-NCDE (Choi et al. 2022) combines the advantages of graph convolutional networks and NCDEs in a unified spatio-temporal NCDE framework, which encompasses two NCDEs: a CDE that generates a continuous path for each node, denoted $f$, and another CDE that processes spatial and temporal information jointly, denoted $g$. Denoting the hidden states of $f$ and $g$ as $H(t)$ and $Z(t)$, respectively, we have
$$\frac{d}{dt}\begin{bmatrix} Z(t) \\ H(t) \end{bmatrix} = \begin{bmatrix} g(Z(t); \theta_g)\, f(H(t); \theta_f)\, \frac{dX(t)}{dt} \\ f(H(t); \theta_f)\, \frac{dX(t)}{dt} \end{bmatrix}. \quad (2)$$
The initial values are determined by fully connected layers: $H(0) = FC_{\dim(X_i)\to\dim(H_i)}(X(t_0))$ and $Z(0) = FC_{\dim(H_i)\to\dim(Z_i)}(H(0))$. Here $\theta_f$ denotes the parameters of the CDE function $f$, and $FC_{\text{input size}\to\text{output size}}$ means a fully connected layer with the given input and output sizes. $f$ is composed of $l$ layers of MLPs, with the activation function in the final layer being $\psi$ (hyperbolic tangent) and the activation functions in the other layers being $\sigma$. $g$ and $f$ share similar structures, the only difference being that $g$ incorporates adaptive graph information in its middle layer. Let $Z_{B_0}(t)$ and $Z_{B_1}(t)$ be the input and output of this layer. The equation is given by
$$Z_{B_1}(t) = \big(I + \phi(\sigma(E E^T))\big)\, Z_{B_0}(t)\, W_s, \quad (3)$$
where $I$ is the $n\times n$ identity matrix, $E$ is a trainable node embedding matrix, and $E^T$ is its transpose. Eq. 3 has two important components, $I + \phi(\sigma(E E^T))$ and $W_s$ (Bai et al. 2020): the former adaptively learns spatial dependencies, and the latter extracts the specific pattern of each node through $W_s = E W_{pool}$, where $E \in \mathbb{R}^{n\times d}$ and $W_{pool} \in \mathbb{R}^{d\times h\times h}$ is the weight pool for $E$.

DCRNN. DCRNN (Li et al. 2018) introduces diffusion graph convolutions to capture spatial dependencies, facilitating the modeling of traffic flow dynamics from a spatio-temporal perspective. Within the DCGRU, a GRU (Chung et al. 2014) serves as the recurrent neural network to model time dependence, with matrix multiplication replaced by diffusion convolution:
$$R(t) = \mathrm{sigmoid}\big(W_R \star_A [X(t) \,\|\, H(t-1)] + b_R\big),$$
$$C(t) = \tanh\big(W_C \star_A [X(t) \,\|\, (R(t) \odot H(t-1))] + b_C\big),$$
$$U(t) = \mathrm{sigmoid}\big(W_U \star_A [X(t) \,\|\, H(t-1)] + b_U\big),$$
$$H(t) = U(t) \odot H(t-1) + (1 - U(t)) \odot C(t), \quad (4)$$
where the diffusion convolution $\star_A$ is defined as
$$W_Q \star_A H(t) = \sum_m \big( w^Q_{m,1} (D_O^{-1} A)^m + w^Q_{m,2} (D_I^{-1} A^T)^m \big) H(t).$$
$D_O$ and $D_I$ are the out-degree and in-degree matrices, $\|$ denotes concatenation along the feature dimension, and $\odot$ denotes the element-wise product. The diffusion step $m$ is a hyperparameter, and $w^Q_{m,1}$, $w^Q_{m,2}$, $b_Q$ for $Q = R, U, C$ are all model parameters. For simplicity, our exposition here follows the conventions of GTS (Shang, Chen, and Bi 2021).
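A minimal sketch of the bidirectional diffusion convolution reviewed above; for brevity, the per-channel weight tensors are collapsed into one scalar weight per diffusion step and direction, which is a simplification of DCRNN's actual parameterization.

```python
import torch

def diffusion_conv(x, A, w_fwd, w_bwd):
    """Bidirectional random-walk diffusion:
    sum_m [ w_m1 (D_O^{-1} A)^m + w_m2 (D_I^{-1} A^T)^m ] x.
    x: (n, h) node features; A: (n, n) adjacency with A[i, j] the edge i -> j;
    w_fwd / w_bwd: length-M sequences of scalar weights."""
    d_o = A.sum(dim=1, keepdim=True).clamp(min=1e-8)      # out-degrees
    d_i = A.t().sum(dim=1, keepdim=True).clamp(min=1e-8)  # in-degrees
    p_fwd, p_bwd = A / d_o, A.t() / d_i                   # random-walk matrices
    out = torch.zeros_like(x)
    xf = xb = x
    for wf, wb in zip(w_fwd, w_bwd):
        xf, xb = p_fwd @ xf, p_bwd @ xb                   # m-step diffusion
        out = out + wf * xf + wb * xb
    return out
```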
Methods

Signed Graph Ordinary Differential Equation. Learning the distribution of edges in a signed graph is a challenging optimization task, particularly when the edge distribution must be learned through a large neural network; when the number of nodes grows to hundreds, high-precision ODE solvers encounter difficulties in computation and storage. Therefore, we opt for directly initializing a corresponding node vector for each node. We consider two forms of the coefficient matrix $K$, namely $K = E_1 E_2^T$ and $K = K_{pos} + K_{neg}$, where $K_{pos} = \sigma(E_1 E_2^T)$, $K_{neg} = -\sigma(E_3 E_4^T)$, and $E_* \in \mathbb{R}^{n\times d}$. The first form learns the positive and negative relations of every node pair. The second form learns the positive and negative relations of only some node pairs, ignoring node pairs with low dependency. We do not use any other constraints to ensure that $K$ learns the connection weights.

Without constraints, the evolution of unknown signed graph ODEs is unstable due to the extensive parameter space and the random initialization of the embedding matrices. Motivated by the integration of initial-value-related terms into the ODE for neighborhood information learning while retaining original features (Xhonneux, Qu, and Tang 2020), we introduce self-trend features $B_i$ associated with the state of the $i$-th node, augmenting the stability of the dynamics learning process. The inclusion of $B(t)$ implies that the instantaneous rate of change of node features is influenced not just by interactions with other nodes, but also by each node's inherent changes. Learning self-evolving features is comparatively straightforward relative to graph-related learning and allows the representation of temporal variations. We posit that $B(t)$ primarily encompasses three factors: a constant term, the initial features, and the features at the current time step. Formally, the interaction of information on signed graphs is defined as follows:
$$\frac{dH(t)}{dt} = f\big(K H(t) W_h + B(t)\big), \quad (5)$$
$$B(t) = \lambda_1 g_1(H(t)) + \lambda_2 g_2(H(0)) + \lambda_3 B_0, \quad (6)$$
where $f$ and $g$ simply refer to functions, and $B(t)$ describes the state-trend information at time $t$. $\lambda_1, \lambda_2, \lambda_3 \in [0, 1]$ indicate whether each term is used. We emphasize the significance of focusing on current time-step features, whether for longer-term predictions or shorter periods within the prediction window (e.g., $\lambda_1 = 1$). Even incorporating basic linear relationships can effectively enhance the performance of the ODE model.

Embedding SGODE into NDCN. We first embed SGODE into NDCN (Zang and Wang 2020) to show its effectiveness for modeling continuous-time dynamics. We replace the intermediate ODE layer (Eq. 1) with our proposed SGODE layer, as described in Eq. 5:
$$\frac{dH(t)}{dt} = \sigma\big(K H(t) W_h + B(t) + b_h\big). \quad (7)$$
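A sketch of this SGODE layer as the right-hand side of an ODE solver; it assumes the torchdiffeq package for integration, uses tanh for the generic activation $\sigma$, and adopts the SGODEv3-style linear self-trend $B(t) = b H(t) W_h$ described in the variant that follows.

```python
import torch
import torch.nn as nn

class SGODEFunc(nn.Module):
    """Right-hand side of Eqs. (5)-(7): dH/dt = sigma(K H(t) W_h + B(t) + b_h),
    with sigma taken as tanh (the paper leaves it generic) and the self-trend
    B(t) = diag(b) H(t) W_h, a per-node linear scaling of the current state."""
    def __init__(self, K, hidden):
        super().__init__()
        self.K = K                                        # signed coefficient matrix (n, n)
        self.W = nn.Linear(hidden, hidden)                # W_h together with the bias b_h
        self.b = nn.Parameter(torch.zeros(K.size(0), 1))  # per-node trend scale b

    def forward(self, t, H):                              # H: (n, hidden)
        HW = self.W(H)
        return torch.tanh(self.K @ HW + self.b * HW)

# Usage with an off-the-shelf solver (assumes the torchdiffeq package):
#   from torchdiffeq import odeint
#   Ht = odeint(SGODEFunc(K, h), H0, t_query)   # shape (len(t_query), n, h)
```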
In order to establish a relevant connection between K and WS1, we adopt a node embedding matrix E1 that controls both positive and negtive relations and get weight pool of each node with WS1. To enhance the sparsity of the graph, we employ an adaptive mask matrix (Jhin et al. 2021b). Formally, K0 = σ(E1ET 2 ) −σ(E1ET 3 ), Ws1 = E1Wpool1, Ws2 = E1Wpool2, (8) and construction of the adaptive mask matrix follows these steps: initially, a learnable matrix is set as M = EM1ET M2. Subsequently, a hard sigmoid function is employed for rigorous classification, and the result is rounded up to yield the final mask matrix, denoted as M. The detailed training approach is outlined below: φ(x) = round(hardsigmoid(αx)), (9) for the forward path, and ∇φ(x) = ∇hardsigmoid(αx), (10) for the backward path, where the temperature α ≥1.0 is a hyperparameter to control the slope of the sigmoid function. Given the presence of Ws1(t) for extracting interaction information KZB0(t). We then define g1(ZB0(t)) = ZB0(t)Ws2 = ZB0(t)E1Wpool2. Finally, we have K = φ(M) ⊙K0, and B(t) = ZB0(t)Ws2 + B0. i.e. we substitute this equation for Eq. 3. Embedding SGODE into DCRNN. SGODE-RNN refers to the DCRNN variant. Our continuous graph diffusion method, WQ ⋆K H(t) = X m wQ tmH(tm), (11) dH(tm) dt = KH(tm)Wh + B(tm), (12) depicts the continuous dynamics of hidden states on a signed graph during state transitions and the extraction of information on continuous dynamics. We establish a two-layer FC network as g2(H(0)) = FCdim(Hi)→dim(Hi)(σ(FCdim(Hi)→dim(Hi)(H(t)))), and g1(H(tm)) = bH(t)Wh. Here, K = σ(E1ET 2 ) − σ(E3ET 4 ). The ODE within the interval [t, t + 1] can be perceived as a normalization of an ODE of arbitrary length. To enhance monitoring of the internal evolution within the ODE solver and mitigate error accumulation, we extract features from a set of m equidistantly sampled states within the range of [0, 1]. If m = 1, we extract only the starting and ending information of ODE. This strategy enables an increased focus on the inherent trend of self-evolution, while also taking into account node interactions and self-evolution features, thereby enhancing the model’s fitting capabilities. Training Loss The training loss function in our model is shown as follows, L = 1 |P| 1 |τ| P p∈P P τ |c X(P(t)) −X(P(t))|, (13) p refers to each training example in the training set P, τ refers to the time step to be predicted in each training example, |P| is the number of training examples, and |τ| is the prediction step number. Experiments Experimental Setup Dataset Three Continuous-time Dynamics. We follow the dataset settings in NDCN (Zang and Wang 2020). We generate three continuous-time dynamics: heat diffusion, mutualistic interaction, and gene regulation dynamics on five graphs, including 1) Grid network, where each node is connected with 8 neighbors; 2) Random network (Erd˝os, R´enyi et al. 1960); 3) Power-law network (Barab´asi and Albert 1999); 4) Small-world network (Watts and Strogatz 1998); 5) Community network (Fortunato 2010). We categorize irregular sampling into two types: interpolated value prediction and extrapolated value prediction. The training, interpolation prediction, and extrapolation prediction are allocated in a ratio of 80/20/20, respectively. The details of three continuous-time dynamics and data generation are shown in Appendix. Traffic Prediction Dataset. Four publicly available realworld traffic datasets are employed: (1) METR-LA and PEMS-BAY (Li et al. 2018). 
Embedding SGODE into DCRNN. SGODE-RNN refers to the DCRNN variant. Our continuous graph diffusion method,

$$W_Q \star_K H(t) = \sum_{m} w_Q^{t_m} H(t_m), \tag{11}$$

$$\frac{dH(t_m)}{dt} = KH(t_m)W_h + B(t_m), \tag{12}$$

depicts the continuous dynamics of hidden states on a signed graph during state transitions, together with the extraction of information about those continuous dynamics. We instantiate $g_2$ as a two-layer fully connected network, $g_2(H(0)) = \mathrm{FC}_{\dim(H_i)\to\dim(H_i)}\big(\sigma(\mathrm{FC}_{\dim(H_i)\to\dim(H_i)}(H(0)))\big)$, and set $g_1(H(t_m)) = bH(t_m)W_h$. Here, $K = \sigma(E_1E_2^T) - \sigma(E_3E_4^T)$. The ODE within the interval $[t, t+1]$ can be perceived as a normalization of an ODE of arbitrary length. To better monitor the internal evolution within the ODE solver and mitigate error accumulation, we extract features from a set of $m$ equidistantly sampled states within the range $[0, 1]$. If $m = 1$, we extract only the starting and ending information of the ODE. This strategy enables an increased focus on the inherent trend of self-evolution while also taking node interactions and self-evolution features into account, thereby enhancing the model's fitting capability.

Training Loss. The training loss of our model is

$$\mathcal{L} = \frac{1}{|P|}\frac{1}{|\tau|}\sum_{p \in P}\sum_{t \in \tau}\big|\hat{X}_p(t) - X_p(t)\big|, \tag{13}$$

where $p$ refers to each training example in the training set $P$, $\tau$ refers to the time steps to be predicted in each training example, $|P|$ is the number of training examples, and $|\tau|$ is the number of prediction steps.
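As a sanity check on Eq. 13, the loss is a plain mean absolute error taken over both training examples and prediction steps. The tensor layout in the sketch below (batch, horizon, nodes, features) is our assumption, not a requirement of the paper.

```python
import torch

def sgode_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 loss averaged over examples and prediction steps, as in Eq. 13."""
    # pred, target: (batch, horizon, n_nodes, feat). Taking the mean over every
    # axis realizes the 1/|P| and 1/|tau| normalizations simultaneously.
    return (pred - target).abs().mean()
```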
Experiments

Experimental Setup

Datasets. Three Continuous-time Dynamics. We follow the dataset settings in NDCN (Zang and Wang 2020). We generate three continuous-time dynamics — heat diffusion, mutualistic interaction, and gene regulation — on five graphs: 1) a Grid network, where each node is connected with 8 neighbors; 2) a Random network (Erdős, Rényi et al. 1960); 3) a Power-law network (Barabási and Albert 1999); 4) a Small-world network (Watts and Strogatz 1998); and 5) a Community network (Fortunato 2010). We categorize irregular sampling into two types: interpolated value prediction and extrapolated value prediction. Training, interpolation prediction, and extrapolation prediction samples are allocated in a ratio of 80/20/20, respectively. Details of the three continuous-time dynamics and of data generation are given in the Appendix.

Traffic Prediction Datasets. Four publicly available real-world traffic datasets are employed. (1) METR-LA and PEMS-BAY (Li et al. 2018): METR-LA contains traffic information from the highways of Los Angeles County and consists of 207 sensors, collecting data over a span of 4 months; PEMS-BAY comprises 325 sensors in the Bay Area, with data collected over a period of 6 months. (2) The PeMS traffic datasets (Chen et al. 2001), previously employed in other works (Fang et al. 2021; Choi et al. 2022), include PeMSD4 and PeMSD8 with 307 and 170 nodes, respectively. The data frequency in all four datasets is uniformly 5 minutes. For METR-LA and PEMS-BAY, we adopt the train/valid/test split of 70%/10%/20% as suggested by GTS (Shang, Chen, and Bi 2021), while for PeMSD4 and PeMSD8 we follow STG-NCDE (Choi et al. 2022) and use a split of 60%/20%/20%.

Baselines. To show the effectiveness of SGODE in modeling continuous dynamics, the following baselines are employed: (1) NDCN (Zang and Wang 2020), a representative ODE-GNN method using real graphs; (2) No-graph, where we remove the graph structure from NDCN and use it directly as a benchmark; (3) Adp-NDCN, where, for a fair comparison, we replace the linear map in NDCN with a learnable matrix $\phi(\sigma(E_1E_2^T))$ (Bai et al. 2020; Choi et al. 2022; Wu et al. 2019, 2020b); and (4) GTS-NDCN, where, for a fair comparison, we replace the linear map in NDCN with the learnable matrix proposed in (Shang, Chen, and Bi 2021). The details of SGODEv1, SGODEv2, and SGODEv3 are given in the Methods section.

For traffic prediction, we compare with widely used time series regression models, including (1) HA: prediction based on the historical average; (2) ARIMA: forecasting based on the statistical characteristics of stationary time series; (3) VAR: Vector Auto-Regression; and (4) SVR: Support Vector Regression, which uses a linear support vector machine for the regression task. The following deep neural network approaches are also included: (5) FNN: a feed-forward neural network; (6) LSTM: a recurrent neural network with fully connected LSTM hidden units (Sutskever, Vinyals, and Le 2014); (7) DCRNN (Li et al. 2018): a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow; (8) LDS (Franceschi et al. 2019): a model that treats graphs as hyperparameters within a two-layer optimization framework and learns parameterized element-wise Bernoulli distributions; (9) GWNet (Wu et al. 2019): one of the most representative deep models for traffic forecasting; (10) MTGNN (Wu et al. 2020b): an extended version of GWNet that extends the adaptive graph learning part; (11) GTSv and GTS (Shang, Chen, and Bi 2021): variants that apply inference graphs to the T-GCN (Zhao et al. 2019) and DCRNN models, respectively; (12) STEP (Shao et al. 2022): an extension of spatial-temporal GNNs enhanced by a scalable time series pre-training model; and (13) MegaCRN (Jiang et al. 2023): a meta-graph learner for spatio-temporal graph learning that explicitly disentangles heterogeneity in space and time. To additionally demonstrate the performance of SGODE embedded in the existing NCDE framework, we compare it with the advanced STGODE (Fang et al. 2021) and STG-NCDE (Choi et al. 2022).

Hyperparameters. We set the learning rate to 0.005 for SGODE-RNN and to 0.001 for STG-SGCDE. We reproduced the results of four methods, i.e., NDCN, GTS, STG-NCDE, and MegaCRN. If the original paper provides hyperparameters (e.g., STG-NCDE), we adhere to the same settings; if not (e.g., GTS, MegaCRN), we conduct experiments within their recommended hyperparameter ranges. Specifically, for GTS, we perform a grid search over the learning rate, the number of clusters, and the regularization weight. For MegaCRN, we set the number of meta-nodes to 20. Detailed parameter settings for SGODE and the baselines are available in the Appendix. All methods are evaluated with three commonly used metrics: (1) Mean Absolute Error (MAE), (2) Mean Absolute Percentage Error (MAPE), and (3) Root Mean Squared Error (RMSE). To maintain consistency with the referenced algorithms, we follow DCRNN (Li et al. 2018) for the metric calculation on METR-LA and PEMS-BAY, and STG-NCDE (Choi et al. 2022) on PeMSD4 and PeMSD8.
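For reference, a minimal sketch of the three metrics follows. Masking near-zero ground-truth entries out of MAPE follows common DCRNN-style evaluation practice and is our assumption here; it also relates to the near-zero-minima effect on PeMSD4 discussed later.

```python
import numpy as np

def evaluate(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-3):
    """Return (MAE, RMSE, MAPE) for arrays of matching shape."""
    mae = np.abs(pred - truth).mean()
    rmse = np.sqrt(((pred - truth) ** 2).mean())
    mask = np.abs(truth) > eps  # drop near-zero targets so MAPE stays finite
    mape = (np.abs(pred[mask] - truth[mask]) / np.abs(truth[mask])).mean()
    return mae, rmse, mape
```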
Results

Results on Three Dynamics. The purpose of the synthetic data experiments is to test different graph strategies (GTS: graph distribution inference; Adaptive: adaptive graph without negative information; NDCN: real graph) on a unified base model. We conduct experiments involving both interpolation and extrapolation within continuous-time dynamics. The outcomes of the interpolation experiment are detailed in Table 2, while those of the extrapolation experiment are available in the Appendix.

| Dynamics | Method | Grid | Rand. | Power | Small | Com. |
|---|---|---|---|---|---|---|
| I | No-graph | 41.1 | 10.1 | 20.7 | 21.2 | 24.2 |
| I | NDCN | 3.3 | 3.4 | 6.0 | 3.6 | 3.7 |
| I | Adp-NDCN | 7.9 | 8.3 | 12.6 | 9.8 | 9.3 |
| I | GTS-NDCN | 9.9 | 5.0 | 7.4 | 8.5 | 10.0 |
| I | SGODEv1 | 4.5 | 1.5 | 3.2 | 4.7 | 2.1 |
| I | SGODEv2 | 2.5 | 2.0 | 1.7 | 1.7 | 1.7 |
| I | SGODEv3 | 3.4 | 2.0 | 2.8 | 2.9 | 3.8 |
| II | No-graph | 32.2 | 10.3 | 31.3 | 18.0 | 14.7 |
| II | NDCN | 8.2 | 6.1 | 6.6 | 4.4 | 9.1 |
| II | Adp-NDCN | 5.9 | 9.6 | 8.4 | 6.4 | 10.2 |
| II | GTS-NDCN | 6.5 | 9.0 | 10.4 | 9.4 | 8.4 |
| II | SGODEv1 | 6.1 | 5.6 | 5.7 | 3.0 | 7.7 |
| II | SGODEv2 | 5.1 | 4.2 | 5.2 | 4.8 | 6.2 |
| II | SGODEv3 | 4.7 | 7.9 | 5.8 | 3.1 | 9.4 |
| III | No-graph | 26.9 | 11.5 | 25.2 | 15.7 | 18.2 |
| III | NDCN | 5.9 | 3.3 | 3.1 | 4.0 | 1.9 |
| III | Adp-NDCN | 4.5 | 1.6 | 3.1 | 2.6 | 2.8 |
| III | GTS-NDCN | 4.9 | 2.7 | 6.1 | 6.2 | 3.4 |
| III | SGODEv1 | 3.3 | 2.3 | 2.0 | 2.8 | 2.6 |
| III | SGODEv2 | 2.4 | 1.5 | 2.8 | 2.3 | 4.6 |
| III | SGODEv3 | 2.5 | 2.5 | 2.0 | 2.4 | 2.7 |

Table 2: MAPE of interpolation of continuous-time network dynamics. I: heat diffusion dynamics; II: mutualistic interaction dynamics; III: gene regulatory dynamics; Rand.: Random; Com.: Community. Bold font indicates the best performance; the underline corresponds to the second-ranked value.

SGODE adeptly captures the evolution of various nodes across time and exhibits strong performance across diverse graph dynamics. Notably, methods employing learnable graphs are at least on par with NDCN, which uses the actual graph, and, importantly, outperform approaches that lack a graph structure. In the interpolation experiment, our method demonstrates superior accuracy compared to other techniques, particularly the graph inference methods Adp-NDCN and GTS-NDCN. Conversely, the NDCN variant lacking a graph exhibits a large loss value, indicating its incapability to learn dynamic models and underscoring the importance of aggregating neighborhood information. In the extrapolation experiments, SGODE exhibits substantial superiority over No-graph, Adaptive, GTS, and NDCN on heat diffusion dynamics. Our method also consistently achieves top-tier outcomes in most extrapolation cases. The utilization of node embedding matrices to approximate the coefficient matrix, as opposed to directly learning the entire matrix, maintains accuracy without compromise. The marginal underperformance of our approach relative to NDCN on certain network dynamics may arise from the fact that these dynamics are generated from ground-truth graphs. By incorporating SGODE, we enhance the precision of the ODE-GNN model in predicting dynamics, especially in the context of interpolation prediction.

Results on Traffic Datasets

Forecasting quality. For prediction accuracy, we compare SGODE-RNN with the aforementioned methods. Following the conventions of DCRNN (Li et al. 2018), GTS (Shang, Chen, and Bi 2021), and STG-NCDE (Choi et al. 2022), we present results for 15 min, 30 min, and 60 min horizons on the METR-LA and PEMS-BAY datasets in Table 3, while average outcomes on PeMSD4 and PeMSD8 are reported in Table 4.

METR-LA:

| Model | MAE (15min) | RMSE (15min) | MAPE (15min) | MAE (30min) | RMSE (30min) | MAPE (30min) | MAE (60min) | RMSE (60min) | MAPE (60min) |
|---|---|---|---|---|---|---|---|---|---|
| HA | 4.16 | 7.80 | 13.0% | 4.16 | 7.80 | 13.0% | 4.16 | 7.80 | 13.0% |
| ARIMA | 3.99 | 8.21 | 9.6% | 5.15 | 10.45 | 12.7% | 6.90 | 13.23 | 17.4% |
| VAR | 4.42 | 7.89 | 10.2% | 5.41 | 9.13 | 12.7% | 6.52 | 10.11 | 15.8% |
| SVR | 3.99 | 8.45 | 9.3% | 5.05 | 10.87 | 12.1% | 6.72 | 13.76 | 16.7% |
| FNN | 3.99 | 7.94 | 9.9% | 4.23 | 8.17 | 12.9% | 4.49 | 8.69 | 14.0% |
| LSTM | 3.44 | 6.30 | 9.6% | 3.77 | 7.23 | 10.9% | 4.37 | 8.69 | 13.2% |
| DCRNN | 2.77 | 5.38 | 7.3% | 3.15 | 6.45 | 8.8% | 3.60 | 7.59 | 10.5% |
| GWNet | 2.69 | 5.15 | 6.9% | 3.07 | 6.22 | 8.4% | 3.53 | 7.37 | 10.0% |
| LDS | 2.75 | 5.35 | 7.1% | 3.14 | 6.45 | 8.6% | 3.63 | 7.67 | 10.3% |
| MTGNN | 2.69 | 5.18 | 6.9% | 3.05 | 6.17 | 8.2% | 3.49 | 7.23 | 9.9% |
| GTSv | 2.74 | 5.09 | 7.3% | 3.11 | 6.02 | 8.7% | 3.53 | 6.84 | 10.3% |
| GTS | 2.64 | 4.95 | 6.8% | 3.01 | 5.85 | 8.2% | 3.41 | 6.74 | 9.9% |
| STEP | 2.61 | 4.98 | 6.6% | 2.96 | 5.97 | 8.0% | 3.37 | 6.99 | 9.6% |
| MegaCRN | 2.60 | 4.81 | 6.4% | 3.00 | 5.74 | 7.8% | 3.41 | 6.74 | 9.4% |
| SGODE-RNN | 2.54 | 4.74 | 6.3% | 2.93 | 5.66 | 7.7% | 3.33 | 6.58 | 9.3% |

PEMS-BAY:

| Model | MAE (15min) | RMSE (15min) | MAPE (15min) | MAE (30min) | RMSE (30min) | MAPE (30min) | MAE (60min) | RMSE (60min) | MAPE (60min) |
|---|---|---|---|---|---|---|---|---|---|
| HA | 2.88 | 5.59 | 6.8% | 2.88 | 5.59 | 6.8% | 2.88 | 5.59 | 6.8% |
| ARIMA | 1.62 | 3.30 | 3.5% | 2.33 | 4.76 | 5.4% | 3.38 | 6.50 | 8.3% |
| VAR | 1.74 | 3.16 | 3.6% | 2.32 | 4.25 | 5.0% | 2.93 | 5.44 | 6.5% |
| SVR | 1.85 | 3.59 | 3.8% | 2.48 | 5.18 | 5.5% | 3.28 | 7.08 | 8.0% |
| FNN | 2.20 | 4.42 | 5.2% | 2.30 | 4.63 | 5.4% | 2.46 | 4.98 | 5.9% |
| LSTM | 2.05 | 4.19 | 4.8% | 2.20 | 4.55 | 5.2% | 2.37 | 4.96 | 5.7% |
| DCRNN | 1.39 | 2.95 | 2.9% | 1.74 | 3.97 | 3.9% | 2.07 | 4.74 | 4.9% |
| GWNet | 1.30 | 2.74 | 2.7% | 1.63 | 3.70 | 3.7% | 1.95 | 4.52 | 4.6% |
| LDS | 1.33 | 2.81 | 2.8% | 1.67 | 3.80 | 3.8% | 1.99 | 4.59 | 4.8% |
| MTGNN | 1.32 | 2.79 | 2.8% | 1.65 | 3.74 | 3.7% | 1.94 | 4.49 | 4.5% |
| GTSv | 1.35 | 2.64 | 2.9% | 1.69 | 3.45 | 3.9% | 1.99 | 4.05 | 4.7% |
| GTS | 1.32 | 2.62 | 2.8% | 1.64 | 3.41 | 3.6% | 1.91 | 3.97 | 4.4% |
| STEP | 1.36 | 2.73 | 2.8% | 1.67 | 3.58 | 3.6% | 1.99 | 4.20 | 4.6% |
| MegaCRN | 1.33 | 2.59 | 2.8% | 1.65 | 3.40 | 3.6% | 1.92 | 4.04 | 4.5% |
| SGODE-RNN | 1.29 | 2.55 | 2.7% | 1.61 | 3.35 | 3.6% | 1.90 | 3.96 | 4.4% |

Table 3: Forecasting error on METR-LA and PEMS-BAY.

We observe the following: 1) Examining the learned $K$, we find that both positive and negative relationships exist; the results demonstrate that our proposed approach is capable of capturing signed information (details in the Appendix). 2) On the METR-LA and PEMS-BAY datasets, SGODE significantly outperforms DCRNN, and it also surpasses state-of-the-art traffic flow prediction methods. This showcases the effectiveness of our strategy in leveraging signed information and the benefits of embedding such information. Since our approach relies on ODEs, we also compare our experimental results with STGODE and STG-NCDE, as shown in Table 4. The results convincingly showcase the efficacy of our method, while our variants further unlock the potential of the STG-NCDE model.

| Model | PeMSD4 MAE | PeMSD4 RMSE | PeMSD4 MAPE | PeMSD8 MAE | PeMSD8 RMSE | PeMSD8 MAPE |
|---|---|---|---|---|---|---|
| HA | 38.03 | 59.24 | 27.88% | 34.86 | 59.24 | 27.88% |
| ARIMA | 33.73 | 48.80 | 24.18% | 31.09 | 44.32 | 22.73% |
| VAR | 24.54 | 38.61 | 17.24% | 19.19 | 29.81 | 13.10% |
| DCRNN | 21.22 | 33.44 | 14.17% | 16.82 | 26.36 | 10.92% |
| GWNet | 24.89 | 39.66 | 17.29% | 18.28 | 30.05 | 12.15% |
| STGODE | 20.84 | 32.82 | 13.77% | 16.81 | 25.97 | 10.62% |
| STG-NCDE | 19.21 | 31.09 | 12.76% | 15.45 | 24.81 | 9.92% |
| GTS | 19.36 | 32.99 | 13.54% | 14.82 | 23.80 | 9.52% |
| MegaCRN | 18.92 | 31.90 | 12.89% | 14.91 | 23.97 | 9.60% |
| SGCDE* | 19.06 | 30.96 | 12.65% | 15.34 | 24.44 | 9.92% |
| SGODE* | 18.81 | 31.57 | 12.87% | 14.55 | 23.85 | 9.30% |

Table 4: Forecasting error on PeMSD4 and PeMSD8. SGCDE*: STG-SGCDE; SGODE*: SGODE-RNN.
Moreover, our proposed SGODE-RNN demonstrates remarkable competitiveness. Notably, the MAPE and RMSE metrics of GTS and SGODE-RNN do not align with the MAE metric on the PeMSD4 dataset, possibly due to the presence of values close to zero in this dataset.

Irregular traffic forecasting. In practice, data recording and storage errors can render specific data unavailable (Choi et al. 2022). We compare STG-SGCDE and STG-NCDE on the PeMSD4 and PeMSD8 datasets with 10%, 30%, and 50% data corruption, and the results are presented in Table 5; a sketch of the corruption protocol is given after the table. Our experimental setup follows STG-NCDE. The results indicate that our approach is more robust to irregular datasets than STG-NCDE, underscoring the accuracy of our method in fitting continuous dynamics.

| Missing rate | Metric | PeMSD4 NCDE* | PeMSD4 SGCDE* | PeMSD8 NCDE* | PeMSD8 SGCDE* |
|---|---|---|---|---|---|
| 10% | MAE | 19.61 | 19.15 | 17.21 | 15.67 |
| 10% | RMSE | 31.55 | 31.05 | 26.96 | 24.95 |
| 10% | MAPE | 13.02% | 12.96% | 10.68% | 10.05% |
| 30% | MAE | 19.37 | 19.41 | 17.42 | 16.16 |
| 30% | RMSE | 31.27 | 31.29 | 27.48 | 25.15 |
| 30% | MAPE | 13.12% | 12.86% | 11.03% | 10.86% |
| 50% | MAE | 20.35 | 19.70 | 17.21 | 15.88 |
| 50% | RMSE | 32.41 | 31.86 | 26.93 | 25.11 |
| 50% | MAPE | 13.50% | 13.07% | 11.02% | 10.33% |

Table 5: Forecasting error on irregular PeMSD4 and PeMSD8. NCDE*: STG-NCDE; SGCDE*: STG-SGCDE.
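The corruption protocol itself is simple to state. Below is a minimal sketch (our own construction, mirroring the STG-NCDE setting only in spirit) that randomly drops a fraction of node readings and marks them as missing before interpolation and training.

```python
import numpy as np

def corrupt(series: np.ndarray, rate: float, seed: int = 0) -> np.ndarray:
    """Drop a fraction `rate` of observations at random; dropped entries -> NaN."""
    # series: (timesteps, n_nodes, feat); whole node readings are masked at once.
    rng = np.random.default_rng(seed)
    out = series.astype(float).copy()
    drop = rng.random(series.shape[:2]) < rate
    out[drop] = np.nan
    return out

# e.g., corrupt(pems04, 0.3) corresponds to the 30% missing-rate setting.
```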
Error for each horizon. We present the error analysis for each prediction horizon in Fig. 1, where we predict a total of 12 horizons. The level of error is evidently highly correlated with the forecast time. Across all horizons, SGODE-RNN outperforms both baseline models GTS and STG-NCDE in terms of lower error rates. While our proposed methods STG-SGCDE and SGODE-RNN exhibit superior performance in the initial horizons, their advantage over STG-NCDE and GTS gradually diminishes as forecasting progresses.

Figure 1: Prediction error at each horizon ((a) MAE on PeMSD4; (b) MAE on PeMSD8). More results on other datasets are in the Appendix.

Ablation Study

We verify the validity of our two key components, the designed $K$ and $B$. We denote SGODE-RNN with negative links removed as Positivev1, and the variant without its trend term $B$ as Without-B. Only FF performs feature extraction solely on the final ODE result, without extracting features from the initial and intermediate states. Positivev2 refers to $K = K_{pos1} + K_{pos2}$. We show the results on the four datasets in Table 6.

| Model | PeMSD4 MAE | PeMSD4 RMSE | PeMSD4 MAPE | PeMSD8 MAE | PeMSD8 RMSE | PeMSD8 MAPE |
|---|---|---|---|---|---|---|
| Only FF | 19.32 | 33.03 | 13.05% | 15.07 | 24.41 | 10.05% |
| Without-B | 19.01 | 31.61 | 14.13% | 15.18 | 24.15 | 9.97% |
| Positivev1 | 19.24 | 31.91 | 13.16% | 14.81 | 23.95 | 9.57% |
| Positivev2 | 19.56 | 32.69 | 13.79% | 15.76 | 24.84 | 11.49% |
| SGODE | 18.81 | 31.57 | 12.87% | 14.55 | 23.85 | 9.30% |

| Model | METR-LA MAE | METR-LA RMSE | METR-LA MAPE | PEMS-BAY MAE | PEMS-BAY RMSE | PEMS-BAY MAPE |
|---|---|---|---|---|---|---|
| Only FF | 2.92 | 5.64 | 7.74% | 1.57 | 3.19 | 3.53% |
| Without-B | 2.95 | 5.71 | 8.05% | 1.58 | 3.23 | 3.51% |
| Positivev1 | 2.89 | 5.60 | 7.81% | 1.57 | 3.19 | 3.54% |
| Positivev2 | 2.93 | 5.69 | 8.13% | 1.57 | 3.22 | 3.58% |
| SGODE | 2.88 | 5.52 | 7.61% | 1.55 | 3.15 | 3.50% |

Table 6: Ablation study of SGODE-RNN.

The experimental results demonstrate a significant performance decrease when our strategies are removed, except on the PEMS-BAY dataset, where the improvement from signed relationships is relatively minor. The comparison with Positivev2 also rules out the possibility that the gains merely come from introducing additional parameters.

Hyperparameters Analysis

We illustrate the impact of the diffusion steps $m$ and the second dimension $d$ of the node embedding matrix $E_*$ in Fig. 2. Notably, diffusion steps greater than 1 yield improved outcomes, suggesting the practicality of extracting intermediate states. Fig. 2b indicates that the model is sensitive to the node embedding dimension.

Figure 2: Sensitivity analysis of m and d on METR-LA ((a) fix dim = 10 and vary m; (b) fix m = 2 and vary dim). More results on other datasets are shown in the Appendix.

Efficiency Study

We assess the efficiency of our approaches through comparisons with state-of-the-art methods. Fig. 3 showcases the efficiency of our methods in terms of parameters and runtime relative to state-of-the-art alternatives. Our method SGODE-RNN achieves the lowest parameter count while attaining the minimum overall MAE. Additionally, our approach demonstrates reasonable runtime performance. Notably, methodologies incorporating ODE solvers result in extended training durations, while the supplementary runtime introduced by SGODE (STG-SGCDE) remains minimal.

Figure 3: Efficiency evaluation on PEMS04 ((a) parametric efficiency; (b) runtime efficiency).

Related Work

Lately, a noteworthy trend has emerged that fuses ODEs with graph neural networks to learn continuous-time dynamics, and a concerted effort has been dedicated to incorporating graph structures into dynamic systems. Noteworthy endeavors extend NODE (Chen et al. 2018) to harness rich graph structure (Hwang et al. 2021; Poli et al. 2019). GDE (Poli et al. 2019) extends the reach of GNNs into the continuous field by leveraging ODEs to learn input-output relationships. CGNN (Xhonneux, Qu, and Tang 2020) introduces a continuous message-passing layer, defining derivatives as combined representations of both the current and initial nodes. LG-ODE (Huang, Sun, and Wang 2020) employs neighborhood information to gather dynamic contextual cues, addressing scenarios where node states may not be observable over time. NDCN (Zang and Wang 2020) ingeniously combines ODEs and GNNs for modeling continuous-time dynamics. STGODE (Fang et al. 2021) captures spatio-temporal dynamics using tensor-based ODEs.
Moreover, a distinct strand of research endeavors to extend ODEs to the realm of dynamic graphs. CG-ODE (Huang, Sun, and Wang 2021) integrates coupled ODEs to model dynamics based on edges and nodes, respectively. Jin et al. (Jin, Li, and Pan 2022) introduce explicit temporal dependencies through ODEs, showcasing their efficacy in graph-based modeling tasks and underlining the significance of graph representation. Nevertheless, the presumption that the exact graph structure is known in advance is often hard to maintain. STG-NCDE (Choi et al. 2022) introduces a dynamic approach with two NCDEs, adeptly handling spatial and temporal information using an adaptive normalized adjacency matrix. In a similar vein, MTGODE (Jin, Li, and Pan 2022) abstracts input sequences into dynamic graphs, where node features evolve over time alongside an unspecified graph structure.

Conclusions

We introduce a simple yet effective framework and extend its application to more intricate scenarios. Our investigation substantiates the inherent capability of the proposed methodology to effectively capture and leverage signed information. Furthermore, we underscore the method's adaptability, showcasing its effortless integration into a diverse range of cutting-edge dynamic modeling methodologies. Through comprehensive experimentation encompassing both synthetic and real-world datasets, we unveil the untapped potential of incorporating signed relations. This integration results in notable performance enhancements, particularly in short-term and interpolation prediction tasks. However, when the graph is large, accurately learning the signed graph becomes challenging and may lead to overfitting to local optima.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62206205, 62206207, in part by the Young Talent Fund of Association for Science and Technology in Shaanxi, China under Grant 20230129, in part by the Guangdong High-level Innovation Research Institution Project under Grant 2021B0909050008, and in part by the Guangzhou Key Research and Development Program under Grant 202206030003.

References

Bai, L.; Yao, L.; Li, C.; Wang, X.; and Wang, C. 2020. Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting. In NeurIPS, volume 33, 17804-17815.
Barabási, A.-L.; and Albert, R. 1999. Emergence of scaling in random networks. Science, 286(5439): 509-512.
Chen, C.; Petty, K.; Skabardonis, A.; Varaiya, P.; and Jia, Z. 2001. Freeway performance measurement system: mining loop detector data. Transportation Research Record, 1748(1): 96-102.
Chen, R. T. Q.; Rubanova, Y.; Bettencourt, J.; and Duvenaud, D. K. 2018. Neural Ordinary Differential Equations. In NeurIPS, volume 31.
Choi, J.; Choi, H.; Hwang, J.; and Park, N. 2022. Graph neural controlled differential equations for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 6367-6374.
Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Deng, A.; and Hooi, B. 2021. Graph neural network-based anomaly detection in multivariate time series. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4027-4035.
Derr, T.; Ma, Y.; and Tang, J. 2018. Signed graph convolutional networks. In 2018 IEEE International Conference on Data Mining (ICDM), 929-934. IEEE.
Erdős, P.; Rényi, A.; et al. 1960. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1): 17-60.
Fang, Z.; Long, Q.; Song, G.; and Xie, K. 2021. Spatial-temporal graph ODE networks for traffic flow forecasting. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 364-373.
Feng, M.; Hou, H.; Zhang, L.; Guo, Y.; Yu, H.; Wang, Y.; and Mian, A. 2023. Exploring Hierarchical Spatial Layout Cues for 3D Point Cloud based Scene Graph Prediction. IEEE Transactions on Multimedia.
Fortunato, S. 2010. Community detection in graphs. Physics Reports, 486(3-5): 75-174.
Franceschi, L.; Niepert, M.; Pontil, M.; and He, X. 2019. Learning discrete structures for graph neural networks. In International Conference on Machine Learning, 1972-1982. PMLR.
Huang, Z.; Sun, Y.; and Wang, W. 2020. Learning continuous system dynamics from irregularly-sampled partial observations. Advances in Neural Information Processing Systems, 33: 16177-16187.
Huang, Z.; Sun, Y.; and Wang, W. 2021. Coupled graph ODE for learning interacting system dynamics. In The 27th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD).
Hwang, J.; Choi, J.; Choi, H.; Lee, K.; Lee, D.; and Park, N. 2021. Climate Modeling with Neural Diffusion Equations. In ICDM, 230-239.
Jhin, S. Y.; Jo, M.; Kong, T.; Jeon, J.; and Park, N. 2021a. ACE-NODE: Attentive co-evolving neural ordinary differential equations. In KDD, 736-745.
Jhin, S. Y.; Lee, J.; Jo, M.; Kook, S.; Jeon, J.; Hyeong, J.; Kim, J.; and Park, N. 2022. EXIT: Extrapolation and interpolation-based neural controlled differential equations for time-series classification and forecasting. In Proceedings of the ACM Web Conference 2022, 3102-3112.
Jhin, S. Y.; Shin, H.; Hong, S.; Jo, M.; Park, S.; Park, N.; Lee, S.; Maeng, H.; and Jeon, S. 2021b. Attentive Neural Controlled Differential Equations for Time-series Classification and Forecasting. In ICDM, 250-259.
Jiang, R.; Wang, Z.; Yong, J.; Jeph, P.; Chen, Q.; Kobayashi, Y.; Song, X.; Fukushima, S.; and Suzumura, T. 2023. Spatio-Temporal Meta-Graph Learning for Traffic Forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7): 8078-8086.
Jin, M.; Li, Y.-F.; and Pan, S. 2022. Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs. In Advances in Neural Information Processing Systems.
Jin, M.; Zheng, Y.; Li, Y.-F.; Chen, S.; Yang, B.; and Pan, S. 2023. Multivariate Time Series Forecasting With Dynamic Graph Neural ODEs. IEEE Transactions on Knowledge and Data Engineering, 35(9): 9168-9180.
Karlebach, G.; and Shamir, R. 2008. Modelling and analysis of gene regulatory networks. Nature Reviews Molecular Cell Biology, 9(10): 770-780.
Kidger, P.; Morrill, J.; Foster, J.; and Lyons, T. 2020. Neural controlled differential equations for irregular time series. Advances in Neural Information Processing Systems, 33: 6696-6707.
Kipf, T.; Fetaya, E.; Wang, K.-C.; Welling, M.; and Zemel, R. 2018. Neural relational inference for interacting systems. In International Conference on Machine Learning, 2688-2697. PMLR.
Li, Y.; Yu, R.; Shahabi, C.; and Liu, Y. 2018. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In International Conference on Learning Representations.
Lieberman, E.; Hauert, C.; and Nowak, M. A. 2005. Evolutionary dynamics on graphs. Nature, 433(7023): 312-316.
Marbach, D.; Costello, J. C.; Küffner, R.; Vega, N. M.; Prill, R. J.; Camacho, D. M.; Allison, K. R.; Kellis, M.; Collins, J. J.; and Stolovitzky, G. 2012. Wisdom of crowds for robust gene network inference. Nature Methods, 9(8): 796-804.
Poli, M.; Massaroli, S.; Park, J.; Yamashita, A.; Asama, H.; and Park, J. 2019. Graph neural ordinary differential equations. arXiv preprint arXiv:1911.07532.
Rubanova, Y.; Chen, R. T. Q.; and Duvenaud, D. K. 2019. Latent Ordinary Differential Equations for Irregularly-Sampled Time Series. In NeurIPS, volume 32.
Shang, C.; Chen, J.; and Bi, J. 2021. Discrete graph structure learning for forecasting multiple time series. arXiv preprint arXiv:2101.06861.
Shao, Z.; Zhang, Z.; Wang, F.; and Xu, Y. 2022. Pre-training enhanced spatial-temporal graph neural network for multivariate time series forecasting. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1567-1577.
Shi, G.; Altafini, C.; and Baras, J. S. 2019. Dynamics over signed networks. SIAM Review, 61(2): 229-257.
Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27.
Wasserman, S.; Faust, K.; et al. 1994. Social Network Analysis: Methods and Applications. Cambridge University Press.
Watts, D. J.; and Strogatz, S. H. 1998. Collective dynamics of 'small-world' networks. Nature, 393(6684): 440-442.
Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020a. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1): 4-24.
Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Chang, X.; and Zhang, C. 2020b. Connecting the dots: Multivariate time series forecasting with graph neural networks. In KDD, 753-763.
Wu, Z.; Pan, S.; Long, G.; Jiang, J.; and Zhang, C. 2019. Graph WaveNet for Deep Spatial-Temporal Graph Modeling. In IJCAI-19, 1907-1913.
Xhonneux, L.-P.; Qu, M.; and Tang, J. 2020. Continuous graph neural networks. In International Conference on Machine Learning, 10432-10441. PMLR.
Zang, C.; and Wang, F. 2020. Neural dynamics on complex networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 892-902.
Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; and Li, H. 2019. T-GCN: A temporal graph convolutional network for traffic prediction. IEEE Transactions on Intelligent Transportation Systems, 21(9): 3848-3858.
Deep Structural Knowledge Exploitation and Synergy for Estimating Node Importance Value on Heterogeneous Information Networks

Yankai Chen1, Yixiang Fang2*, Qiongyan Wang3, Xin Cao4, Irwin King1
1 The Chinese University of Hong Kong, Hong Kong
2 The Chinese University of Hong Kong, Shenzhen
3 University of Copenhagen, Denmark
4 The University of New South Wales, Australia
{ykchen,king}@cse.cuhk.edu.hk, [email protected], [email protected], [email protected]

Abstract

The node importance estimation problem has conventionally been studied with homogeneous network topology analysis. To deal with network heterogeneity, a few recent methods employ graph neural models to automatically learn diverse sources of information. However, the major concern is that their fully adaptive learning process may lead to insufficient information exploration, reducing the problem to isolated node value prediction with underperformance and limited interpretability. In this work, we propose a novel learning framework: SKES. Different from previous automatic learning designs, SKES exploits heterogeneous structural knowledge to enrich the informativeness of node representations. Based on a sufficiently uninformative reference, SKES estimates the importance value for any input node by quantifying its disparity against the reference. This establishes an interpretable node importance computation paradigm. Furthermore, SKES dives deep into the understanding that "nodes with similar characteristics are prone to have similar importance values", while guaranteeing that the informativeness disparity between any two different nodes is consistently reflected by the embedding distance of their associated latent features. Extensive experiments on three widely-evaluated benchmarks demonstrate the performance superiority of SKES over several recent competing methods.

Introduction

Estimating node importance, as one of the classic problems in network science, underpins various downstream applications, such as recommender systems, web information search and retrieval, query disambiguation, and resource allocation optimization (Zhang and Zhu 2019; Park et al. 2020; Zheng et al. 2021; Yang et al. 2022; Zhang et al. 2022a; Hu et al. 2020a, 2022, 2021b; Song, Zhang, and King 2023c; Chen et al. 2021; Fang et al. 2017; He et al. 2023b,a). Traditional approaches revolve around analyses of network topologies, e.g., closeness centrality (Nieminen 1974), degree analysis (Nieminen 1974), and PageRank methodologies (Page et al. 1999; Haveliwala 2003).

*The corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: An HIN example and SKES methodology.
With the proliferation of heterogeneous information in graph data, conventional methods focusing on topology analysis may thus fail to capture the diverse semantic knowledge embedded. Heterogeneous Information Networks (HINs), which are prevalent in various domains including bibliographic information networks, social media, and knowledge graphs, are usually composed of multiple types of nodes and edges. Notably, network heterogeneity leads to variations in the semantics and values used to interpret node importance, which makes the studied problem more challenging on HINs than on homogeneous counterparts. We illustrate this with the HIN example of the DBLP network in Figure 1(a). The importance values of authors, papers, venues, and topics are indicated by their h-index values, citation numbers, venue rank, and popularity (e.g., numbers of Web pages in Google), respectively. Despite the network connection, adjacent nodes may have different influences on the importance value of the target node. For example, authors and topics contribute different facets to the importance values of papers. Moreover, while a scholar's h-index value is often in [0, 350] (https://www.webometrics.info/en/hlargerthan100), the maximum paper citation number can exceed 300,000 (https://www.genscript.com/top-100-most-cited-publications.html). This shows that importance value heterogeneity is essentially influenced by semantic heterogeneity. Consequently, it becomes evident that both variations are inherently difficult to analyze from homogeneous network topologies alone, necessitating new methodologies.

Related Works. Among a few recent attempts, methods with Graph Neural Networks (GNNs) have emerged as a promising direction (Park et al. 2019). Due to their ability to mine knowledge from high-order topologies, GNNs can semantically enrich vectorized node representations, benefiting downstream tasks (Chen et al. 2022a,b, 2023a; Yang et al. 2023a,b; Zhang et al. 2023b; Chen et al. 2020). By incorporating designs specific to HINs, GNN-based methods show potential in dealing with information heterogeneity (Liang et al. 2023; Fu and King 2023), especially for node importance estimation. For instance, GENI (Park et al. 2019) applies a GNN and an attention mechanism to aggregate structural information for node importance estimation. MULTIIMPORT (Park et al. 2020) improves GENI by using a variety of external input signals. RGTN (Huang et al. 2021) utilizes both the network structure and node attributes for estimating node importance. However, all these works focus on the importance-based ranking problem, without inferring specific importance values. The recent work HIVEN (Huang et al. 2022) considers the value heterogeneity of node importance in HINs by learning both local and global node information. Nevertheless, the primary concern is that HIVEN relies purely on GNNs for automatic information aggregation and ignores explicit structural knowledge mining on HINs, making the model underperforming and less interpretable in importance calculation.

Our Contribution. We push forward the investigation of node importance estimation over HINs by introducing a novel learning framework, namely SKES (Deep Structural Knowledge Exploitation and Synergy). SKES assumes that each node corresponds to a unique high-dimensional feature distribution reflecting its essential characteristics and the knowledge that determines the node's importance.
However, such a feature distribution is unknown and agnostic; it can only be sampled and observed through certain empirical feature representations. These empirical representations are expected to be as informative with heterogeneous knowledge as possible, so that the importance of each node within the underlying HIN can be estimated from its associated feature representations. SKES then transforms the importance regression problem into quantifying the informativeness of these empirical node feature representations. Underpinned by Optimal Transport theory (Villani et al. 2009), our formulation eventually provides an effectual and interpretable learning paradigm with theoretical guarantees.

Specifically, SKES consists of three progressively-operated modules. For each node, (1) Structural Priori Knowledge Exploitation mines the intrinsic intra- and inter-node information, i.e., centrality and similarity, from the given HIN, providing comprehensive coverage of structural knowledge with diversity and heterogeneity. Then the (2) Synergetic Representation of Feature Distribution module learns to empirically represent the node's unique, complicated, and high-dimensional feature distribution with adaptive heterogeneous knowledge synergy. Lastly, (3) we manually create a random feature distribution as the reference, which functions as the "coordinate origin" in the embedding space. Due to its randomness, this reference is sufficiently uninformative. Our Node Importance Value Estimation module then quantifies the informativeness of the input node by measuring its distance against the reference in the latent space, and transforms this measurement into a node importance estimate. Furthermore, anchored on this reference, the estimated importance values obey the triangle inequality, so that the informativeness gap between different node pairs can also be captured. This produces a fine-grained importance learning framework, which differs from previous methods that formulate the problem as importance value prediction for isolated nodes. We provide a high-level illustration in Figure 1(b) and summarize our principal contributions as follows:

• To the best of our knowledge, we are the first to formulate the HIN node importance estimation problem via quantifying node feature informativeness with Optimal Transport methodology, providing a novel and interpretable perspective to the related community.
• We propose the SKES model with three effective modules that operate progressively from structural knowledge exploitation and synergy to node importance estimation.
• We evaluate our model on three real-world benchmarks. Experimental results demonstrate the performance superiority of our model against competing methods as well as the effectiveness of each module contained therein.

Preliminaries and Problem Formulation

Definition 1: Heterogeneous Information Network (HIN). An HIN is a directed graph $H = (\mathcal{V}, \mathcal{E})$ with a node type mapping function $\phi: \mathcal{V} \to \mathcal{A}$ and an edge type mapping function $\psi: \mathcal{E} \to \mathcal{R}$, where $\mathcal{A}$ is a set of node types and $\mathcal{R}$ is a set of edge types satisfying $|\mathcal{A}| + |\mathcal{R}| > 2$.

Definition 2: Metapaths. A metapath $P$ has the form $A_1 \xrightarrow{R_1} A_2 \xrightarrow{R_2} \cdots \xrightarrow{R_H} A_{H+1}$, defined on node and edge types, i.e., $A_i \in \mathcal{A}$ and $R_i \in \mathcal{R}$. We omit the edge types if they are unique between two connected node types, e.g., $A_1A_2\cdots A_{H+1}$. We call a path from node $v_1$ to $v_{H+1}$ a path instance of $P$ if, for all $i$, the node $v_i$ and edge $e_i = (v_i, v_{i+1})$ satisfy $\phi(v_i) = A_i$ and $\psi(e_i) = R_i$.

Definition 3: 1-Wasserstein Distance.
While Optimal Transport (OT) is the problem of moving one distribution of mass, e.g., $P$, to another, e.g., $Q$, as efficiently as possible, the 1-Wasserstein distance is the derived minimum distribution distance, defined by the following formulation:

$$W(P, Q) = \inf_{f \in \mathcal{T}(P,Q)} \int \|x - f(x)\|_1 \, dP(x), \tag{1}$$

where the infimum is over $\mathcal{T}(P, Q)$, which denotes all transport plans. If a minimizer $f^*$ exists, it is the solution used to compute $W(P, Q)$. For common one-dimensional distributions, there is a closed-form solution for the optimal plan: $f^*(x) := F_P^{-1}(F_Q(x))$, where $F$ is the cumulative distribution function (CDF) associated with the underlying distribution. The 1-Wasserstein distance satisfies positive-definiteness, symmetry, and the triangle inequality (Nietert et al. 2022; Korotin, Selikhanovych, and Burnaev 2023; Chen et al. 2023c,b; Naderializadeh et al. 2021).

Problem Definition. Given an HIN $H = (\mathcal{V}, \mathcal{E})$ and the importance values for a subset of nodes $\mathcal{V}' \subset \mathcal{V}$ of some given types $\mathcal{A}' \subseteq \mathcal{A}$, we aim to learn a mapping function $g(\cdot): \mathcal{V} \to \mathbb{R}$ that estimates the importance value of every node of the given types $\mathcal{A}'$ in $H$.
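For one-dimensional empirical distributions with equally many samples, the closed-form optimal plan reduces the 1-Wasserstein distance to a mean absolute difference of order statistics, as the following minimal sketch illustrates (the function name is ours; this mirrors $f^*(x) = F_P^{-1}(F_Q(x))$ above).

```python
import numpy as np

def wasserstein_1d(x: np.ndarray, y: np.ndarray) -> float:
    """1-Wasserstein distance between two equal-size 1-D empirical samples."""
    # Sorting realizes the monotone optimal plan between the empirical CDFs.
    assert x.shape == y.shape, "this sketch assumes equally sized samples"
    return float(np.abs(np.sort(x) - np.sort(y)).mean())

# e.g., wasserstein_1d(np.random.randn(128), np.zeros(128)) measures how far a
# feature sample sits from a fixed reference.
```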
StmK014sKA48Trve5Cb3u49USBaF9yqNqRPgUch8RrDSkmtWatkgwGrs+Wg8dlDdjatuWbVqlszoGViF6QKBVqu+TUYRiQJaKgIx1L2bStWToaFYoTaXmQSBpjMsEj2tc0xAGVTjYLP0UnWhkiPxL6hQrN1N8bGQ6kTANPT+ZB5aKXi/95/UT5V07GwjhRNCTzQ37CkYpQ3gQaMkGJ4qkmAimsyIyxgITpfsq6xLsxS8vk06jbp/XL+4a1eZ1UcJjuAYTsGS2jCLbSgDQRSeIZXeDOejBfj3fiYj64YxU4F/sD4/AHr3ZRN</latexit> #(%) Figure 2: The framework of our proposed model (best view in color). SKES Methodology Overview We now formally introduce our SKES model (Deep Structural Knowledge Exploitation and Synergy). To implement the mapping function g(·), the general framework of SKES to estimate the node importance value is: given a random distribution P0 representing the sufficiently uninformative reference, for each node vi ∈V, we first learn to represent the high-dimensional feature distribution Pi for vi, then the mapping function g(·) can be implemented as ∀vi ∈V, g(vi) = g(W(P0, Pi)). As depicted in Figure 2, SKES comprises three progressively-operated modules. Structural Priori Knowledge Exploitation Acquiring informative structural knowledge is critical to estimating node importance. To achieve this in HINs, we adopt the metapath-based methodologies (Sun et al. 2011) to first obtain the underlying sub-networks as follows. 1) Metapath-induced Sub-network Construction. There are limited but meaningful metapaths in HINs that describe the meta information of HINs (Fu et al. 2020; Fu and King 2024). For the k-th metapath, the induced sub-network is denoted as Gj k = (Vj k, Ej k), such that Vj k contains all j-th typed nodes and Ej k contains all the edges between nodes in Vj k, i.e., two nodes are linked in this sub-network if there is a path instance of the metapath between them. 2) Priori Centrality and Similarity Embedding. After obtaining the induced sub-networks, we propose to extract node pointwise centrality and pairwise similarity, which are two essential network properties that reveal the intra- and inter-node priori knowledge. Specifically, centrality measures generally are either defined based on network properties (Nieminen 1974; Hirsch 2005; Egghe et al. 2006; Negre et al. 2018; Page et al. 1999; Dorogovtsev, Goltsev, and Mendes 2006), or designed based on the shortest paths (Shaw 1954; Marchiori and Latora 2000; Sabidussi 1966). To fully capture the diverse information of the underlying structures, we pre-process these popular centrality measures to improve knowledge coverage. Detailed formulations are listed in Appendix. Then each centrality value is vectorized into 128-dimension via a two-layer perceptron. We denote the embedding calculated from the l-th (l ranges from 1 to L) centrality value of node vi by c(l) i,k. While centrality reflects the property of a given node oneself, similarity reveals the contrastive node information compared to others. In this work, SKES embeds the knowledge from the attribute- and topology-aware similarity. Specifically, for each edge in the induced sub-network Gj k, we assign the edge weight by the cosine similarity between the attributes of its two end nodes. Then we transform the similarity matrix as the transition probabilities to compute the node embeddings by adopting node2vec (Grover and Leskovec 2016). To embed topology-aware similarity, we directly follow PathSim (Sun et al. 2011) to calculate the similarity of each node pair in Gj k and then take an analogous embedding procedure via node2vec. 
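To make the attribute-aware similarity step concrete, here is a minimal sketch, under our own assumptions about matrix shapes, of how cosine-similarity edge weights on an induced sub-network could be turned into the transition probabilities fed to node2vec. Clipping negative similarities to zero is our assumption; the paper does not specify how negatives are handled.

```python
import numpy as np

def attribute_transition_matrix(adj, feats, eps=1e-12):
    """Weight each sub-network edge by the cosine similarity of its
    end-node attributes, then row-normalize into transition probabilities.

    adj:   (n, n) 0/1 adjacency of a metapath-induced sub-network
    feats: (n, d) node attribute matrix
    """
    norms = np.linalg.norm(feats, axis=1, keepdims=True) + eps
    unit = feats / norms
    cos = unit @ unit.T                      # pairwise cosine similarity
    weights = np.clip(cos, 0.0, None) * adj  # keep existing edges only
    row_sum = weights.sum(axis=1, keepdims=True) + eps
    return weights / row_sum                 # biased-walk transition probs

# toy usage: 4 nodes on a small ring with random attributes
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 8))
print(attribute_transition_matrix(adj, feats).round(3))
```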
Synergetic Representation of Feature Distribution

1) Heterogeneous Knowledge Aggregation. Since the priori centrality and similarity knowledge represents node information from different perspectives, we propose to synergetically fuse them for later representing the unknown high-dimensional feature distribution. Concretely, for each node v_i, let α_{i,k}^{(l)} denote the weighting coefficient of the l-th centrality embedding and α_{i,k}^{(L+1)} the coefficient of v_i's similarity embedding. We first derive the following coefficient calculation, with l ranging from 1 to L+1:

\alpha_{i,k}^{(l)} = \frac{1}{|\mathcal{G}_k^{\phi(v_i)}|} \sum_{v_{i'} \in \mathcal{G}_k^{\phi(v_i)}} W_1 \tanh(W'_1 c_{i',k}^{(l)} + b_1),   (2)

where G_k^{φ(v_i)} denotes the induced sub-network, and W_1, W′_1, and b_1 are learnable parameters. These coefficients are further normalized with the softmax function, i.e., \hat{\alpha}_{i,k}^{(l)} = \exp(\alpha_{i,k}^{(l)}) / \sum_{l'=1}^{L+1} \exp(\alpha_{i,k}^{(l')}). \hat{\alpha}_{i,k}^{(l)} attentively contributes to the k-th metapath-derived knowledge embedding e_{i,k} as:

e_{i,k} = \sum_{l=1}^{L+1} \hat{\alpha}_{i,k}^{(l)} c_{i,k}^{(l)}.   (3)

Besides, since different metapaths lead to different sub-network extractions, we further adaptively fuse the embeddings from the different sub-networks containing v_i. Similarly, the coefficient τ_{i,k} of v_i induced by the k-th metapath is defined as:

\tau_{i,k} = \frac{1}{|\mathcal{G}_k^{\phi(v_i)}|} \sum_{v_{i'} \in \mathcal{G}_k^{\phi(v_i)}} W_2 \tanh(W'_2 e_{i',k} + b_2),   (4)

where W_2, W′_2, and b_2 are learnable parameters. Eqn. (4) is likewise normalized across all related coefficients as \hat{\tau}_{i,k} = \exp(\tau_{i,k}) / \sum_{k'=1}^{N_{\phi(v_i)}} \exp(\tau_{i,k'}), where N_{φ(v_i)} denotes the number of metapaths starting from node type φ(v_i). We derive the aggregated embedding e_i as follows:

e_i = \sum_{k=1}^{N_{\phi(v_i)}} \hat{\tau}_{i,k} e_{i,k}.   (5)

Given v_i's initial feature e′_i and the learned knowledge embedding e_i, we obtain the aggregated representation x_i by concatenation (denoted as ||), i.e., x_i = e′_i || e_i.
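The two-level attentive fusion of Eqs. (2)-(5) can be sketched as follows. This is an illustrative NumPy re-implementation with made-up dimensions; for brevity, the averaging over the sub-network in Eqs. (2) and (4) is collapsed to the node's own embedding.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fuse_knowledge(c, W1, W1p, b1):
    """Eqs (2)-(3): attentively fuse L+1 knowledge embeddings of one node.

    c: (L+1, d) centrality + similarity embeddings c_{i,k}^{(l)} within
    one metapath-induced sub-network.
    """
    scores = np.array([W1 @ np.tanh(W1p @ c_l + b1) for c_l in c])
    alpha = softmax(scores)
    return alpha @ c                      # e_{i,k}, Eq. (3)

def fuse_metapaths(e, W2, W2p, b2):
    """Eqs (4)-(5): fuse the per-metapath embeddings e_{i,k} into e_i."""
    scores = np.array([W2 @ np.tanh(W2p @ e_k + b2) for e_k in e])
    tau = softmax(scores)
    return tau @ e                        # e_i, Eq. (5)

rng = np.random.default_rng(0)
L1, K, d = 5, 3, 16                       # L+1 knowledge views, K metapaths
c = rng.normal(size=(K, L1, d))
W1, W1p, b1 = rng.normal(size=d), rng.normal(size=(d, d)), rng.normal(size=d)
W2, W2p, b2 = rng.normal(size=d), rng.normal(size=(d, d)), rng.normal(size=d)
e_ik = np.stack([fuse_knowledge(c[k], W1, W1p, b1) for k in range(K)])
e_i = fuse_metapaths(e_ik, W2, W2p, b2)
print(e_i.shape)                          # (16,)
```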
2) Empirical Representation of Feature Distribution. Intuitively, x_i contains both the initial node features and the refined structural knowledge. We then adaptively learn empirical representations that are informative for representing the unknown high-dimensional node feature distributions. We achieve this by leveraging the self-attention mechanism (Vaswani et al. 2017; Chen et al. 2022c). Specifically, we extract hidden features from the input x_i by implementing M attention heads in each layer. We denote the hidden feature of node v_i learned by the r-th layer as h_i^{(r)}. We follow the conventional computation protocol to first obtain d-dimensional query, key, and value variables:

q_{i,m}^{(r)} = W_{qry}^m h_{i,m}^{(r)},  k_{i,m}^{(r)} = W_{key}^m h_{i,m}^{(r)},  v_{i,m}^{(r)} = W_{val}^m h_{i,m}^{(r)},   (6)

where q_{i,m}^{(r)}, k_{i,m}^{(r)}, and v_{i,m}^{(r)} denote the m-th query, key, and value vectors of v_i at the r-th layer, and W_{qry}^m, W_{key}^m, and W_{val}^m are learnable weights. Then, the attentive coefficient between nodes v_j and v_i is calculated as:

S_m^{(r)}(v_j, v_i) = \frac{\exp\big( q_{i,m}^{(r)} W_{\psi(e_{j,i})} (k_{j,m}^{(r)})^T \mu_{\psi(e_{j,i})} / \sqrt{d} \big)}{\sum_{v_{j'} \in N(v_i)} \exp\big( q_{i,m}^{(r)} W_{\psi(e_{j',i})} (k_{j',m}^{(r)})^T \mu_{\psi(e_{j',i})} / \sqrt{d} \big)},   (7)

where e_{j,i} denotes the edge from v_j to v_i, W_{ψ(e_{j,i})} represents the learnable weight matrix of edge type ψ(e_{j,i}), μ_{ψ(e_{j,i})} is the learnable magnitude for type ψ(e_{j,i}), and N(v_i) is the neighbor set of node v_i. The embedding v_{i,m}^{(r)} is then updated by aggregating adjacent information as follows:

\tilde{v}_{i,m}^{(r)} = \sum_{v_j \in N(v_i)} S_m^{(r)}(v_j, v_i) \, v_{j,m}^{(r)}.   (8)

Let || denote concatenation and W_{out} a learnable weight matrix. We finally complete the target feature representation by iteratively updating from r = 1 to R − 1:

h_i^{(r+1)} = h_i^{(r)} + W_{out} \cdot \big( ||_{m=1}^{M} \tilde{v}_{i,m}^{(r)} \big).   (9)

The output of Eqn. (9), i.e., h_i^R for brevity, is expected to be empirically representative of the unknown node feature distribution with heterogeneous knowledge synergy. We explain our implementation for estimating node importance via measuring the empirical distribution distances as follows.

Node Importance Value Estimation

As mentioned earlier, P_i denotes the feature distribution associated with node v_i. Since we use h_i^R to represent P_i, which is discrete, its empirical CDF can be defined as follows:

F_{P_i}(x) = \frac{1}{d} \sum_{n=1}^{d} \delta\big( x - h_i^R[n] \big),   (10)

where δ(·) returns 1 if the input is zero and 0 otherwise³, and h_i^R[n] is the n-th element. To explicitly measure the distribution distance, we propose to compare P_i with a fixed random reference that functions as the “origin” in the embedding space. Specifically, we introduce a reference distribution P_0 with an associated feature representation h_0 ∈ R^d, the elements of which are uniformly sampled. The distribution distance between P_i and P_0 can then be explicitly measured by the 1-Wasserstein distance, i.e., W(P_0, P_i). As introduced in Preliminaries, W(P_0, P_i) is computed via the optimal transport plan f^*(x) := F_{P_i}^{-1}(F_{P_0}(x)). Based on the empirical CDFs of P_0 and P_i, we can quantitatively interpret f^*(x) as:

f^*\big(x \,|\, h_i^R\big) = \mathrm{argmin}_{x' \in h_i^R} \big[ F_{P_i}(x') = \gamma \big],  \gamma = F_{P_0}(x).   (11)

Moreover, let π(x′ | h_i^R) denote the rank of an input x′ in the ascending sort of the elements of h_i^R. We can then achieve the following algorithmic implementation:

f^*\big(x \,|\, h_i^R\big) = \mathrm{argmin}_{x' \in h_i^R} \big[ \pi(x' | h_i^R) = \pi(x | h_0) \big].   (12)

Note that the indicator π(·) can be pre-processed by applying “argsort” to h_i^R and “sort” to h_0, which essentially permutes and encodes h_i^R with reference to h_0. The resultant representation is denoted as h_i^*:

h_i^* = ||_{n=1}^{d} \big( f^*(h_0[n] \,|\, h_i^R) - h_0[n] \big).   (13)

h_i^* ∈ R^d presents several desirable geometric properties, as it efficiently reflects the 1-Wasserstein distances between distributions:

\|h_i^*\|_1 \propto W(P_0, P_i)  and  \|h_i^* - h_j^*\|_1 \propto W(P_i, P_j).   (14)

³ Dirac delta function with ∫ δ(x) dx = 1 for continuous inputs.

Dataset   | # Nodes | # Edges   | # Node types | # Edge types | Target node | Meaning     | # Nodes with importance | Range
MUSIC10K  | 22,986  | 80,272    | 4            | 8            | Artist      | Familiarity | 4,214 (18.3%)           | [0, 1]
          |         |           |              |              | Song        | Hotness     | 4,411 (19.1%)           | [0, 1]
TMDB5K    | 76,926  | 359,780   | 7            | 12           | Movie       | Popularity  | 4,802 (6.2%)            | [-7.89, 6.77]
          |         |           |              |              | Director    | Box office  | 1,159 (1.5%)            | [0.021, 10.55]
DBLP      | 249,903 | 2,428,250 | 4            | 6            | Author      | H-index     | 101,958 (40.8%)         | [0, 159]
          |         |           |              |              | Paper       | Citations   | 100,000 (40.0%)         | [0, 34191]

Table 1: Statistics of three datasets (MUSIC10K, TMDB5K and DBLP).
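The argsort/sort implementation of Eqs. (12)-(13) mentioned above is compact enough to sketch directly; the function and variable names below are our own illustrative choices, not the paper's code.

```python
import numpy as np

def ot_encode(h_i, h_0):
    """Eqs (12)-(13): rank-matching implementation of the optimal plan.

    h_i: (d,) empirical representation of P_i (here standing in for h_i^R)
    h_0: (d,) fixed uniformly-sampled reference representation of P_0
    Returns h_i^* whose L1 norm is proportional to W(P_0, P_i).
    """
    rank_0 = np.argsort(np.argsort(h_0))  # pi(x | h_0) for each entry
    h_i_sorted = np.sort(h_i)             # ascending order statistics
    transported = h_i_sorted[rank_0]      # f*(h_0[n] | h_i^R), Eq. (12)
    return transported - h_0              # h_i^*, Eq. (13)

rng = np.random.default_rng(0)
d = 8
h_0 = rng.uniform(size=d)
h_i, h_j = rng.normal(size=d), rng.normal(size=d)
hs_i, hs_j = ot_encode(h_i, h_0), ot_encode(h_j, h_0)
# Eq. (14): the L1 norms reflect the empirical 1-Wasserstein distances
print(np.abs(hs_i).sum(), np.abs(hs_i - hs_j).sum())
```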
We attach the proof of Eqn. (14) in Appendix. h_i^* naturally inherits the relative order of the distance ranking with theoretical guarantees. Based on h_i^*, we finally implement the importance estimation function g(·):

g(v_i) = \lambda \cdot h_i^*,   (15)

where λ is a learnable vector providing better regression capability. Obviously, for any node v_i, its importance value g(v_i) is correlated with its distribution distance to the reference P_0. We showcase and analyze its performance superiority over competing methods in Experimental Evaluation.

Training Objective

We adopt the common regression loss with mean squared error between estimated and ground-truth importance values:

L_{mse} = \frac{1}{|A'|} \sum_{j \in A'} \frac{1}{|V_j|} \sum_{v_i \in V_j} \big( g(v_i) - y_{v_i} \big)^2,   (16)

where y_{v_i} denotes the ground-truth importance value of node v_i. The complete training objective is then defined as:

L = L_{mse} + \mu \|\Delta\|_2^2.   (17)

\|\Delta\|_2^2 is the L2-regularizer over trainable embeddings and variables to avoid over-fitting, with hyperparameter μ.

Experiments

We evaluate SKES with the following research questions (RQs):
• RQ1: How does SKES compare to state-of-the-art methods on the tasks of node importance value estimation and important node ranking?
• RQ2: How does our proposed design of knowledge synergy and measurement contribute to SKES performance?
• RQ3: How do the other proposed components of SKES influence the model performance?

Experiment Setups

1) Benchmarks. We include three widely evaluated real-world HIN datasets, namely MUSIC10K, TMDB5K, and DBLP. Dataset statistics are reported in Table 1, and detailed data descriptions are attached in Appendix.

2) Evaluation Metrics. For the node importance estimation task, three metrics are applied for performance evaluation: mean absolute error (MAE), root mean square error (RMSE), and normalized root mean square error (NRMSE); lower values indicate better model performance. For the importance-based ranking task, we use normalized discounted cumulative gain (NDCG) and the Spearman correlation coefficient (SPEARMAN), where higher values indicate better performance.

3) Experimental Settings. In line with prior work (Park et al. 2019; Huang et al. 2022), we perform five-fold cross validation for testing and report the average performance. For each fold, 80% and 20% of the nodes with ground-truth importance values are used for training and testing, and 15% of the training nodes are used for validation. We select symmetric metapaths with lengths less than four. We implement SKES using Python 3.8 and PyTorch 1.8.0 on a Linux machine with 4 Nvidia A100 GPUs and 4 Intel Core i7-8700 CPUs. Following HIVEN (Huang et al. 2022), node features are initialized from textual contents via sentence-BERT (Reimers and Gurevych 2019). We set the learning rate to 10^-3 and train the model with the Adam optimizer. Metapaths and hyperparameter settings are reported in Appendix.

4) Competing Models. We include three groups of existing models: (1) traditional network analytic methods, i.e., PageRank (PR) (Page et al. 1999) and personalized PageRank (PPR) (Haveliwala 2003); (2) machine learning methods, i.e., linear regression (LR) and random forest (RF); (3) neural network based models, i.e., GAT (Veličković et al. 2017), HGT (Hu et al. 2020b), GENI (Park et al. 2019), Multiimport (MULTI) (Park et al. 2020), RGTN (Huang et al. 2021), and HIVEN (Huang et al. 2022). Detailed descriptions are given in Appendix.

Experimental Evaluation (RQ1)

1) Task of Importance Value Estimation.
As shown in Table 2, we observe that: (1) Compared to HGT, GENI, and MULTI, which mainly learn the graph heterogeneity, HIVEN considers the degree centrality, which showcases its usefulness for estimating node importance. (2) SKES further outperforms the baseline methods, with performance gains over 2.75%, 4.76%, and 1.20% in MAE, RMSE, and NRMSE on the three datasets. This demonstrates the effectiveness of structural knowledge exploitation and synergy for accurately estimating importance values. (3) For the target “Director” type of TMDB5K, we observe a performance gap against HIVEN of -2.19% in MAE. One explanation is the data scarcity of “Director” for model training, i.e., 1.5% as shown in Table 1, so that SKES may be under-trained and produce sporadic inaccurate estimations. These “outliers” may significantly influence MAE, which considers the average absolute difference between predicted and actual values. On the contrary, RMSE and NRMSE are less sensitive to these outliers, providing a more objective evaluation by considering the magnitude of errors and reflecting the influence of outliers less prominently. (4) Additionally, we conduct Wilcoxon signed-rank tests, and the results show that all the improvements that SKES achieves are statistically significant at a confidence level of at least 95%.

Method | Song (M/R/N)      | Artist (M/R/N)    | Movie (M/R/N)     | Director (M/R/N)  | Paper (M/R/N)     | Author (M/R/N)
PR     | 0.460/0.489/0.633 | 0.540/0.563/0.596 | 2.517/2.771/0.448 | 0.499/1.005/0.155 | 2.293/3.244/0.352 | 1.728/1.894/0.467
PPR    | 0.460/0.490/0.633 | 0.540/0.563/0.596 | 2.512/2.771/0.448 | 0.499/1.005/0.155 | 2.293/3.243/0.352 | 1.173/1.894/0.467
LR     | 0.137/0.165/0.208 | 0.126/0.164/0.173 | 0.672/0.851/0.138 | 0.541/0.812/0.125 | 1.061/1.319/0.146 | 0.628/0.7657/0.204
RF     | 0.122/0.152/0.213 | 0.110/0.143/0.151 | 0.774/0.954/0.154 | 0.500/0.857/0.132 | 1.057/1.312/0.145 | 0.555/0.674/0.180
NN     | 0.127/0.155/0.200 | 0.111/0.143/0.151 | 0.720/0.889/0.144 | 0.402/0.801/0.124 | 1.042/1.297/0.141 | 1.133/1.402/0.153
GAT    | 0.126/0.156/0.205 | 0.109/0.140/0.149 | 0.635/0.808/0.131 | 0.415/0.753/0.116 | 0.991/1.240/0.143 | 0.492/0.606/0.153
HGT    | 0.129/0.160/0.207 | 0.118/0.145/0.154 | 0.581/0.764/0.126 | 0.352/0.681/0.105 | 0.996/1.248/0.135 | 0.514/0.629/0.158
GENI   | 0.131/0.158/0.200 | 0.121/0.155/0.158 | 0.594/0.748/0.121 | 0.347/0.677/0.104 | 1.006/1.259/0.145 | 0.493/0.607/0.154
MULTI  | 0.147/0.185/0.234 | 0.147/0.186/0.190 | 0.982/1.166/0.189 | 0.479/0.764/0.117 | 2.061/2.451/0.268 | 1.316/1.467/0.371
RGTN   | 0.123/0.155/0.216 | 0.111/0.143/0.153 | 0.624/0.798/0.182 | 0.316/0.557/0.093 | 0.990/1.240/0.136 | 0.496/0.613/0.155
HIVEN  | 0.122/0.152/0.209 | 0.102/0.132/0.144 | 0.523/0.664/0.108 | 0.268/0.539/0.084 | 1.024/1.283/0.141 | 0.507/0.620/0.151
SKES   | 0.106/0.137/0.177 | 0.099/0.126/0.133 | 0.509/0.667/0.106 | 0.274/0.507/0.083 | 0.940/1.179/0.130 | 0.462/0.571/0.143
Gain   | 15.09%/10.94%/12.99% | 3.03%/4.76%/8.27% | 2.75%/-0.45%/1.89% | -2.19%/6.31%/1.20% | 5.32%/5.17%/3.85% | 6.49%/6.13%/5.59%

Table 2: Quantitative comparison on the importance value estimation task. Song and Artist belong to MUSIC10K, Movie and Director to TMDB5K, and Paper and Author to DBLP; M, R, and N denote MAE, RMSE, and NRMSE, respectively (bold and underlined digits in the original mark the best and second-best values).
Method | Song (SP/NDCG) | Artist (SP/NDCG) | Movie (SP/NDCG) | Director (SP/NDCG) | Paper (SP/NDCG) | Author (SP/NDCG)
PR     | 0.013/0.596  | 0.176/0.743  | 0.548/0.775 | 0.182/0.473   | -0.104/0.331 | 0.443/0.916
PPR    | -0.020/0.581 | 0.188/0.732  | 0.707/0.846 | 0.195/0.489   | 0.051/0.333  | 0.453/0.913
LR     | 0.226/0.701  | -0.037/0.645 | 0.669/0.858 | 0.393/0.672   | 0.312/0.538  | 0.2445/0.676
RF     | 0.461/0.797  | 0.441/0.783  | 0.590/0.854 | 0.333/0.484   | 0.325/0.615  | 0.396/0.759
NN     | 0.383/0.774  | 0.431/0.820  | 0.657/0.850 | 0.414/0.613   | 0.352/0.583  | -0.002/0.399
GAT    | 0.408/0.786  | 0.481/0.830  | 0.728/0.867 | 0.660/0.794   | 0.401/0.597  | 0.491/0.922
HGT    | 0.342/0.753  | 0.448/0.810  | 0.758/0.892 | 0.301/0.463   | 0.426/0.644  | 0.458/0.857
GENI   | 0.402/0.793  | 0.485/0.784  | 0.753/0.895 | 0.678/0.851   | 0.412/0.602  | 0.491/0.923
MULTI  | 0.467/0.808  | 0.500/0.871  | 0.728/0.867 | 0.660/0.704   | 0.364/0.596  | 0.452/0.918
RGTN   | 0.414/0.787  | 0.486/0.853  | 0.682/0.901 | 0.623/0.822   | 0.438/0.643  | 0.488/0.907
HIVEN  | 0.480/0.814  | 0.544/0.885  | 0.793/0.910 | 0.701/0.862   | 0.404/0.612  | 0.459/0.913
SKES   | 0.565/0.865  | 0.602/0.894  | 0.823/0.942 | 0.680/0.847   | 0.483/0.674  | 0.589/0.925
Gain   | 17.77%/6.27% | 10.66%/1.02% | 3.78%/3.52% | -3.00%/-1.74% | 10.27%/4.66% | 19.96%/0.22%

Table 3: Quantitative comparison on the importance-based node ranking task (SP denotes SPEARMAN).

Metric                   | Without → with L_mrl | Change
SPEARMAN                 | 0.680 → 0.755        | (+11.03%)
NDCG                     | 0.847 → 0.902        | (+6.49%)
Training time/epoch (ms) | 745 → 1,219          | (+63.62%)

Table 4: Evaluation with the marginal ranking loss.

2) Task of Important Node Ranking. We present the evaluation results for important node ranking in Table 3, with twofold discussions: (1) Our model generally presents performance superiority over all baselines, with metric improvements of 3.78%∼19.96% in SPEARMAN and 0.22%∼6.27% in NDCG, respectively. This is intuitive: SKES, performing well on the essential importance value estimation task, naturally inherits a good capability for ranking. (2) For the type “Director” of TMDB5K, SKES obtains the second-best performance. Since our model incorporates more heterogeneous information, this implies that it may need more training data for better knowledge learning and fusion. One solution could be to supplement the original regression objective, i.e., the MSE loss of Eqn. (16), with a triplet ranking objective, e.g., the following marginal ranking loss:

L_{mrl} = \max\big( 0, \, m + |g(v_i) - g(v^+)| - |g(v_i) - g(v^-)| \big),   (18)

where m is the margin, and v^+ and v^- are two nodes such that the importance value gap between v^+ and v_i is larger than the gap between v^- and v_i. The general idea of Eqn. (18) is to reduce the disparity between v_i and its positive pair v^+ relative to its negative counterpart v^-. We conduct experiments on this case, with the results shown in Table 4. The observation indicates that supplementing the ranking regularization of Eqn. (18) boosts performance as expected. With the performance increasing by over 6.49%, our model also surpasses HIVEN as reported in Table 3. On the other hand, the training time cost inevitably increases, which is a practical trade-off to consider. Hence, in this paper, we mainly report results based on our original objective function and leave the further design of learning frameworks as future work.
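For reference, Eqn. (18) amounts to a standard hinge (margin) formulation; a minimal sketch, with an assumed margin value, is:

```python
def marginal_ranking_loss(g_i, g_pos, g_neg, margin=0.1):
    """Eq. (18): hinge loss that pulls the estimate of v_i toward its
    positive pair v+ and away from its negative counterpart v-.

    margin=0.1 is an illustrative value; the paper's margin m is a
    hyperparameter it does not report.
    """
    return max(0.0, margin + abs(g_i - g_pos) - abs(g_i - g_neg))

# toy check: a well-ordered triplet incurs zero loss
print(marginal_ranking_loss(g_i=1.0, g_pos=1.05, g_neg=3.0))  # 0.0
print(marginal_ranking_loss(g_i=1.0, g_pos=2.0, g_neg=1.1))   # > 0
```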
Study of Knowledge Synergy and Measurement (RQ2)

1) Synergy of Structural Knowledge. To validate the contribution of structural knowledge, we randomly disable a proportion of the knowledge, i.e., c_{i,k}^{(l)}, from 0% (intact) to -80%.

Figure 3: (1) Decreasing contribution of structural knowledge (testing metrics vs. disabled percentage on MUSIC10K, TMDB5K, and DBLP); (2) curves of evaluation MAE over training epochs on TMDB5K (best view in color).

Variant | Movie (MAE/RMSE/NRMSE) | Director (MAE/RMSE/NRMSE)
w/o WD  | 0.540/0.692/0.120      | 0.284/0.523/0.092
w/o λ   | 0.524/0.675/0.116      | 0.279/0.519/0.092
SKES    | 0.509/0.667/0.106      | 0.274/0.507/0.083

Table 5: Study of node importance estimation.

Our observations from Figure 3 are twofold: (1) From the upper-row figures, we notice remarkable performance degradation across the three datasets, where DBLP presents more pronounced perturbations in the MAE and RMSE curves. This demonstrates the efficacy of our knowledge synergy mechanism in identifying node importance, especially in larger HINs with diverse and complex structural information. (2) We plot the MAE curves for the cases of keeping the knowledge intact, removing 40%, and removing 80% of the prior knowledge over the first 500 training epochs on “Movie” of TMDB5K. As shown in the lower-row figure, SKES produces far fewer bursting perturbations than the curves with 40% and 80% removed. This implies that our implementation of adaptively fusing heterogeneous knowledge also helps stabilize model performance.

2) Mechanism of Node Importance Estimation. To evaluate the effectiveness of our proposed importance estimation method, we design two variants on TMDB5K. (1) First, we replace our original design based on the 1-Wasserstein distance with a simple two-layer MLP for node importance estimation; we denote this variant as w/o WD. (2) Second, we retain the 1-Wasserstein measurement but remove λ in Eqn. (15) for regression, denoted as w/o λ. The results in Table 5 not only justify that integrating Optimal Transport theory for importance estimation achieves better performance (comparing w/o WD to w/o λ), but also prove the simplicity yet effectiveness of the non-linear value regression with learnable λ (comparing SKES to w/o λ).

We further visualize the absolute value gap between the ground-truth and estimated values of “Movie” nodes. To provide a readable visualization in Figure 4, the curves are based on averaged values with a rolling window of 10 nodes; the closer a curve is to the bottom, the more accurate the value estimation. The curves in Figure 4 indicate that SKES consistently performs better than the two variants.

Figure 4: Absolute importance value gap between ground-truth and estimated values, plotted over node indices (best view in color).

Variant  | Movie (MAE/RMSE/NRMSE) | Director (MAE/RMSE/NRMSE)
w/o NH   | 0.551/0.722/0.113      | 0.320/0.573/0.098
w/o ATT  | 0.544/0.696/0.118      | 0.319/0.549/0.096
SKES     | 0.509/0.667/0.106      | 0.274/0.507/0.083

Table 6: Ablation study of other module designs.

Ablation Study (RQ3)

1) Network Heterogeneity Learning. We adopt a metapath-based methodology to learn heterogeneity. To validate its usefulness, we propose a variant w/o NH that directly treats the input networks as homogeneous. From Table 6, we observe over 8.33% (MAE of the “Movie” type) performance decay on TMDB5K. This demonstrates the necessity of explicitly distinguishing the heterogeneity of node and edge types for the node importance estimation task.

2) Self-attention in Synergetic Representation Learning. We create a variant w/o ATT by disabling all designs in Eqn's. (6)-(9) and simply using e_i to replace h_i^R for importance estimation.
The empirical results comparing w/o ATT and SKES in Table 6 clearly demonstrate that the self-attention mechanism also works in our model, attentively adjusting the contributions of the different sources of structural prior knowledge during model learning.

Conclusion and Future Work

We propose a novel framework SKES for estimating HIN node importance. SKES effectively leverages structural knowledge to harness information synergy, providing a robust measurement of node importance. Our empirical model evaluation on three public benchmarks demonstrates the performance superiority of SKES against competing baselines. As for future work, we identify two promising directions: (1) It is worth investigating other learning paradigms (Zhang et al. 2022b, 2023c,a; Song, Zhang, and King 2023b,a; He et al. 2023c) to further improve the quality of the node embeddings learned from heterogeneous information. (2) We also plan to incorporate language/vision modeling (Qiu et al. 2022, 2021; Chen et al. 2022d; Li et al. 2022; Sun et al. 2022; Li et al. 2020; Hu et al. 2021a, 2023; Zhu et al. 2023), as practical HINs may contain multi-modal information.

Acknowledgments

The work described here was partially supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14222922, RGC GRF No. 2151185) and (RGC Research Impact Fund R5034-18; CUHK 2410021). Yixiang Fang was supported in part by NSFC (Grant 62102341), Guangdong Talent Program (Grant 2021QN02X826), Shenzhen Science and Technology Program (Grants JCYJ20220530143602006 and ZDSYS20211021111415025), and Shenzhen Science and Technology Program and Guangdong Key Lab of Mathematical Foundations for Artificial Intelligence.

References

Chen, Y.; Fang, Y.; Zhang, Y.; and King, I. 2023a. Bipartite Graph Convolutional Hashing for Effective and Efficient Top-N Search in Hamming Space. In WWW, 3164–3172.
Chen, Y.; Guo, H.; Zhang, Y.; Ma, C.; Tang, R.; Li, J.; and King, I. 2022a. Learning binarized graph representations with multi-faceted quantization reinforcement for top-k recommendation. In SIGKDD, 168–178.
Chen, Y.; Truong, T.; Shen, X.; Wang, M.; Li, J.; Chan, J.; and King, I. 2023b. Topological representation learning for e-commerce shopping behaviors. In MLG-KDD.
Chen, Y.; Wu, Y.; Ma, S.; and King, I. 2020. A Literature Review of Recent Graph Embedding Techniques for Biomedical Data. In ICONIP, 21–29.
Chen, Y.; Yang, M.; Zhang, Y.; Zhao, M.; Meng, Z.; Hao, J.; and King, I. 2022b. Modeling scale-free graphs with hyperbolic geometry for knowledge-aware recommendation. In WSDM, 94–102.
Chen, Y.; Yang, Y.; Wang, Y.; Bai, J.; Song, X.; and King, I. 2022c. Attentive knowledge-aware graph convolutional networks with collaborative guidance for personalized recommendation. In ICDE, 299–311. IEEE.
Chen, Y.; Zhang, J.; Fang, Y.; Cao, X.; and King, I. 2021. Efficient community search over large directed graphs: An augmented index-based approach. In IJCAI, 3544–3550.
Chen, Y.; Zhang, Y.; Guo, H.; Tang, R.; and King, I. 2022d. An Effective Post-training Embedding Binarization Approach for Fast Online Top-K Passage Matching. In AACL, 102–108.
Chen, Y.; Zhang, Y.; Yang, M.; Song, Z.; Ma, C.; and King, I. 2023c. WSFE: Wasserstein Sub-graph Feature Encoder for Effective User Segmentation in Collaborative Filtering. In SIGIR, 2521–2525.
Dorogovtsev, S. N.; Goltsev, A. V.; and Mendes, J. F. F. 2006. K-core organization of complex networks. Physical Review Letters, 96(4): 040601.
Egghe, L.; et al. 2006. An improvement of the h-index: The g-index. ISSI Newsletter, 2(1): 8–9.
Fang, Y.; Cheng, R.; Chen, Y.; Luo, S.; and Hu, J. 2017. Effective and efficient attributed community search. The VLDB Journal, 26: 803–828.
Fu, X.; and King, I. 2023. FedHGN: A Federated Framework for Heterogeneous Graph Neural Networks. In IJCAI, 3705–3713.
Fu, X.; and King, I. 2024. MECCH: Metapath Context Convolution-based Heterogeneous Graph Neural Networks. Neural Networks, 170: 266–275.
Fu, X.; Zhang, J.; Meng, Z.; and King, I. 2020. MAGNN: Metapath aggregated graph neural network for heterogeneous graph embedding. In WWW, 2331–2341.
Grover, A.; and Leskovec, J. 2016. node2vec: Scalable feature learning for networks. In SIGKDD, 855–864.
Haveliwala, T. H. 2003. Topic-sensitive PageRank: A context-sensitive ranking algorithm for web search. TKDE, 15(4): 784–796.
He, B.; He, X.; Zhang, R.; Zhang, Y.; Tang, R.; and Ma, C. 2023a. Dynamic Embedding Size Search with Minimum Regret for Streaming Recommender System. In CIKM, 741–750.
He, B.; He, X.; Zhang, Y.; Tang, R.; and Ma, C. 2023b. Dynamically Expandable Graph Convolution for Streaming Recommendation. In WWW, 1457–1467.
He, B.; Sun, Z.; Liu, J.; Zhang, S.; Chen, X.; and Ma, C. 2023c. Offline imitation learning with variational counterfactual reasoning. arXiv preprint arXiv:2310.04706.
Hirsch, J. E. 2005. An index to quantify an individual's scientific research output. PNAS, 102(46): 16569–16572.
Hu, X.; Chen, J.; Li, X.; Guo, Y.; Wen, L.; Yu, P. S.; and Guo, Z. 2023. Do Large Language Models Know about Facts? arXiv preprint arXiv:2310.05177.
Hu, X.; Guo, Z.; Wu, G.; Liu, A.; Wen, L.; and Yu, P. S. 2022. CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking. In NAACL-HLT, 3362–3376.
Hu, X.; Wen, L.; Xu, Y.; Zhang, C.; and Yu, P. S. 2020a. SelfORE: Self-supervised Relational Feature Learning for Open Relation Extraction. In EMNLP, 3673–3682.
Hu, X.; Zhang, C.; Ma, F.; Liu, C.; Wen, L.; and Yu, P. S. 2021a. Semi-supervised Relation Extraction via Incremental Meta Self-Training. In EMNLP, 487–496.
Hu, X.; Zhang, C.; Yang, Y.; Li, X.; Lin, L.; Wen, L.; and Yu, P. S. 2021b. Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction. In EMNLP, 2737–2746.
Hu, Z.; Dong, Y.; Wang, K.; and Sun, Y. 2020b. Heterogeneous graph transformer. In WWW, 2704–2710.
Huang, C.; Fang, Y.; Lin, X.; Cao, X.; Zhang, W.; and Orlowska, M. 2022. Estimating Node Importance Values in Heterogeneous Information Networks. In ICDE, 846–858.
Huang, H.; Sun, L.; Du, B.; Liu, C.; Lv, W.; and Xiong, H. 2021. Representation Learning on Knowledge Graphs for Node Importance Estimation. In SIGKDD, 646–655.
Korotin, A.; Selikhanovych, D.; and Burnaev, E. 2023. Neural Optimal Transport. In ICLR. OpenReview.net.
Li, J.; Li, Z.; Ge, T.; King, I.; and Lyu, M. R. 2022. Text Revision by On-the-Fly Representation Optimization. In AAAI, 10956–10964.
Li, J.; Li, Z.; Mou, L.; Jiang, X.; Lyu, M.; and King, I. 2020. Unsupervised text generation by learning from search. In NeurIPS, volume 33, 10820–10831.
Liang, L.; Hu, X.; Xu, Z.; Song, Z.; and King, I. 2023. Predicting Global Label Relationship Matrix for Graph Neural Networks under Heterophily. In NeurIPS.
Marchiori, M.; and Latora, V. 2000. Harmony in the small-world. Physica A: Statistical Mechanics and its Applications, 285(3-4): 539–546.
Naderializadeh, N.; Comer, J. F.; Andrews, R.; Hoffmann, H.; and Kolouri, S. 2021. Pooling by sliced-Wasserstein embedding. In NeurIPS, volume 34, 3389–3400.
Negre, C. F.; Morzan, U. N.; Hendrickson, H. P.; Pal, R.; Lisi, G. P.; Loria, J. P.; Rivalta, I.; Ho, J.; and Batista, V. S. 2018. Eigenvector centrality for characterization of protein allosteric pathways. PNAS, 115(52): E12201–E12208.
Nieminen, J. 1974. On the centrality in a graph. Scandinavian Journal of Psychology, 15(1): 332–336.
Nietert, S.; Goldfeld, Z.; Sadhu, R.; and Kato, K. 2022. Statistical, robustness, and computational guarantees for sliced Wasserstein distances. NeurIPS, 35: 28179–28193.
Page, L.; Brin, S.; Motwani, R.; and Winograd, T. 1999. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab.
Park, N.; Kan, A.; Dong, X. L.; Zhao, T.; and Faloutsos, C. 2019. Estimating node importance in knowledge graphs using graph neural networks. In SIGKDD, 596–606.
Park, N.; Kan, A.; Dong, X. L.; Zhao, T.; and Faloutsos, C. 2020. MultiImport: Inferring node importance in a knowledge graph from multiple input signals. In SIGKDD, 503–512.
Qiu, Z.; Su, Q.; Ou, Z.; Yu, J.; and Chen, C. 2021. Unsupervised Hashing with Contrastive Information Bottleneck. In IJCAI, 959–965.
Qiu, Z.; Su, Q.; Yu, J.; and Si, S. 2022. Efficient Document Retrieval by End-to-End Refining and Quantizing BERT Embedding with Contrastive Product Quantization. In EMNLP, 853–863.
Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. arXiv preprint arXiv:1908.10084.
Sabidussi, G. 1966. The centrality index of a graph. Psychometrika, 31(4): 581–603.
Shaw, M. E. 1954. Group structure and the behavior of individuals in small groups. The Journal of Psychology, 38(1): 139–149.
Song, Z.; Zhang, Y.; and King, I. 2023a. No Change, No Gain: Empowering Graph Neural Networks with Expected Model Change Maximization for Active Learning. In NeurIPS.
Song, Z.; Zhang, Y.; and King, I. 2023b. Optimal Block-wise Asymmetric Graph Construction for Graph-based Semi-supervised Learning. In NeurIPS.
Song, Z.; Zhang, Y.; and King, I. 2023c. Towards Fair Financial Services for All: A Temporal GNN Approach for Individual Fairness on Transaction Networks. In CIKM, 2331–2341. ACM.
Sun, X.; Ge, T.; Ma, S.; Li, J.; Wei, F.; and Wang, H. 2022. A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model. In IJCAI, 4367–4374. Main Track.
Sun, Y.; Han, J.; Yan, X.; Yu, P. S.; and Wu, T. 2011. PathSim: Meta path-based top-k similarity search in heterogeneous information networks. VLDB, 4(11): 992–1003.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. NeurIPS, 30.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Villani, C.; et al. 2009. Optimal Transport: Old and New, volume 338. Springer.
Yang, M.; Li, Z.; Zhou, M.; Liu, J.; and King, I. 2022. HICF: Hyperbolic informative collaborative filtering. In SIGKDD, 2212–2221.
Yang, M.; Zhou, M.; Pan, L.; and King, I. 2023a. κHGCN: Tree-likeness Modeling via Continuous and Discrete Curvature Learning. In SIGKDD, 2965–2977.
Yang, M.; Zhou, M.; Ying, R.; Chen, Y.; and King, I. 2023b. Hyperbolic Representation Learning: Revisiting and Advancing. In ICML.
Zhang, X.; Chen, Y.; Gao, C.; Liao, Q.; Zhao, S.; and King, I. 2022a. Knowledge-aware Neural Networks with Personalized Feature Referencing for Cold-start Recommendation. arXiv preprint arXiv:2209.13973.
Zhang, Y.; Chen, Y.; Song, Z.; and King, I. 2023a. Contrastive cross-scale graph knowledge synergy. In SIGKDD, 3422–3433.
Zhang, Y.; and Zhu, H. 2019. Doc2hash: Learning discrete latent variables for documents retrieval. In NAACL.
Zhang, Y.; Zhu, H.; Chen, Y.; Song, Z.; Koniusz, P.; King, I.; et al. 2023b. Mitigating the Popularity Bias of Graph Collaborative Filtering: A Dimensional Collapse Perspective. In NeurIPS.
Zhang, Y.; Zhu, H.; Song, Z.; Koniusz, P.; and King, I. 2022b. COSTA: Covariance-preserving feature augmentation for graph contrastive learning. In SIGKDD.
Zhang, Y.; Zhu, H.; Song, Z.; Koniusz, P.; and King, I. 2023c. Spectral feature augmentation for graph contrastive learning and beyond. In AAAI.
Zheng, Y.; Zhang, X.; Chen, S.; Zhang, X.; Yang, X.; and Wang, D. 2021. When Convolutional Network Meets Temporal Heterogeneous Graphs: An Effective Community Detection Method. IEEE TKDE.
Zhu, X.; Zhang, R.; He, B.; Guo, Z.; Zeng, Z.; Qin, Z.; Zhang, S.; and Gao, P. 2023. PointCLIP V2: Prompting CLIP and GPT for powerful 3D open-world learning. In ICCV, 2639–2650.
KGTS: Contrastive Trajectory Similarity Learning over Prompt Knowledge Graph Embedding

Zhen Chen1, Dalin Zhang2, Shanshan Feng3,4, Kaixuan Chen2, Lisi Chen1, Peng Han1*, Shuo Shang1*
1University of Electronic Science and Technology of China
2Aalborg University, Denmark
3Centre for Frontier AI Research, A*STAR, Singapore
4Institute of High-Performance Computing, A*STAR, Singapore
{chenzhen059, jedi.shang}@gmail.com, {dalinz, kchen}@cs.aau.dk, {victor fengss, penghan study}@foxmail.com, [email protected]

Abstract

Trajectory similarity computation serves as a fundamental functionality of various spatial information applications. Although existing deep learning similarity computation methods offer better efficiency and accuracy than non-learning solutions, they are still immature in trajectory embedding and suffer from poor generality and heavy preprocessing for training. Targeting these limitations, we propose a novel framework named KGTS, based on knowledge graph grid embedding, prompt trajectory embedding, and unsupervised contrastive learning, for improved trajectory similarity computation. Specifically, we first embed map grids with a GRot embedding method to vigorously grasp the neighbouring relations of grids. A prompt trajectory embedding network then incorporates the resulting grid embedding and extracts trajectory structure and point-order information. It is trained by unsupervised contrastive learning, which not only alleviates the heavy preprocessing burden but also provides exceptional generality through creatively designed strategies for positive sample generation. The prompt trajectory embedding adopts a customized prompt paradigm to mitigate the gap between the grid embedding and the trajectory embedding. Extensive experiments on two real-world trajectory datasets demonstrate the superior performance of KGTS over state-of-the-art methods.

Introduction

GPS sensors emit a continual stream of location points, the sequence of which assembles a trajectory that describes the spatial path of a moving object over time. The similarity between two trajectories of different objects, or of different time segments of the same object, is a fundamental measure for many real-world applications, such as animal migration analysis (Li et al. 2011), transportation optimization (Song et al. 2014), route retrieval (Ranu et al. 2015), and traffic prediction (Li et al. 2023; Lin et al. 2023; Chang et al. 2023). Therefore, trajectory similarity computation has been a research hotspot for decades (Yang et al. 2021; Yao et al. 2019; Evans et al. 2013; van Kreveld and Luo 2007).

*Shuo Shang and Peng Han are corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Various trajectory similarity computation approaches have been investigated, such as dynamic time warping (DTW) (Yi, Jagadish, and Faloutsos 1998), edit distance on real sequences (EDR) (Chen, Özsu, and Oria 2005), and Hausdorff distance (Atev, Miller, and Papanikolopoulos 2010). Despite some success, they suffer from a common deficiency of high computation complexity (Li et al. 2018). Specifically, since two trajectories need to be aligned point by point, the computation complexity attains a quadratic form O(l^2), where l is the mean length of the trajectories. This flaw hinders their practical application to long trajectories and large datasets. Although previous methods describe similarity reasonably well, they still have problems in several respects.
First, existing deep learning similarity computation work requires heavy preprocessing. Specifically, these studies generally rely on supervised learning, which requires the similarity measure of each pair of trajectories as the supervision label (Yang et al. 2021; Han et al. 2021; Yang et al. 2022b). However, the similarity measures are not directly available in the raw dataset and thus have to be calculated during dataset preprocessing, requiring extensive computation. Assuming there are q trajectories in the dataset, each of which has l points, the complexity of computing the similarities of all trajectory pairs is quadratic, i.e., O(q^2 l^2). It is therefore costly to obtain a ready-to-use dataset when the raw dataset has numerous long trajectories.

Second, a realistic dataset cannot contain trajectory patterns exhaustively. Supervision-based methods find positive samples within the dataset; however, for some trajectories the chosen positive samples are not truly similar to them. In addition, previous methods consider location distance more than structural similarity, even though trajectories with highly similar structures should also be treated as similar. Therefore, a large dataset with diverse trajectory relationships is essential for a model with high generality; however, it is unrealistic to obtain a real-world dataset with exhaustive relationships between trajectories.

In addition to these limitations, there is still room to improve the performance of existing solutions. Some studies (Li et al. 2018; Yao et al. 2019) do not use dedicated modules to embed the three sorts of information of a trajectory, namely location, structure, and the order of points, while others rely on outdated modules or ineffective frameworks.

In light of the above considerations, we attempt to address these issues. For the first problem, we adopt an unsupervised method, achieved through contrastive learning, to remove the burden of computing labels. For the second problem, ordinary data augmentation (Deng et al. 2022) alone has clear limitations and is not sufficient for similarity computation; we propose a new method to alleviate the shortcomings of unsupervised methods. Specifically, this enhancement aims at generating positive samples that are genuinely similar to the original trajectories. To extract spatial information accurately, we consider the relationship between grids and seek better grid representations through knowledge graph embedding methods.

In this paper, we propose KGTS, a brand-new framework for improved trajectory similarity computation. Specifically, building on the grid-based approach (Li et al. 2018; Zhang et al. 2011), we first employ a novel relation model GRot (Grid RotatE) to embed the grids of the entire space, encouraging neighbouring grids to have similar embeddings. Next, to alleviate the incompatibility between pre-training and fine-tuning, we propose a prompt trajectory embedding module with a novel attentive prompt scheme that effectively incorporates the grid embedding into the final trajectory embedding. The trajectory embedding module has a GCN to model the trajectory structure and a GRU to extract the order of points in a trajectory. We train the prompt trajectory embedding module with unsupervised contrastive learning, so that supervision labels, and consequently the costly preprocessing, are not required.
Besides, three novel strategies of positive sample generation for contrastive learning are devised to simulate diverse cases of highly similar trajectories, thus enhancing the model's generality. The main contributions are summarized as follows:
• First, we propose a framework KGTS comprising grid embedding and prompt trajectory embedding, trained through an unsupervised scheme.
• Second, we propose GRot for trajectory grid embedding, so that spatially neighbouring grids are encouraged to have similar embeddings, and a prompt trajectory embedding module that properly grasps the location, structure, and point-order information of trajectories.
• Third, we train the prompt trajectory embedding module using unsupervised contrastive learning with newly designed positive sample generation strategies.
• Finally, we conduct extensive experiments on two large benchmark datasets to justify our design and its superior performance over state-of-the-art studies.

Related Work

Trajectory Similarity Computation

Trajectory similarity computation methods can roughly be divided into two categories: knowledge-based methods and learning-based methods. The computational complexity of knowledge-based methods, such as longest common subsequence (LCSS) (Vlachos, Gunopulos, and Kollios 2002), dynamic time warping (DTW) (Yi, Jagadish, and Faloutsos 1998), edit distance with real penalty (ERP) (Chen and Ng 2004), and edit distance on real sequences (EDR) (Chen, Özsu, and Oria 2005), heavily depends on the length of the trajectories. With the rapid development of machine learning, especially deep learning, in various areas (LeCun, Bengio, and Hinton 2015), an early study is t2vec (Li et al. 2018; Yao et al. 2019), which employs a seq2seq model to consider the order of points for trajectory embedding. T3S (Yang et al. 2021) is a combination of the grid-based method and the coordinate-based method that alleviates the impact of noisy points and measurement errors. A recent work, GTS (Han et al. 2021), represents trajectories with a two-step process, i.e., a skip-gram point embedding step and a GNN-based trajectory embedding step, for good performance.

Prompt Learning

In this work, we adopt a two-step process: the first step embeds trajectory points and the second step embeds whole trajectories. Since these two steps are trained separately and the first step does not comply with the final trajectory embedding objective, we adopt the prompt learning (Liu et al. 2021) scheme to facilitate the second-step learning process. Prompt learning is a technique that autonomously tunes the downstream learning process to fit a pre-trained model through prompts. In this light, recent studies propose to learn the prompt together with the downstream model (Gao, Fisch, and Chen 2021; Jiang et al. 2020), revising earlier studies (Brown et al. 2020; Raffel et al. 2020).

Contrastive Learning

Contrastive learning has recently shown profound performance and influence on various tasks (Rethmeier and Augenstein 2021; Yu et al. 2022; Yang et al. 2022a). A key aspect of contrastive learning is the creation of positive and negative samples. SimCSE (Gao, Yao, and Chen 2021) utilizes the simple dropout technique to generate positive samples from input sentences. Deep InfoMax (Hjelm et al. 2019) creates positive and negative samples by manipulating local and global features of images. CBT (Sun et al. 2019a) masks a part of a video clip and uses the masked videos as positive samples.
Preliminary

Problem Statement

DEFINITION 1 (Trajectory). A trajectory T is formed as a sequence of points, i.e., T = ⟨p1, p2, ..., pn⟩, where p_i = (lat_i, lon_i) is the i-th point associated with a tuple of latitude lat_i and longitude lon_i, and n is the length of the trajectory.

DEFINITION 2 (Trajectory Similarity). Given two trajectories T_i and T_j, their distance dist(T_i, T_j) is used as the similarity measure sim^*(T_i, T_j) to assess the similarity between T_i and T_j. Then, T_j is similar to T_i if sim^*(T_i, T_j) < ϵ or ∀T_p ∈ D, sim^*(T_i, T_j) ≤ sim^*(T_i, T_p), where D is a set of trajectories.

Given a specific similarity measure sim^*(·, ·), our goal is to find an optimal trajectory embedding function f_e^*(·):

f_e^* = \arg\min_{f_e} \mathbb{E}_{\mathcal{D}} \big| sim^*(f_e(T_i), f_e(T_j)) - sim^*(T_i, T_j) \big|,   (1)

where T_i, T_j ∈ D. Note that sim^*(·, ·) can be any distance metric, such as Euclidean distance for trajectory computation or cosine distance for representation computation.

Grid-based Trajectory Embedding

Realistic trajectories often have non-uniform sampling rates and noisy points. For example, GPS receivers may miss data or record erroneously due to poor satellite visibility. One of the problems with trajectory embedding is how to embed points. Following previous work (Yang et al. 2021; Li et al. 2018), we adopt grid-based trajectory embedding to handle varying sampling rates and noise in trajectories. Specifically, the space (e.g., a map) is partitioned into m grids of equal size (see the map in Figure 1), and a trajectory point falling into a grid is represented by the grid entity g_i. A trajectory T can then be represented by a sequence of grids:

T = ⟨p1, p2, ..., pn⟩ ⇒ T ≈ ⟨g1, g2, ..., gn⟩,   (2)

where g_i ∈ [1, m] is the grid ID of the i-th trajectory grid. The trajectory embedding problem is thus converted to embedding the grid sequence of a trajectory.
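A minimal sketch of the discretization in Eq. (2) follows; the corner coordinates, column count, and function name are our own illustrative choices (the paper only fixes the 0.001-degree interval reported in the Implementation Details).

```python
import numpy as np

def to_grid_ids(traj, lat0, lon0, cell=0.001, n_cols=280):
    """Map a trajectory of (lat, lon) points to grid IDs (Eq. (2)).

    lat0/lon0 are the lower-left corner of the covered area; cell is
    the grid interval in degrees and n_cols the number of grid columns.
    All parameter names here are illustrative assumptions.
    """
    traj = np.asarray(traj, dtype=float)
    rows = np.floor((traj[:, 0] - lat0) / cell).astype(int)
    cols = np.floor((traj[:, 1] - lon0) / cell).astype(int)
    return rows * n_cols + cols + 1  # 1-based grid IDs in [1, m]

# toy usage with two nearby points in Porto-like coordinates
print(to_grid_ids([(41.1500, -8.6100), (41.1507, -8.6093)],
                  lat0=41.10, lon0=-8.70))
```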
The Proposed KGTS Method

As shown in Figure 1, our KGTS has two main modules: a grid embedding module that embeds all grids in the entire space and a prompt trajectory embedding module that embeds specific trajectories. In addition, we propose an unsupervised contrastive learning scheme for efficient trajectory similarity learning.

GRot Grid Embedding

Knowledge graph embedding is the task of learning representations of graph nodes considering both their entities and relations. Recent research has well demonstrated its success on various downstream tasks (Ji et al. 2021; Huang et al. 2019; Sun et al. 2018). The task of grid embedding is to embed space grids with respect to their locations and relations, so knowledge graph embedding naturally fits this goal. Specifically, we regard the entire space as a graph and its grids as nodes. We only consider one relation between grids, namely direct connection; one grid is considered to have direct connections to its eight immediately neighbouring grids in the space. The notable RotatE model (Sun et al. 2019b) is chosen and further modified to better adapt to our case.

The RotatE model originates from Euler's identity and represents the head h and tail t entities of an edge/relation in a graph with embeddings in the complex space. The mapping from h to t induced by relation r is realised by an element-wise rotation:

t = h ∘ r,   (3)

where h, t ∈ C^k are embeddings in the complex space of the head and tail entities, r ∈ C^k is the embedding of the relation between h and t, and ∘ is the Hadamard (element-wise) product. Each r_j is of the form e^{iΘ_{r,j}} = cos Θ_{r,j} + i sin Θ_{r,j}, and thus |r_j| = 1. Therefore, the original RotatE score function, which measures how distant two nodes h and t are relative to their relation r, is defined as:

d(h, t) = ∥h ∘ r − t∥.   (4)

In the context of knowledge graphs for grid embedding, h and t are grid embeddings in the complex space, i.e., h, t = cos Φ_j + i sin Φ_j, where Φ_j is a randomly initialized and learnable hidden embedding for grid g_j. Unlike conventional knowledge graph embedding problems, we argue that the graph of the map space additionally requires that grids neighbouring in terms of geographical location have similar embeddings. It is thus expected that h ∘ r = t ∘ r, and consequently the score function in Eq. 4 is modified as:

d_r(h, t) = ∥h ∘ r − t ∘ r∥,   (5)

where r = cos Θ + i sin Θ, parameterized by Θ, is the relation of direct connection between two grids. With this customized RotatE score function, both grids and grid relations can be well embedded. As in the original RotatE model, Θ ∈ R^k is randomly initialized and learned together with Φ_j using the self-adversarial negative sampling approach (Sun et al. 2019b) and the following loss function (Sun et al. 2019b):

L = −log σ(γ − d_r(h, t)) − \sum_{j=1}^{o} p(h'_j, r, t'_j) log σ(d_r(h'_j, t'_j) − γ),   (6)

where γ is a fixed margin and σ(·) is the sigmoid function. p(h'_j, r, t'_j) is the negative sampling probability of the j-th negative sample (h'_j, r, t'_j), i.e., a pair of grid embeddings whose grids are not directly connected. After training the GRot, we obtain the embedding of each grid as the Hadamard product between h_i and r:

˜Φ_i = h_i ∘ r, where ˜Φ_i ∈ R^k.   (7)
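To make the GRot scoring concrete, here is an illustrative NumPy sketch of Eqs. (5) and (7), assuming unit-modulus complex embeddings parameterized by phases; the choice of the L1 norm and all names are our assumptions, and the Eq. (7) product is kept complex-valued for simplicity.

```python
import numpy as np

def grot_score(phi_h, phi_t, theta):
    """Modified RotatE score of Eq. (5) for two grids (lower = closer).

    phi_h, phi_t: (k,) hidden phase vectors of the head/tail grids, so
    h = cos(phi) + i sin(phi); theta parameterizes the relation
    r = cos(theta) + i sin(theta). Illustrative re-implementation only.
    """
    h = np.exp(1j * phi_h)      # unit-modulus complex embedding
    t = np.exp(1j * phi_t)
    r = np.exp(1j * theta)
    return np.linalg.norm(h * r - t * r, ord=1)  # norm type assumed

def grid_embedding(phi, theta):
    """Eq. (7): final grid embedding as the Hadamard product h ∘ r."""
    return np.exp(1j * phi) * np.exp(1j * theta)

rng = np.random.default_rng(0)
k = 16
theta = rng.uniform(0, 2 * np.pi, k)
phi_a = rng.uniform(0, 2 * np.pi, k)
phi_b = phi_a + 0.01                      # a "neighbouring" grid
print(grot_score(phi_a, phi_b, theta))    # small for similar phases
```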
Prompt Trajectory Embedding

We now present how to use the obtained grid embedding ˜Φ_i to produce the final trajectory embedding. The grid embedding covers all grid entities regarding their locations in the entire map space, without considering trajectory information. However, our goal is to embed specific trajectories, so there is a gap between these two embedding objectives. Inspired by the recent success of prompt learning (Liu et al. 2021), we employ a prompt to instruct the trajectory embedding module to unify the two objectives.

Prompt Design. Different from the original prompt learning for NLP, which designs human-understandable prompts, we propose to learn the prompt together with the trajectory embedding network. Moreover, we propose an attentive prompt concatenation scheme that concatenates the prompt and the grid embedding (Eq. 7) with attentive weights. The prompt grid embedding P_i is formally denoted as:

P_i = [ α_1^i U ; α_2^i ˜Φ_i ],  α_1^i = U W_1,  α_2^i = ˜Φ_i W_2,   (8)

where α_*^i are the attentive concatenation coefficients, U ∈ R^{1×u} is the prompt vector, and W_1 ∈ R^{u×1} and W_2 ∈ R^{k×1} are the learnable weights used to compute α_*^i. Thus, P_i ∈ R^{u+k}. Through this design, the subsequent trajectory embedding can properly incorporate grid embedding under the instruction of a learnable attentive prompt.

Figure 1: Overall framework of KGTS. Left: a trajectory and its grid-based representation; Right: the KGTS network structure. ⟨g_1^r, ..., g_n^r⟩ is the grid sequence of a trajectory T_i; ⟨g_1^v, ..., g_n^v⟩ is the grid sequence of a negative/positive sample for T_i.

Trajectory Embedding. Besides the grid embedding, it is also critical to embed the structure (or shape) of trajectories to enhance the spatial connections of grids. We appeal to the graph convolutional network (GCN) (Kipf and Welling 2017) for this task, known for its extraordinary capability to embed spatial structures. We consider all grids in the map as nodes v_i to construct a graph G for the GCN. Two grids are regarded as adjacent if they are directly connected in any trajectory in the dataset. Specifically, the edge set of G is E = {(v_i, v_j)}, where v_i and v_j (i.e., g_i and g_j) are directly connected in some trajectory. With the graph G and the prompt grid embedding P = [P_1, P_2, ..., P_m] from Eq. 8 as node features, one GCN layer embedding is:

H = σ( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} P W ),  \tilde{A} = A + I,  \tilde{D}_{ii} = \sum_j \tilde{A}_{ij},   (9)

where A ∈ R^{m×m} is the adjacency matrix, I is an identity matrix, W ∈ R^{(u+k)×(u+k)} is the weight matrix, and σ(·) is a nonlinear activation function. The elements of the adjacency matrix A are:

A_{ij} = 1 if (v_i, v_j) ∈ E, and 0 otherwise.   (10)

Through GCN embedding over graph G, the grid connection patterns in trajectories can be well embedded. We finally proceed to embed the order of grids in trajectories. Since a trajectory is generated by a moving object, the order of its locations is a critical characteristic for matching trajectories. Among the many existing sequence embedding models, the gated recurrent unit (GRU) (Cho et al. 2014) is widely adopted due to its superiority. We thus utilize it to embed the trajectory grid order:

z = GRU(H(T) | Ψ),   (11)

where z ∈ R^d is the last step of the GRU, d is the GRU embedding size, and Ψ denotes the parameters of the GRU. We take z as the final trajectory embedding.
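A compact PyTorch sketch of the pipeline in Eqs. (8)-(11) is given below; the dimensions, the ReLU activation, and the identity placeholder for the normalized adjacency \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} are assumptions made for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PromptTrajEncoder(nn.Module):
    """Illustrative sketch of Eqs. (8)-(11): attentive prompt concat,
    one GCN layer over the grid graph, then a GRU over grid order."""

    def __init__(self, m, u=8, k=16, d=32):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(1, u))   # U
        self.w1 = nn.Linear(u, 1, bias=False)           # W_1
        self.w2 = nn.Linear(k, 1, bias=False)           # W_2
        self.gcn = nn.Linear(u + k, u + k, bias=False)  # W in Eq. (9)
        self.gru = nn.GRU(u + k, d, batch_first=True)

    def forward(self, grid_emb, a_hat, traj_grids):
        # Eq. (8): prompt grid embedding P_i for all m grids
        a1 = self.w1(self.prompt)                       # (1, 1)
        a2 = self.w2(grid_emb)                          # (m, 1)
        p = torch.cat([a1 * self.prompt.expand(grid_emb.size(0), -1),
                       a2 * grid_emb], dim=1)           # (m, u+k)
        # Eq. (9): one GCN layer; a_hat = D^-1/2 (A+I) D^-1/2
        h = torch.relu(self.gcn(a_hat @ p))             # (m, u+k)
        # Eq. (11): GRU over the trajectory's grid sequence
        _, z = self.gru(h[traj_grids].unsqueeze(0))
        return z.squeeze()                              # final embedding

m = 100
enc = PromptTrajEncoder(m)
a_hat = torch.eye(m)                                    # placeholder graph
z = enc(torch.randn(m, 16), a_hat, torch.tensor([3, 4, 5, 15]))
print(z.shape)                                          # torch.Size([32])
```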
We adopt the following InfoNCE loss (Oord, Li, and Vinyals 2018) for training:

$L_i = -\mathbb{E}_M \left[ \log \frac{e^{\mathrm{sim}(z_i, z^+_i)/\tau}}{e^{\mathrm{sim}(z_i, z^+_i)/\tau} + \sum_{j=1}^{N} e^{\mathrm{sim}(z_i, z^-_j)/\tau}} \right],$  (12)

where $L_i$ is the loss for trajectory $T_i$, $z_i$ is the embedding of $T_i$, $z^+_i$ and $z^-_j$ are embeddings of the positive samples $T^+_i$ (i.e., trajectories similar to $T_i$) and the negative samples $T^-_i$ (i.e., trajectories dissimilar to $T_i$), respectively, $\tau$ is a hyperparameter that controls the convergence speed, and $\mathrm{sim}(\cdot, \cdot)$ is the similarity computation function (e.g., Euclidean distance). There are N negative samples and M positive samples for each trajectory. As shown in Figure 1, a trajectory and its positive/negative samples are fed into the KGTS encoder to compute their similarities and thus the loss in Eq. 12. We use the other trajectories in a training batch as the negative samples of $T_i$; because the trajectories in the training set cannot properly cover enough cases where two trajectories are highly similar, we propose creative strategies (detailed in the next section) to generate positive samples from the original trajectories.
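A compact sketch of the in-batch InfoNCE objective of Eq. (12), assuming cosine similarity (the measure KGTS ultimately adopts) and one positive per trajectory; all names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z, z_pos, tau=0.05):
    """Batched InfoNCE of Eq. (12): row i of `z` is a trajectory embedding,
    row i of `z_pos` its positive sample; the other rows of `z_pos` act
    as in-batch negatives."""
    z = F.normalize(z, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    logits = z @ z_pos.T / tau            # (B, B) pairwise similarities
    labels = torch.arange(z.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```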
Positive Sample Generation
This task is to create trajectories similar to a base trajectory. Existing studies only use simple operations, such as randomly dropping grids from the base trajectory (Li et al. 2018), but the positive samples generated by such operations are nearly identical to the base trajectory, narrowing down the possible cases of similar trajectories. For example, two trajectories are actually similar when they have identical structures but lie on adjacent paths; this case cannot be simulated or identified by existing solutions. Before introducing our new strategies, we first present the basic operations used to implement them: copy, which duplicates a grid at the same position; add, which creates a new grid at one of the eight immediately neighbouring positions of a base grid; delete, which deletes a grid; and move, which moves a grid to one of its eight immediate neighbours. With these basic operations (denoted in italics), we now present the proposed positive sample generation strategies; exemplar trajectories generated by each strategy are shown in Figure 2, and a code sketch of the operations follows this list.

• Whole trajectory strategy. We use the whole trajectory as the base to generate positive samples. There are three steps: (1) we first copy base grids to create a new grid sequence (purple grids in Figure 2a) that completely overlaps the original one (blue grids in Figure 2a); (2) we then delete a random number of consecutive grids (red grids in the upper left corner of Figure 2a) on one end of the new grid sequence and add the same number of consecutive grids (yellow grids in the lower right corner of Figure 2a) on the other end; (3) finally, we randomly move (grids in the dashed rectangle in Figure 2a) and delete several grids (red grids in the middle of the trajectory in Figure 2a). This strategy generates positive samples with minimal dissimilarity to the original trajectory.

• Partial trajectory strategy-end. In addition to generating positive samples that are similar to the entire base trajectory, we also generate positive samples that are similar to only part of it. As shown in Figure 2b, a base sequence is first divided into three equal parts; we then apply the whole trajectory strategy to one of the two end parts to generate positive samples, but without the delete operations of the second step, so that the positive samples keep a length comparable to the base trajectory.

• Partial trajectory strategy-mid. To create positive samples that lie on adjacent paths of the base trajectory, we apply the move operation to the grids of the middle one of the three aliquots of the base trajectory (Figure 2c). Then, we apply the whole trajectory strategy without the delete operation in the second step to the new sequence of grids to enhance the diversity of positive samples.
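As referenced above, here is a minimal sketch of the basic operations and the whole trajectory strategy on grid sequences. The perturbation count `k` and the helper names are illustrative assumptions; the real strategies randomize the amounts as described.

```python
import random

# A grid is an (x, y) index pair; a trajectory is a list of grids.
NEIGHBORS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]

def move(traj, i):
    """Move grid i to one of its eight immediate neighbours."""
    dx, dy = random.choice(NEIGHBORS)
    traj[i] = (traj[i][0] + dx, traj[i][1] + dy)

def whole_trajectory_sample(base, k=3):
    """Whole trajectory strategy: copy the base, delete k grids at one
    end, add k grids at the other end, then perturb a few grids."""
    traj = list(base)                              # copy
    traj = traj[k:]                                # delete at one end
    for _ in range(k):                             # add at the other end
        dx, dy = random.choice(NEIGHBORS)
        traj.append((traj[-1][0] + dx, traj[-1][1] + dy))
    for i in random.sample(range(len(traj)), k):   # move random grids
        move(traj, i)
    return traj
```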
Overall Training Process
There are two modules in KGTS, trained in two respective unsupervised learning phases. The first module is the GRot for grid embedding, with trainable hidden vectors $\{\Phi_i\}_{i=1}^m$ and relation embedding parameter $\Theta$; these parameters are trained using the loss function in Eq. 6 and are then fixed to produce the grid embeddings $\{\tilde{\Phi}_i\}_{i=1}^m$ for the subsequent trajectory embedding. Next, we train the prompt trajectory embedding module using the loss function in Eq. 12 to learn the prompt vector U, the weight matrices $W_*$, and the GRU parameters $\Psi$. After fully training KGTS, the embedding of a trajectory is obtained through Eq. 11. Note that the trajectory embedding can be used with different similarity measures; we choose cosine similarity in this work.

Experiments
Experimental Setup
Dataset. We adopt two popular large benchmark datasets for trajectory analysis in our experiments, namely GeoLife and Porto. (1) The GeoLife dataset (Zheng, Xie, and Ma 2010) contains trajectories recorded over five years from 182 users in the city of Beijing, China. The time interval between two successive points is about five seconds. We discard trajectories under 2 km and cut the trajectories above 5 km, as done for the Porto dataset. (2) The Porto dataset (Moreira-Matias et al. 2016) contains trajectories recorded over one year from 442 taxis in the city of Porto, Portugal. The time interval between two consecutive points is 15 seconds. Following conventional research (Yao et al. 2019), we discard trajectories under 2 km and randomly cut the trajectories above 5 km into trajectories between 2 km and 5 km long. For both datasets, we randomly choose trajectories to keep the training, validation, and test ratio at approximately 1:1:1.

Implementation Details. We divide the geographical space into 1000 × 1300 grids for the GeoLife dataset and 140 × 280 grids for the Porto dataset; the latitude and longitude intervals are both 0.001. We first train the GRot module using the loss function in Eq. 6 to obtain the grid embedding, with the margin $\gamma$ in Eq. 6 set to 12. We then train the trajectory embedding module via unsupervised contrastive learning with the loss function in Eq. 12, with the hyperparameter $\tau$ set to 0.05. Both phases are trained with the Adam optimizer and a learning rate of 0.0001. All experiments are conducted on a GeForce RTX 3090 GPU.

Evaluation Metrics. Following existing studies (Han et al. 2021), we use the Top-k hitting ratio to measure the performance of trajectory similarity computation:

$HR@K = \frac{1}{|D_t|} \sum_{\tau \in D_t} \frac{|L^P_\tau@K \cap L^R_\tau|}{|L^R_\tau|}$  (13)

where $\tau$ represents a specific trajectory; $D_t$ is the test set; $L^R_\tau$ is the set of the Top-k similar trajectories in the training set for the given trajectory $\tau$; and $L^P_\tau@K$ is the set of the Top-k similar trajectories predicted by our approach for $\tau$.

Baselines. We compare our method with the following approaches: (1) SRN (Pei, Tax, and van der Maaten 2016) uses a Siamese recurrent network for sequence similarity computation. (2) t2vec (Li et al. 2018) is also an unsupervised approach, which adopts a seq2seq model and a customized positive sample generation method for unsupervised training. (3) NeuTraj (Yao et al. 2019) uses a spatial attention memory unit to train the similarity computation model. (4) T3S (Yang et al. 2021) jointly considers both coordinates and grid entities to train the trajectory embedding model. (5) GTS (Han et al. 2021) uses a modified skip-gram module and a GNN for improved POI embedding; an LSTM is also used to embed the order of trajectory points. (6) CL-TSim (Deng et al. 2022) also adopts the skip-gram module and an LSTM to gather grid sequence information, using contrastive learning to draw the representations of similar trajectories together. (7) TMN (Yang et al. 2022b) matches points across trajectories, considering not only individual trajectory sequence information but also the interaction between trajectories, to reach the state of the art.

Table 1: Comparison results on the GeoLife and Porto datasets. HR@k denotes the Top-k hitting rate.

Dataset   Method        HR@1    HR@5    HR@10
GeoLife   SRN           0.3363  0.4257  0.4624
          t2vec         0.363   0.3761  0.3184
          NeuTraj       0.4113  0.53    0.5823
          T3S           0.4273  0.5362  0.5843
          GTS           0.4327  0.4962  0.5102
          CL-TSim       0.3233  0.3448  0.3027
          TMN           0.4290  0.5092  0.5517
          KGTS (Ours)   0.5123  0.579   0.5942
Porto     SRN           0.3503  0.5079  0.5606
          t2vec         0.4105  0.5056  0.5138
          NeuTraj       0.4103  0.5465  0.591
          T3S           0.3923  0.5506  0.6109
          GTS           0.3987  0.5269  0.578
          CL-TSim       0.2973  0.3347  0.3379
          TMN           0.401   0.5263  0.586
          KGTS (Ours)   0.5244  0.6277  0.6542

Experimental Results
Overall Performance. The comparison results are summarized in Table 1, with the best results shown in bold and the second best underlined. Our KGTS remarkably outperforms all baselines. We note that KGTS outperforms the baselines more significantly at smaller Top-k hitting rates. This is because our positive sample generation strategies for contrastive learning can properly cover enough cases of highly similar trajectories, while existing approaches can only learn from datasets that do not contain diverse cases of similar trajectories. Comparing against the other models, we find that although some baselines (i.e., SRN, NeuTraj, T3S, and TMN) use powerful modules to capture trajectory structures, they lack specific modules for trajectory point embedding. By contrast, our KGTS uses the GRot to well embed the trajectory grids and a prompt trajectory embedding to properly incorporate the grid embedding into the subsequent trajectory structure and grid order embedding. Similarly, t2vec and CL-TSim use the same unsupervised learning scheme as KGTS; however, their positive sample generation strategies only consider the mostly-overlapping case. This is quite limited, as it is common that trajectories are only partially similar in real datasets.
In contrast, we suggest three positive sample generation strategies that cover more general cases of similar trajectories. Besides the better performance of trajectory similarity computation, our method uses zero time to prepare supervision labels, while supervised learning baselines take around one hour per 5,000 trajectories, with a cost that grows quadratically in the number and length of trajectories.

Ablation Study. We perform an ablation study to justify our design choices for KGTS. In particular, we compare KGTS with the following variants: (1) w/o GRot: it does not use the proposed GRot model for grid embedding. (2) w/ original RotatE: it utilizes the original RotatE instead of our proposed modified one, to study the latter's effectiveness. (3) w/o prompt: it does not employ a prompt to instruct the trajectory embedding to incorporate the grid embedding; the grid embedding $\tilde{\Phi}_i$ is directly input into the trajectory embedding module. (4) w/o GCN: it does not apply the GCN to trajectory structure embedding, and the prompt grid embedding is directly fed into the GRU. (5) w/o GRU: it does not consider the order of points in a trajectory; the embeddings from the GCN are summed up to form a single trajectory embedding.

Table 2: Ablation study on the Porto dataset.

Method               HR@1    HR@5    HR@10
w/o GRot             0.3243  0.4211  0.4468
w/ original RotatE   0.5067  0.6151  0.6445
w/o prompt           0.5096  0.6227  0.6514
w/o GCN              0.4853  0.5951  0.6304
w/o GRU              0.152   0.2231  0.2553
KGTS (ours)          0.5244  0.6277  0.6542

Figure 4: Hyperparameter sensitivity analysis: (a) dimension k; (b) dimension u; (c) dimension d.

Table 2 reports the ablation study results. We make the following observations: (1) The w/o GRot variant lacks the modelling of inter-grid relationships, so its effectiveness is relatively low. (2) KGTS is better than the w/ original RotatE variant, demonstrating the superior capability of the GRot. (3) Prompt learning was originally proposed to adapt a pre-trained network to downstream tasks with the help of a prompt; we adopt this idea to better utilize the grid embedding for the final trajectory embedding, and the result that w/o prompt is inferior to KGTS justifies our design. (4) The w/o GCN variant shows superior performance to the w/o GRot variant. This suggests that properly embedding the grids in the space is more important; our GRot module for grid embedding is thus an essential innovation. (5) The w/o GRU variant is the worst variant, with considerably lower hitting rates. This supports two conclusions: the trajectory point order information is critical to the trajectory embedding, and simply adding up the embeddings of each grid in a trajectory into a single embedding is far from enough to produce a favourable trajectory embedding.

Case Study. We randomly choose one query trajectory from the test set and find its similar trajectories in the training set using our KGTS. We visualize the Top-3 trajectories obtained by KGTS and the ground-truth Top-3 similar trajectories in Figure 3. For the query trajectory T3111 (Figure 3), our approach successfully finds all the Top-3 similar trajectories, although there is a slight deviation between the order of the three trajectories and the ground truth.

Figure 3: Visualization of the case studies. The red trajectories are the query trajectories; the blue trajectories are the Top-3 ground-truth similar trajectories; the black trajectories are the Top-3 similar trajectories found by KGTS.
This still indicates that, under unsupervised conditions, the similarity between trajectories can be accurately computed.

Hyperparameter Sensitivity Analysis. Three hyperparameters influence KGTS performance the most: the grid embedding dimension k, the prompt dimension u, and the trajectory embedding dimension d. We show the parameter sensitivity analysis on the Porto dataset in Figure 4. We observe that the hitting rate increases as the grid embedding dimension k grows from 32 to 128 and decreases at 256: a larger embedding dimension can embed more information, while a too-large value causes over-fitting and thus poor generalisation. The hitting rate rises sharply as the prompt dimension u doubles from 16 to 32 and decreases slightly afterwards, which manifests the importance of the prompt scheme for trajectory embedding. For the trajectory embedding dimension d, the hitting rate increases steadily as d climbs from 64 to 512; we do not try larger dimensions due to rapidly increasing computational costs. The best trajectory embedding dimension (512) is larger than the best grid embedding dimension (128), because the trajectory embedding needs to encompass richer information than the grid embedding.

Conclusion
In this paper, we target the trajectory similarity computation task and mitigate the limitations of current deep learning solutions. We propose KGTS, which has a modified RotatE module for grid embedding and a prompt trajectory embedding module for the final trajectory embedding. Furthermore, we present three novel positive sample generation strategies for unsupervised contrastive trajectory embedding learning, which cover extensive cases of highly similar trajectories. The proposed approach thus manifests improved generality and does not require the costly preprocessing for generating supervision labels required by existing deep learning solutions. Extensive experiments on two benchmark datasets demonstrate the effectiveness of each KGTS component.

Acknowledgments
This work was supported by the NSFC (U2001212, U22B2037, U21B2046, 62032001, and 61932004).

References
Atev, S.; Miller, G.; and Papanikolopoulos, N. P. 2010. Clustering of Vehicle Trajectories. IEEE Transactions on Intelligent Transportation Systems, 11(3): 647–657.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33: 1877–1901.
Chang, S. Y.; Wu, H.-C.; Kuan, Y.-C.; and Wu, Y. 2023. Tensor Levenberg-Marquardt Algorithm for Multi-Relational Traffic Prediction. IEEE Transactions on Vehicular Technology, 1–17.
Chen, L.; and Ng, R. T. 2004. On The Marriage of Lp-norms and Edit Distance. In (e)Proceedings of the Thirtieth International Conference on Very Large Data Bases, VLDB 2004, Toronto, Canada, August 31 - September 3 2004, 792–803. Morgan Kaufmann.
Chen, L.; Özsu, M. T.; and Oria, V. 2005. Robust and Fast Similarity Search for Moving Object Trajectories. In Proceedings of the ACM SIGMOD International Conference on Management of Data, Baltimore, Maryland, USA, June 14-16, 2005, 491–502. ACM.
Cho, K.; van Merrienboer, B.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, 1724–1734. ACL.
Deng, L.; Zhao, Y.; Fu, Z.; Sun, H.; Liu, S.; and Zheng, K. 2022. Efficient Trajectory Similarity Computation with Contrastive Learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, CIKM '22, 365–374. New York, NY, USA: Association for Computing Machinery. ISBN 9781450392365.
Evans, M. R.; Oliver, D.; Shekhar, S.; and Harvey, F. 2013. Fast and exact network trajectory similarity computation: a case-study on bicycle corridor planning. In Proceedings of the 2nd ACM SIGKDD International Workshop on Urban Computing, UrbComp@KDD 2013, Chicago, Illinois, USA, August 11, 2013, 9:1–9:8. ACM.
Gao, T.; Fisch, A.; and Chen, D. 2021. Making Pre-trained Language Models Better Few-shot Learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP, 3816–3830. Association for Computational Linguistics.
Gao, T.; Yao, X.; and Chen, D. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, 6894–6910. Association for Computational Linguistics.
Han, P.; Wang, J.; Yao, D.; Shang, S.; and Zhang, X. 2021. A Graph-based Approach for Trajectory Similarity Computation in Spatial Networks. In KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021, 556–564. ACM.
Hjelm, R. D.; Fedorov, A.; Lavoie-Marchildon, S.; Grewal, K.; Bachman, P.; Trischler, A.; and Bengio, Y. 2019. Learning deep representations by mutual information estimation and maximization.
Huang, X.; Zhang, J.; Li, D.; and Li, P. 2019. Knowledge Graph Embedding Based Question Answering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM 2019, Melbourne, VIC, Australia, February 11-15, 2019, 105–113. ACM.
Ji, S.; Pan, S.; Cambria, E.; Marttinen, P.; and Philip, S. Y. 2021. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 33(2): 494–514.
Jiang, Z.; Xu, F. F.; Araki, J.; and Neubig, G. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8: 423–438.
Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks.
LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature, 521(7553): 436–444.
Li, F.; Feng, J.; Yan, H.; Jin, G.; Yang, F.; Sun, F.; Jin, D.; and Li, Y. 2023. Dynamic Graph Convolutional Recurrent Network for Traffic Prediction: Benchmark and Solution. ACM Trans. Knowl. Discov. Data, 17(1).
Li, X.; Zhao, K.; Cong, G.; Jensen, C. S.; and Wei, W. 2018. Deep Representation Learning for Trajectory Similarity Computation. In 34th IEEE International Conference on Data Engineering, ICDE 2018, Paris, France, April 16-19, 2018, 617–628. IEEE Computer Society.
Li, Z.; Han, J.; Ji, M.; Tang, L.-A.; Yu, Y.; Ding, B.; Lee, J.-G.; and Kays, R. 2011. Movemine: Mining Moving Object Data for Discovery of Animal Movement Patterns. ACM Transactions on Intelligent Systems and Technology (TIST), 2(4): 1–32.
Lin, J.; Li, Z.; Li, Z.; Bai, L.; Zhao, R.; and Zhang, C. 2023. Dynamic Causal Graph Convolutional Network for Traffic Prediction. arXiv:2306.07019.
Liu, P.; Yuan, W.; Fu, J.; Jiang, Z.; Hayashi, H.; and Neubig, G. 2021. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing.
Moreira-Matias, L.; Gama, J.; Ferreira, M.; Mendes-Moreira, J.; and Damas, L. 2016. Time-evolving O-D matrix estimation using high-speed GPS data streams. Expert Systems and Applications, 44: 275–288.
Oord, A. v. d.; Li, Y.; and Vinyals, O. 2018. Representation Learning with Contrastive Predictive Coding.
Pei, W.; Tax, D. M.; and van der Maaten, L. 2016. Modeling Time Series Similarity with Siamese Recurrent Networks.
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P. J.; et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140): 1–67.
Ranu, S.; P, D.; Telang, A. D.; Deshpande, P.; and Raghavan, S. 2015. Indexing and matching trajectories under inconsistent sampling rates. In 31st IEEE International Conference on Data Engineering, ICDE 2015, Seoul, South Korea, April 13-17, 2015, 999–1010. IEEE Computer Society.
Rethmeier, N.; and Augenstein, I. 2021. A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned & Perspectives.
Song, R.; Sun, W.; Zheng, B.; and Zheng, Y. 2014. PRESS: A Novel Framework of Trajectory Compression in Road Networks. Proceedings of the VLDB Endowment: 40th VLDB, 7: 661–672.
Sun, C.; Baradel, F.; Murphy, K.; and Schmid, C. 2019a. Learning video representations using contrastive bidirectional transformer.
Sun, Z.; Deng, Z.; Nie, J.; and Tang, J. 2019b. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space.
Sun, Z.; Yang, J.; Zhang, J.; Bozzon, A.; Huang, L.; and Xu, C. 2018. Recurrent knowledge graph embedding for effective recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys 2018, Vancouver, BC, Canada, October 2-7, 2018, 297–305. ACM.
van Kreveld, M. J.; and Luo, J. 2007. The definition and computation of trajectory and subtrajectory similarity. In 15th ACM International Symposium on Geographic Information Systems, ACM-GIS 2007, November 7-9, 2007, Seattle, Washington, USA, Proceedings, 44. ACM.
Vlachos, M.; Gunopulos, D.; and Kollios, G. 2002. Discovering Similar Multidimensional Trajectories. In Proceedings of the 18th International Conference on Data Engineering, San Jose, CA, USA, February 26 - March 1, 2002, 673–684. IEEE Computer Society.
Yang, J.; Li, C.; Zhang, P.; Xiao, B.; Liu, C.; Yuan, L.; and Gao, J. 2022a. Unified Contrastive Learning in Image-Text-Label Space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 19141–19151. IEEE.
Yang, P.; Wang, H.; Lian, D.; Zhang, Y.; Qin, L.; and Zhang, W. 2022b. TMN: Trajectory Matching Networks for Predicting Similarity. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 1700–1713.
Yang, P.; Wang, H.; Zhang, Y.; Qin, L.; Zhang, W.; and Lin, X. 2021. T3S: Effective Representation Learning for Trajectory Similarity Computation. In 37th IEEE International Conference on Data Engineering, ICDE 2021, Chania, Greece, April 19-22, 2021, 2183–2188. IEEE.
Yao, D.; Cong, G.; Zhang, C.; and Bi, J. 2019.
Computing Trajectory Similarity in Linear Time: A Generic Seed-Guided Neural Metric Learning Approach. In 35th IEEE International Conference on Data Engineering, ICDE 2019, Macao, China, April 8-11, 2019, 1358–1369. IEEE.
Yi, B.; Jagadish, H. V.; and Faloutsos, C. 1998. Efficient Retrieval of Similar Time Sequences Under Time Warping. In Proceedings of the Fourteenth International Conference on Data Engineering, Orlando, Florida, USA, February 23-27, 1998, 201–208. IEEE Computer Society.
Yu, J.; Yin, H.; Xia, X.; Chen, T.; Cui, L.; and Nguyen, Q. V. H. 2022. Are Graph Augmentations Necessary?: Simple Graph Contrastive Learning for Recommendation. In SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11-15, 2022, 1294–1303. ACM.
Zhang, D.; Li, N.; Zhou, Z.; Chen, C.; Sun, L.; and Li, S. 2011. iBAT: detecting anomalous taxi trajectories from GPS traces. In UbiComp 2011: Ubiquitous Computing, 13th International Conference, UbiComp 2011, Beijing, China, September 17-21, 2011, Proceedings, 99–108. ACM.
Zheng, Y.; Xie, X.; and Ma, W. 2010. GeoLife: A Collaborative Social Networking Service among User, Location, and Trajectory. IEEE Data Engineering Bulletin, 33(2): 32–39.
Learning to Reweight for Graph Neural Network
Zhengyu Chen1,2, Teng Xiao3, Kun Kuang1,2, Zheqi Lv1,2, Min Zhang1,2, Jinluan Yang1,2, Chengqiang Lu4, Hongxia Yang4, and Fei Wu1,2
1 Institute of Artificial Intelligence, Zhejiang University
2 Shanghai Institute for Advanced Study, Zhejiang University
3 The Pennsylvania State University
4 DAMO Academy, Alibaba Group
{kunkuang, wufei}@zju.edu.cn, [email protected]
*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Graph Neural Networks (GNNs) show promising results on graph tasks. However, the generalization ability of existing GNNs degrades when there are distribution shifts between testing and training graph data. The cardinal cause of this severe degradation is that GNNs are architected under i.i.d. assumptions: in such a setting, GNNs are inclined to leverage subtle statistical correlations in the training set for prediction, even when those correlations are spurious. In this paper, we study the generalization ability of GNNs in out-of-distribution (OOD) settings. To solve this problem, we propose Learning to Reweight for Generalizable Graph Neural Network (L2R-GNN), which enhances the generalization ability in order to achieve satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs. We propose a novel nonlinear graph decorrelation method, which substantially improves out-of-distribution generalization ability and compares favorably to previous methods in restraining the over-reduced sample size. The variables of the graph representation are clustered based on the stability of their correlations, and the graph decorrelation method learns weights to remove correlations between the variables of different clusters rather than between any two variables. Besides, we introduce an effective stochastic algorithm based on bi-level optimization for the L2R-GNN framework, which facilitates simultaneously learning the optimal weights and GNN parameters while avoiding the over-fitting problem. Experimental results show that L2R-GNN greatly outperforms baselines on various graph prediction benchmarks under distribution shifts.

Introduction
Graph Neural Networks (GNNs) have achieved state-of-the-art performance on various graph tasks (Kipf and Welling 2016; Veličković et al. 2017; Xu et al. 2018a), but they assume that the training and testing data are independent and identically distributed (the i.i.d. assumption), which is not always the case in real-world applications (Chen, Xu, and Wang 2021; Chen and Wang 2021; Chen, Gai, and Wang 2019; Chen, Wang, and Yin 2021; Gai et al. 2019). This leads to inadequate out-of-distribution (OOD) generalization ability, causing significant performance degradation under distribution shifts (Hu et al. 2020; Wu et al. 2018; Chen et al. 2021).

GNNs' inadequate out-of-distribution generalization is caused by spurious correlations between irrelevant features and category labels in the training data (Xiao, Chen, and Wang 2022; Chen et al. 2023b). Such a correlation varies across distributions and is nevertheless exploited by GNNs for inference. An example of spurious correlation is shown in the graph classification task for the "wheel" motif in Figure 1: in the biased training dataset, most positive graphs have only "star" motifs added, leading to a strong correlation between the structural features of "wheel" motifs and "star" motifs.
This unexpected correlation leads the model to associate the structural features of "star" motifs with the label "wheel". The GCN model exploits this spurious correlation and tends to use "star" motifs for prediction, making false predictions on negative graphs that contain a "star" motif.

To solve the spurious correlation problem caused by the discrepancy between training and testing distributions, earlier research attempts to train a model with a stability guarantee through variable decorrelation with sample reweighting, taking model misspecification into consideration (Shen et al. 2020; Kuang et al. 2020; Lv et al. 2023b; Zhang et al. 2023; Lv et al. 2023a). However, the majority of these approaches are proposed for linear settings. GNNs combine heterogeneous information from node features and graph topological structures, resulting in intricate and unrecognized non-linear relationships across representations (Fan et al. 2021; Li et al. 2021); such non-linear dependencies on graph data cannot be removed by linear sample reweighting approaches. Recent studies propose non-linear decorrelation methods for graph tasks (Fan et al. 2021; Li et al. 2021). They attempt to eliminate the dependencies between all variables of the graph representation through a set of learned sample weights. However, such a demanding aim might result in an excessively small effective sample size, which hampers the generalization ability of GNNs (Martino, Elvira, and Louzada 2017; Llorente et al. 2022; Zhang et al. 2022). Moreover, these non-linear decorrelation methods on graph data suffer from over-fitting due to the additional hyperparameters, leading to difficulties in achieving convergence.

We suggest that not all correlations should be eliminated, in contrast to prior techniques (Fan et al. 2021; Li et al. 2021), which aggressively decorrelate all connections across graph representations. Such an aggressive objective may result in an overly-reduced sample size (Martino, Elvira, and Louzada 2017; Llorente et al. 2022), which hampers the generalization ability of GNNs.

Figure 1: An illustration of a spurious correlation in the "wheel" motif graph classification task.

Taking the graph classification task in Figure 1 as an example: although various variables may be used to characterize the "wheel" motif's graph structure and the node features, they function as an integrated whole, and these relationships remain stable across datasets with varied or unknown distribution shifts. In contrast, significant correlations arise between the variables of the "wheel" and the "star" due to the selection bias in the biased training dataset. Such "spurious" relationships, however, do not hold in OOD datasets. Therefore, to obtain a precise graph model, we only need to eliminate the spurious correlations between the two sets of variables (the "wheel" and the "star").

In this paper, we propose a framework called L2R-GNN to solve the problem of learning out-of-distribution graph representations. Our framework includes a nonlinear graph decorrelation method that reduces correlations between variables of different clusters.
This method is more effective than previous approaches at controlling the overly-reduced sample size and can significantly increase the ability to generalize outside of the training distribution. We group the variables of graph representations based on the stability of their correlations and learn a set of weights to remove spurious correlations. By doing so, graph neural networks can focus more on the true relationship between graph representations and their ground-truth labels. We also introduce a stochastic approach based on bi-level optimization for the L2R-GNN framework, which allows the simultaneous learning of the optimal GNN parameters and weights while avoiding the over-fitting problem. Our experimental results show that L2R-GNN outperforms baselines on different graph tasks under distribution shifts. Our contributions are as follows: 1) We propose a novel framework that can learn effective graph representations under complex distribution shifts and simultaneously achieve better performance. 2) We propose a graph decorrelation method that is more effective than prior approaches at controlling the overly-reduced sample size and increasing out-of-distribution generalization ability. 3) We propose an effective stochastic algorithm based on bi-level optimization for the L2R-GNN framework, which enables simultaneously learning the optimal weights and GNN parameters while avoiding the over-fitting issue. 4) Our extensive empirical results on several graph benchmarks under distribution shifts show that L2R-GNN greatly outperforms baselines.

Related Works
Generalizable Graph Neural Network. Most GNN methods are proposed under the i.i.d. hypothesis, which states that the training and testing sets are independently sampled from the same distribution (Kipf and Welling 2016; Veličković et al. 2017). However, in practice, it can be challenging to satisfy this ideal hypothesis. Recent research (Fan et al. 2021; Li et al. 2021) studies how well GNNs generalize outside the training distribution. Several studies concentrate on size generalization to make GNNs function effectively on testing graphs whose size distribution differs from that of the training graphs. SL-DSGCN (Tang et al. 2020) reduces the degree-related distribution shifts of GCNs for the OOD node classification task. ImGAGN (Qu et al. 2021) produces a set of synthetic minority nodes to balance class distribution shifts. BA-GNN (Chen, Xiao, and Kuang 2022) learns node representations that are invariant across various distributions for invariant prediction. EERM (Wu et al. 2022) helps GNNs take advantage of invariance principles for prediction on node-level problems. For the OOD graph classification task, some works (Fan et al. 2021; Li et al. 2021) improve the generalization capability of GNNs via non-linear decorrelation methods. However, such an aggressive target might result in an excessively small sample size (Martino, Elvira, and Louzada 2017; Llorente et al. 2022), which hampers the generalization ability of GNNs. Moreover, these non-linear decorrelation methods on graph data suffer from over-fitting due to their additional hyper-parameters and are hard to converge.

The Bi-level Optimization. Many works use bi-level optimization (Maclaurin, Duvenaud, and Adams 2015; Wang et al. 2020; Chen, Chen, and Wang 2021; Gai, Chen, and Wang 2021; Chen et al.
2023a) to improve the performance of GNNs. These works optimize a higher-level learning objective subject to lower-level learning. To search GNN architectures, several studies (Xiao et al. 2022; Xiao, Chen, and Wang 2023; Jiang et al. 2022) optimize a bi-level goal using reinforcement learning. Furthermore, Xiao et al. (2021) present bi-level programming with variational inference to provide a framework for learning propagation methods. Liu et al. (2020) attempt to obtain a parameter initialization that can swiftly adapt to unfamiliar workloads by utilizing gradient information from the bi-level optimization. Our study focuses on the generalization ability of GNNs in graph-level tasks, and we use bi-level programming to provide a framework for learning graph weights while avoiding over-fitting.

Method
Problem Formulation. Given the training graphs $G_{train} = \{G_n, Y_n\}_{n=1}^N$, where $G_n$ is the n-th graph and $Y_n$ is the corresponding label, and a testing graph set $G_{test}$ that is unobserved in the training stage, the task is to learn a graph neural network $\mathrm{GNN}(\theta): G \to Z$ and a classifier $R: Z \to Y$ to predict the labels of the testing graphs $G_{test}$ under the distribution shift $P(G_{train}) \neq P(G_{test})$. Denote the graph representations $Z = \mathrm{GNN}(\theta, G)$, $Z \subset \mathbb{R}^{N \times d}$, where $Z_{i,j}$ is the entry in the i-th row and j-th column of Z.

Graph Reweighting with RFF. Similar to previous works (Fan et al. 2021; Li et al. 2021), we decorrelate graph representations, removing statistical relationships between relevant and irrelevant graph representations. A relevant graph representation is invariant across the unknown testing graphs, while an irrelevant one varies. We remove the statistical dependency between all dimensions of the representation Z, defined as:

$Z_{:,i} \perp\!\!\!\perp Z_{:,j}, \quad \forall i, j \in [1, d], \; i \neq j.$

Hypothesis-testing statistics evaluate the independence between random variables. We use the Hilbert-Schmidt Independence Criterion (HSIC) to supervise feature decorrelation: if the product kernel $k_{Z_{:,i}} k_{Z_{:,j}}$ is characteristic, we have $\mathrm{HSIC}(Z_{:,i}, Z_{:,j}) = 0 \Leftrightarrow Z_{:,i} \perp\!\!\!\perp Z_{:,j}$. However, HSIC is not suitable for training deep models on large datasets due to its high computational cost, so we use the Frobenius norm of a partial cross-covariance matrix as the independence testing statistic in the graph representation space:

$\hat{\Sigma}_{Z_{:,i}, Z_{:,j}} = \frac{1}{N-1} \sum_{n=1}^{N} \left[ \left( u(Z_{n,i}) - \frac{1}{N} \sum_{m=1}^{N} u(Z_{m,i}) \right)^{\top} \cdot \left( v(Z_{n,j}) - \frac{1}{N} \sum_{m=1}^{N} v(Z_{m,j}) \right) \right],$

where $u, v \in \mathcal{H}_{RFF}$ and $\mathcal{H}_{RFF}$ denotes the space of random Fourier feature functions. Using $n_u = n_v = 5$ sampled functions is reliable enough to assess the independence of random variables in real-world situations. Using this independence criterion, we apply graph reweighting to remove inter-variable dependencies in the graph representation, with RFF used to assess overall independence. The learnable graph weight for the n-th graph $G_n$ in the training set is $w_n \in \mathbb{R}$, collected as $W = \{w_n\}_{n=1}^N$. The partial cross-covariance matrix after reweighting is:

$\hat{\Sigma}^{W}_{Z_{:,i}, Z_{:,j}} = \frac{1}{N-1} \sum_{n=1}^{N} \left[ \left( w_n u(Z_{n,i}) - \frac{1}{N} \sum_{m=1}^{N} w_m u(Z_{m,i}) \right)^{\top} \cdot \left( w_n v(Z_{n,j}) - \frac{1}{N} \sum_{m=1}^{N} w_m v(Z_{m,j}) \right) \right].$  (1)

By reducing the squared Frobenius norm of the partial cross-covariance matrix, $\| \hat{\Sigma}^{W}_{Z_{:,i}, Z_{:,j}} \|_F^2$, the optimal graph weights $W^*$ reduce the inter-variable dependencies in the graph representation:

$W^* = \arg\min_W \sum_{1 \le i < j \le d} \| \hat{\Sigma}^{W}_{Z_{:,i}, Z_{:,j}} \|_F^2.$  (2)

Minimizing Eq. (2) directly eliminates the correlations between any two variables of the graph representation; such objectives are widely used in recent works (Fan et al. 2021; Li et al. 2021). However, this aggressive target hampers GNNs' generalization capacity due to an excessively decreased effective sample size.
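A minimal sketch of the reweighted RFF independence penalty of Eqs. (1)-(2) for one pair of representation variables. We assume $n_u = n_v = 5$ sampled Fourier functions as stated above; redrawing w, b on every call is a simplification, and all names are illustrative.

```python
import torch

def rff(x, n_fourier=5):
    """Random Fourier features of one variable x of shape (N,);
    in practice w, b would be sampled once and reused."""
    w = torch.randn(n_fourier)
    b = 2 * torch.pi * torch.rand(n_fourier)
    return torch.sqrt(torch.tensor(2.0)) * torch.cos(x[:, None] * w + b)

def weighted_cov_penalty(zi, zj, weights):
    """Squared Frobenius norm of the reweighted partial cross-covariance
    between two representation variables (Eq. 1)."""
    u = rff(zi) * weights[:, None]       # w_n u(Z_{n,i})
    v = rff(zj) * weights[:, None]       # w_n v(Z_{n,j})
    u = u - u.mean(0, keepdim=True)      # subtract weighted mean
    v = v - v.mean(0, keepdim=True)
    cov = u.T @ v / (len(zi) - 1)        # (n_fourier, n_fourier)
    return (cov ** 2).sum()              # ||Sigma^W||_F^2
```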
Graph Decorrelation. We contend that not all correlations need to be eliminated, in contrast to prior approaches (Fan et al. 2021; Li et al. 2021), which aggressively decorrelate all dependencies between graph representations. As an example, consider the graph classification problem in Figure 1. Although the node features and the graph structure of the "wheel" motif may be represented by several variables, these variables function as a single unit and exhibit consistent correlations across various unknown testing graphs. In the biased training dataset, significant correlations arise between the variables of the "wheel" and of the "star" due to the selection bias; such "spurious" correlations, however, do not transfer to the unknown testing graphs. To obtain a correct graph model in such a situation, we only need to eliminate the erroneous connection between the two sets of variables (the "wheel" and the "star").

Specifically, we propose a novel nonlinear graph decorrelation method, which is more effective than previous approaches at limiting the overly-reduced sample size and can significantly increase out-of-distribution generalization ability. The method learns a set of weights that reduce correlations between the variables of different clusters rather than between any two variables, where the variables of the graph representation are grouped by the stability of their correlations. By eliminating erroneous correlations, the learnt weights enable graph neural networks to focus more on the true relationship between the learned discriminative graph representations and their ground-truth labels.

To express the invariance of a pair of variables through the variance of their correlation, we define the dissimilarity of two variables of the graph representation Z as:

$\mathrm{Dis}(Z_{:,i}, Z_{:,j}) = \sqrt{ \frac{1}{N-1} \sum_{l=1}^{N} \left( \mathrm{Corr}(Z_{l,i}, Z_{l,j}) - \mathrm{AveCorr}(\hat{Z}_{:,i}, \hat{Z}_{:,j}) \right)^2 },$  (3)

where $\mathrm{AveCorr}(\hat{Z}_{:,i}, \hat{Z}_{:,j})$ denotes the average correlation across the whole dataset and $\mathrm{Corr}(Z_{l,i}, Z_{l,j})$ denotes the Pearson correlation of $Z_{:,i}, Z_{:,j}$ in the l-th graph. Obtaining $\hat{Z}_{:,i}$ and $\hat{Z}_{:,j}$ requires loading the full dataset, which is practically impossible for huge datasets because of the high computational cost and enormous storage usage; we therefore propose a scalable method with momentum update below. It makes sense to cluster variables with lower dissimilarity into the same cluster, since they are more likely to retain a stable joint distribution over many graphs. We therefore assign each variable of the graph representation Z to the cluster with the closest mean, as determined by the least squared Euclidean distance. Given K clusters of variables, where the j-th cluster is $S_j$ with cluster center $\mu_j$, we learn $\mu$ and S by minimizing:

$\mu, S = \arg\min_{\mu, S} \sum_{j=1}^{K} \sum_{Z_{:,i} \in S_j} \mathrm{Dis}(Z_{:,i}, \mu_j).$  (4)

Combining this variable clustering of the graph representation, we can eliminate the correlations between the variables of distinct clusters rather than between any two variables, reformulating Eq. 2 as:

$W^* = \arg\min_W \sum_{1 \le i < j \le d} I(i, j) \, \| \hat{\Sigma}^{W}_{Z_{:,i}, Z_{:,j}} \|_F^2,$  (5)

where the indicator variable I(i, j) returns 1 if $Z_{:,i}$ and $Z_{:,j}$ belong to different clusters and 0 if they belong to the same cluster, so that only between-cluster dependencies are penalized. To obtain the optimal graph weights W, graph neural network $\mathrm{GNN}(\theta)$, and classifier R, we solve:

$\theta^*, R^* = \arg\min_{\theta, R} \sum_{n=1}^{N} w_n \, \ell(R \circ \mathrm{GNN}(G_n, \theta), Y_n),$  (6)

$W^* = \arg\min_W \sum_{1 \le i < j \le d} I(i, j) \, \| \hat{\Sigma}^{W}_{Z_{:,i}, Z_{:,j}} \|_F^2,$  (7)

where $\ell$ is the loss function.
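The cluster-then-decorrelate step of Eqs. (3)-(5) can be sketched as follows. For illustration only, we assume each variable is described by its vector of Dis values to all other variables and clustered with plain k-means; the returned mask plays the role of the indicator I(i, j) and multiplies the pairwise penalty of Eq. (5).

```python
import torch

def cluster_mask(dis_matrix, k=4, iters=10):
    """Group representation variables by the stability of their pairwise
    correlations (Eqs. 3-4) with a simple k-means, then build the
    indicator I(i, j): 1 iff variables i, j fall in different clusters."""
    d = dis_matrix.size(0)
    centers = dis_matrix[torch.randperm(d)[:k]]        # (k, d) init
    for _ in range(iters):
        assign = torch.cdist(dis_matrix, centers).argmin(1)
        for c in range(k):
            if (assign == c).any():                    # avoid empty cluster
                centers[c] = dis_matrix[assign == c].mean(0)
    return (assign[:, None] != assign[None, :]).float()  # I(i, j)
```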
The statistical dependence between different clusters, rather than between all variables, can thus be eliminated by jointly optimizing the graph neural network $\mathrm{GNN}(\theta)$, the classifier R, and the graph weights W. However, such non-linear decorrelation methods on graph data suffer from over-fitting due to the additional hyperparameters and are hard to converge. To overcome this over-fitting problem, we adopt a bi-level optimization method for the framework that allows the simultaneous learning of the optimal GNN parameters and weights.

Bi-level Training Algorithm. We introduce our L2R-GNN framework to learn effective graph representations. However, as suggested by previous works (Ren et al. 2018; Xiao et al. 2021), sample reweighting algorithms suffer from over-fitting due to their additional hyperparameters and are hard to converge; for our L2R-GNN framework, the introduced decorrelation method for jointly learning sample weights also increases the risk of over-fitting, as shown in our experiments. Inspired by gradient-based meta-learning (learning to learn), we use bi-level optimization to solve the over-fitting issue. The objective can be formulated as the following bi-level optimization problem:

$\min_W \; L_{val}(\theta^*(W), W) = \sum_{1 \le i < j \le d} I(i, j) \, \| \hat{\Sigma}^{W}_{\mathrm{GNN}(G_{val}, \theta^*(W))} \|_F^2$
$\text{s.t.} \;\; \theta^*(W) = \arg\min_{\theta} L_{train}(\theta, W) = W \ell(R \circ \mathrm{GNN}(G_{train}, \theta), Y_{train}),$  (8)

This bi-level update optimizes the graph weights based on their validation performance to avoid over-fitting, where $L_{train}(\theta, W)$ and $L_{val}(\theta^*(W), W)$ are the lower-level and higher-level objectives on the training and validation sets, respectively. Since there is no closed-form expression for $\theta^*(W)$, it is hard to directly optimize the higher-level objective in Eq. (8); we therefore provide an alternating approximation approach.

Updating θ in the outer loop. Different from previous works (Fan et al. 2021; Li et al. 2021), we do not solve the lower-level problem to convergence for each outer loop. At the i-th iteration, we fix W and only perform gradient steps on the parameter θ with learning rate $\eta_\theta$:

$\theta^{(i)} = \theta^{(i-1)} - \eta_\theta \nabla_\theta L_{train}(\theta^{(i-1)}, W^{(i-1)}),$  (9)

Updating W in the inner loop. After obtaining $\theta^{(i)}$, which is an estimate of $\theta^*(W)$, we compute the higher-level objective in the inner loop:

$W^{(i)} = W^{(i-1)} - \eta_W \nabla_W L_{val}(\theta^{(i)}, W^{(i-1)}).$  (10)

The gradient with respect to W exists because W enters $\theta^{(i)}$ through Eq. (9), and $\nabla_W L_{val}(\theta^{(i)}, W^{(i-1)})$ can be represented by:

$\nabla_W L_{val}(\theta^{(i)}, W^{(i-1)}) = \nabla_W L_{val}(\bar{\theta}^{(i)}, W^{(i-1)}) - \frac{\eta_\theta}{\epsilon} \left( \nabla_W L_{train}(\theta^{(i-1)} + \epsilon \nabla_\theta L_{val}(\theta^{(i)}, \bar{W}^{(i-1)}), W^{(i-1)}) - \nabla_W L_{train}(\theta^{(i-1)}, W^{(i-1)}) \right),$  (11)

where $\bar{\theta}^{(i)}$ and $\bar{W}^{(i-1)}$ denote stopping the gradient. Setting $\eta_\theta$ to 0 in Eq. (11) yields the first-order approximation:

$\nabla_W L_{val}(\theta^{(i)}, W^{(i-1)}) = \nabla_W L_{val}(\bar{\theta}^{(i)}, W^{(i-1)}).$  (12)

By alternating the update procedures in Eqs. (9) and (10), we derive the whole model algorithm from the above gradient derivations. Moreover, we investigate the impact of bi-level optimization as well as the first- and second-order approximations in experiments; the results show that the best performance is obtained with the first-order approximation.

Figure 2: Results of GCN and GIN backbones under different correlation degree settings. Compared with the plain GCN and GIN methods, our L2R-GNN variants (obtained by applying the L2R-GNN framework to the GCN and GIN backbones) improve graph classification accuracy across all spurious correlation degree settings.
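A minimal sketch of one alternating round of Eqs. (9)-(12) with the first-order approximation: θ descends the reweighted training loss, then the graph weights descend the validation decorrelation objective with θ treated as constant. Optimizer wiring and all function names are our own assumptions, not the authors' code.

```python
import torch

def bilevel_step(theta_opt, w_opt, model, weights, train_batch, val_batch,
                 train_loss_fn, val_loss_fn):
    """One alternating update of Eqs. (9)-(10) under Eq. (12)."""
    # Eq. (9): theta step on the reweighted training loss, W held fixed
    theta_opt.zero_grad()
    train_loss_fn(model, weights.detach(), train_batch).backward()
    theta_opt.step()

    # Eq. (10) + Eq. (12): weight step on the validation objective;
    # only w_opt steps, so any model gradients from this pass are
    # discarded by the next zero_grad (first-order: theta's dependence
    # on W is ignored)
    w_opt.zero_grad()
    val_loss_fn(model, weights, val_batch).backward()
    w_opt.step()
```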
Momentum Graph Weight Estimator. For each graph, a specific weight should be learned, as shown in Eq. 8. However, simultaneously loading the entire dataset for optimization is impractical due to the high computational cost and excessive storage consumption, especially for large datasets. We employ K weight queues to balance optimization performance and weight consistency: a graph representation queue $Z^{(q)} = [Z^{(q_1)}, \cdots, Z^{(q_K)}]$ and the corresponding weight queue $W^{(q)} = [W^{(q_1)}, \cdots, W^{(q_K)}]$. During training, they act as a memory bank of earlier mini-batches. For each mini-batch of input graphs, the graph representations and weights used for optimization are constructed as:

$\hat{Z} = \mathrm{Concat}(Z^{(q_1)}, \cdots, Z^{(q_K)}, Z^{(l)}), \quad \hat{W} = \mathrm{Concat}(W^{(q_1)}, \cdots, W^{(q_K)}, W^{(l)}).$

Using the graph representation queues, we reformulate Eq. 3 as:

$\mathrm{Dis}(Z_{:,i}, Z_{:,j}) = \sqrt{ \frac{1}{N-1} \sum_{l=1}^{N} \left( \mathrm{Corr}(Z^{(l)}_{:,i}, Z^{(l)}_{:,j}) - \mathrm{AveCorr}(\hat{Z}^{(q)}_{:,i}, \hat{Z}^{(q)}_{:,j}) \right)^2 }$  (13)

where $\mathrm{Corr}(Z^{(l)}_{:,i}, Z^{(l)}_{:,j})$ is the Pearson correlation of $Z_{:,i}, Z_{:,j}$ in the mini-batch, and $\mathrm{AveCorr}(\hat{Z}^{(q)}_{:,i}, \hat{Z}^{(q)}_{:,j})$ is their average correlation across all graph representation queues. L2R-GNN reduces the computational cost via the weight queues: if the batch size is B, then $\hat{Z}$ is a matrix of size $((K+1)B) \times m_Z$ and $\hat{W}$ is a vector of size $(K+1)B$, so the cost is lowered from O(N) to O(KB). To dynamically update the representations $Z^{(q)}$ and weights $W^{(q)}$ in the queues, we use momentum coefficients $\alpha_i \in [0, 1)$:

$Z^{(q_i)*} = \alpha_i Z^{(q_i)} + (1 - \alpha_i) Z^{(l)}, \quad W^{(q_i)*} = \alpha_i W^{(q_i)} + (1 - \alpha_i) W^{(l)}.$

We replace all $Z^{(q_i)}, W^{(q_i)}$ with $Z^{(q_i)*}, W^{(q_i)*}$ for the next batch.
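The momentum weight queues can be sketched as a small memory bank. The slot layout, the single shared momentum coefficient, and the round-robin pointer are illustrative simplifications of the per-queue coefficients $\alpha_i$ described above.

```python
import torch

class MomentumQueue:
    """Memory bank behind Eq. (13): K slots of past mini-batch
    representations/weights, refreshed by a momentum moving average."""
    def __init__(self, k, batch, dim, alpha=0.9):
        self.z = torch.zeros(k, batch, dim)   # Z^(q_1..q_K)
        self.w = torch.ones(k, batch)         # W^(q_1..q_K)
        self.alpha, self.ptr = alpha, 0

    def concat_with(self, z_batch, w_batch):
        """Build Z-hat, W-hat for the current optimization step."""
        z_all = torch.cat([self.z.flatten(0, 1), z_batch], dim=0)
        w_all = torch.cat([self.w.flatten(), w_batch], dim=0)
        return z_all, w_all                   # ((K+1)B, dim), ((K+1)B,)

    def update(self, z_batch, w_batch):
        """Momentum refresh of one slot: q* = a*q + (1-a)*current."""
        a, i = self.alpha, self.ptr
        self.z[i] = a * self.z[i] + (1 - a) * z_batch.detach()
        self.w[i] = a * self.w[i] + (1 - a) * w_batch.detach()
        self.ptr = (i + 1) % self.z.size(0)
```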
Figure 3: The training and validation loss curves on D&D.

Experiments
In this section, we describe the experimental setup used to evaluate the effectiveness of our proposed method. Experimental results demonstrate the effectiveness of our framework across different GNN backbones and datasets. We specifically aim to answer the following questions: (RQ 1) How effective is the proposed L2R-GNN framework for the graph classification task? (RQ 2) Can the proposed L2R-GNN alleviate different distribution shifts? (RQ 3) Can the proposed bi-level training algorithm alleviate the over-fitting issue? (RQ 4) Does the proposed reweighting mechanism work as designed and give useful insights? (RQ 5) What are the effects of our proposed components?

Baselines. We compare our L2R-GNN with several representative state-of-the-art methods: GCN (Kipf and Welling 2016), GIN (Xu et al. 2018a), SGC (Wu et al. 2019), JKNet (Xu et al. 2018b), FactorGCN (Yang et al. 2020), PNA (Corso et al. 2020), TopKPool (Gao and Ji 2019), SAGPool (Lee, Lee, and Kang 2019), OOD-GNN (Li et al. 2021), and StableGNN (Fan et al. 2021).

Datasets. We evaluate our method and the baselines on synthetic and real-world datasets with complex and realistic graph distribution shifts.

Synthetic Datasets. To validate L2R-GNN's effectiveness under various distribution shifts, we generate synthetic datasets that allow controlled degrees of bias. Following GNN explanation works (Ying et al. 2019; Lin, Lan, and Li 2021), we focus on graph classification tasks with distribution shifts from training to testing datasets. We create a base subgraph for each graph: each positive graph contains a "wheel"-structured network motif, and each negative graph contains a motif chosen from four candidates ("star", "circle", "grid", and "diamond"). The "wheel" motif is the causal structure that determines the label.

Real-world Datasets. (1) Molecule and social datasets. Similar to previous works (Knyazev, Taylor, and Amer 2019), we consider three graph classification benchmarks: COLLAB, PROTEINS, and D&D. These datasets are split by graph size: methods are trained on smaller graphs and tested on unseen larger graphs. Specifically, COLLAB is a social dataset derived from 3 public collaboration datasets (High Energy Physics, Condensed Matter Physics, and Astro Physics); we train on graphs with 32 to 35 nodes and test on graphs with 32 to 492 nodes. PROTEINS is a protein dataset; we train on graphs with 4 to 25 nodes and test on graphs with 6 to 620 nodes. D&D is also a protein dataset; we train on graphs with 30 to 300 nodes and test on graphs with 30 to 5,748 nodes.

Table 1: Performance on six Open Graph Benchmark (OGB) graph datasets.

Method      TOX21      BACE       BBBP       CLINTOX    HIV        ESOL
Metric      ROC-AUC (↑)                                            RMSE (↓)
GIN         70.4±0.9   73.8±3.1   67.9±1.4   87.4±2.8   75.8±1.2   1.14±0.08
GCN         72.7±0.6   77.6±1.5   66.8±1.2   88.6±2.2   76.2±1.2   1.12±0.04
SGC         71.8±1.2   70.7±1.4   62.7±1.8   76.4±2.0   67.5±1.3   1.59±0.06
PNA         69.1±0.8   74.9±1.9   64.5±1.3   80.6±2.3   76.2±1.8   0.98±0.07
JKNet       69.8±1.2   77.5±1.2   63.4±1.1   82.4±2.2   73.7±1.2   1.29±0.09
SAGPool     72.6±2.5   75.8±1.2   68.4±2.0   86.2±1.3   76.4±1.2   1.13±0.08
TopKPool    72.1±1.5   76.5±2.3   67.8±1.8   86.2±1.2   75.1±1.2   1.10±0.06
FactorGCN   56.2±2.8   68.9±1.5   55.1±1.6   65.7±2.6   57.5±1.9   3.12±0.17
OOD-GNN     76.2±1.3   78.3±1.8   68.4±2.8   89.1±1.6   78.2±1.2   0.94±0.06
StableGNN   74.8±1.9   79.2±2.4   68.1±2.6   88.4±1.9   76.5±1.8   0.96±0.04
L2R-GNN     78.6±1.2   81.9±1.0   70.6±1.3   91.9±1.5   79.7±1.0   0.84±0.07

Figure 4: Case study of GCN and L2R-GNN on the biased synthetic dataset. Shaded areas are the important subgraphs identified by GNNExplainer.

Table 2: Graph classification accuracy (%) under graph size distribution shifts, where the training and testing graphs are split by graph size. All methods are trained on small graphs and tested on larger graphs. Best results are in bold.

Method      COLLAB     PROTEINS   D&D
GIN         56.3±3.5   74.4±2.5   68.9±4.1
GCN         64.0±2.8   74.8±2.7   71.7±3.3
SGC         54.9±4.1   72.6±2.1   64.2±3.8
PNA         58.7±4.3   71.9±2.9   70.5±2.4
JKNet       57.9±3.8   74.2±2.6   69.5±3.6
SAGPool     66.8±1.3   75.9±0.6   77.1±1.6
TopKPool    54.7±1.4   65.7±2.8   68.2±3.2
FactorGCN   52.3±1.8   62.4±4.3   55.2±2.4
OOD-GNN     66.9±1.5   77.1±1.1   79.1±1.3
StableGNN   67.3±1.4   76.5±0.9   78.7±1.6
L2R-GNN     68.2±1.4   78.9±0.7   80.8±1.2

(2) Open Graph Benchmark (OGB) (Hu et al. 2020). We consider the OGBG-MOL* datasets TOX21, BACE, BBBP, CLINTOX, HIV, and ESOL as six graph property prediction datasets from OGB with distribution shifts; the graph classification task is to predict target molecule properties. We use the scaffold splitting technique to separate graphs based on their two-dimensional structural frameworks.
This technique divides structurally diverse molecules into different subsets, creating a more realistic and challenging out-of-distribution generalization scenario.

RQ1. Performance Comparison. Results on the six OGB datasets are reported in Table 1. These datasets use scaffold splitting (Wu et al. 2018), which divides molecules by their 2D structural frameworks and thereby creates distribution shifts between training and testing graphs. L2R-GNN outperforms the other GNN models in all cases, effectively alleviating distribution shifts. In particular, L2R-GNN surpasses StableGNN and OOD-GNN, which shows the effectiveness of the graph decorrelation method and the bi-level optimization. L2R-GNN also performs well across various tasks and dataset scales, indicating its generality, and it excels at out-of-distribution generalization, especially for large-scale real-world graphs. We further consider the size generalization problem on the real-world molecule and social datasets (COLLAB, PROTEINS, and D&D), where training and testing graphs are split by size; the results are reported in Table 2. L2R-GNN outperforms all baselines, demonstrating the best out-of-distribution generalization under size distribution shifts.

Figure 5: The distribution of the learned graph weights on (a) the unbiased and biased synthetic datasets and (b) two real-world datasets (OGBG-MOLTOX21 and D&D).

Figure 6: Ablation study of our L2R-GNN with (a) GIN and (b) GCN backbones.
In Figure 3, we observe that the training baseline gets stuck in over-fitting, attaining a low training loss but a high validation loss. For the first-order and second-order variants, the gap between the training and validation losses is significantly smaller. This shows that the first-order approximation is adequate to prevent over-fitting and that the bi-level optimization increases generalization ability.

RQ4. Reweighting Mechanism. We study the reweighting mechanism's contribution to robust graph representation learning via experiments on synthetic and real-world datasets. In the synthetic datasets, a "star" motif is added to µ × 100% of the positive graphs, and the remaining positive and negative graphs receive a non-causal motif chosen from the 4 candidates. We collect the learned graph weights on an unbiased (µ = 0.25) and a biased (µ = 0.8) dataset, as shown in Figure 5(a). The median weight on the biased data is lower than on the unbiased data, indicating that L2R-GNN can identify noisy graphs carrying the spurious correlation. The weight variance on the biased data is higher than on the unbiased data, further showing that L2R-GNN reliably detects such noisy graphs. On the real-world datasets D&D and OGBG-MOLTOX21, Figure 5(b) displays the learned graph weight distributions, which exhibit non-trivial weights and vary across datasets.

Case study. Using GNNExplainer (Ying et al. 2019), we visualize the subgraphs that are important for the GNN's prediction as shadow areas and compare GCN with L2R-GNN in Figure 4. Three cases demonstrate the effectiveness of L2R-GNN. Case 1: GNNExplainer shows that GCN assigns higher weights to the "star" motif, while L2R-GNN focuses on the "wheel" motif. GCN's prediction, although accurate, may rely on the spurious correlation and is therefore unstable, which is undesirable. Case 2: GCN ignores the "wheel" motif due to the spurious correlation, leading to an incorrect prediction, whereas L2R-GNN focuses on the "wheel" motif, which determines the true label. Case 3: the spurious correlation causes GCN to focus on the "star" motif and make an incorrect prediction; because L2R-GNN decorrelates the subgraphs, the "circle" motif contributes more to its prediction.

RQ5. Component Effects. We conduct an ablation study and a hyper-parameter sensitivity analysis to understand the effect of each component on performance. We compare L2R-GNN with two variants: L2R-GNNw/oBi, which removes the bi-level optimization and optimizes W and θ simultaneously on the training data without a validation split; and L2R-GNNw/oGD, which removes the graph decorrelation module. The results in Figure 6 show that L2R-GNN achieves the best performance, indicating that each component contributes to the effectiveness and robustness of the framework; the two components complement each other and both contribute to the performance gain.

Conclusions
GNNs achieve state-of-the-art performance in tasks such as molecular graph prediction, scene graph classification, and social network classification. We propose Learning to Reweight for Generalizable Graph Neural Networks (L2R-GNN) for the OOD generalization of GNNs. Our novel nonlinear graph decorrelation method improves OOD generalization and outperforms previous methods in preventing an over-reduced sample size. We also propose a bi-level optimization-based stochastic algorithm for the L2R-GNN framework, which enables the simultaneous learning of the optimal example weights and the GNN parameters while avoiding over-fitting. Empirical results on synthetic and real-world datasets demonstrate the effectiveness of L2R-GNN.

Acknowledgements
This work was supported in part by the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), National Natural Science Foundation of China (No.
62376243, U20A20387), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SNZJU-SIAS-0010), Project by Shanghai AI Laboratory (P22KS00111) and Program of Zhejiang Province Science and Technology (2022C01044).

References
Chen, S.; Chen, Z.; and Wang, D. 2021. Adaptive adversarial training for meta reinforcement learning. In 2021 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE.
Chen, Y.; Wen, Z.; Fan, G.; Chen, Z.; Wu, W.; Liu, D.; Li, Z.; Liu, B.; and Xiao, Y. 2023a. MAPO: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization. In Findings of the Association for Computational Linguistics: EMNLP 2023, 3279–3304.
Chen, Z.; Gai, S.; and Wang, D. 2019. Deep tensor factorization for multi-criteria recommender systems. In 2019 IEEE International Conference on Big Data (Big Data), 1046–1051. IEEE.
Chen, Z.; Ge, J.; Zhan, H.; Huang, S.; and Wang, D. 2021. Pareto Self-Supervised Training for Few-Shot Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13663–13672.
Chen, Z.; Gong, Y.; Yang, L.; Zhang, J.; Zhang, W.; He, S.; and Zhang, X. 2023b. Invariant Graph Neural Network for Out-of-Distribution Nodes. In Proceedings of the 2023 15th International Conference on Machine Learning and Computing, 192–196.
Chen, Z.; and Wang, D. 2021. Multi-Initialization Meta-Learning with Domain Adaptation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1390–1394. IEEE.
Chen, Z.; Wang, D.; and Yin, S. 2021. Improving cold-start recommendation via multi-prior meta-learning. In Advances in Information Retrieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part II 43, 249–256. Springer.
Chen, Z.; Xiao, T.; and Kuang, K. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 3012–3024. IEEE.
Chen, Z.; Xu, Z.; and Wang, D. 2021. Deep transfer tensor decomposition with orthogonal constraint for recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4010–4018.
Corso, G.; Cavalleri, L.; Beaini, D.; Liò, P.; and Veličković, P. 2020. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33: 13260–13271.
Fan, S.; Wang, X.; Shi, C.; Cui, P.; and Wang, B. 2021. Generalizing Graph Neural Networks on Out-Of-Distribution Graphs. arXiv preprint arXiv:2111.10657.
Gai, S.; Chen, Z.; and Wang, D. 2021. Multi-modal meta continual learning. In 2021 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE.
Gai, S.; Zhao, F.; Kang, Y.; Chen, Z.; Wang, D.; and Tang, A. 2019. Deep transfer collaborative filtering for recommender systems. In PRICAI 2019: Trends in Artificial Intelligence: 16th Pacific Rim International Conference on Artificial Intelligence, Cuvu, Yanuca Island, Fiji, August 26-30, 2019, Proceedings, Part III 16, 515–528. Springer.
Gao, H.; and Ji, S. 2019. Graph U-Nets. In International Conference on Machine Learning, 2083–2092. PMLR.
Hu, W.; Fey, M.; Zitnik, M.; Dong, Y.; Ren, H.; Liu, B.; Catasta, M.; and Leskovec, J. 2020. Open graph benchmark: Datasets for machine learning on graphs. In NeurIPS.
Jiang, Y.; Chen, Z.; Kuang, K.; Yuan, L.; Ye, X.; Wang, Z.; Wu, F.; and Wei, Y. 2022. The Role of Deconfounding in Meta-learning.
In International Conference on Machine Learning, 10161–10176. PMLR.
Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. In ICLR.
Knyazev, B.; Taylor, G. W.; and Amer, M. 2019. Understanding attention and generalization in graph neural networks. In NeurIPS.
Kuang, K.; Xiong, R.; Cui, P.; Athey, S.; and Li, B. 2020. Stable Prediction with Model Misspecification and Agnostic Distribution Shift. In AAAI.
Lee, J.; Lee, I.; and Kang, J. 2019. Self-attention graph pooling. In ICML, 3734–3743. PMLR.
Li, H.; Wang, X.; Zhang, Z.; and Zhu, W. 2021. OOD-GNN: Out-of-distribution generalized graph neural network. arXiv preprint arXiv:2112.03806.
Lin, W.; Lan, H.; and Li, B. 2021. Generative Causal Explanations for Graph Neural Networks. In ICML.
Liu, Z.; Zhang, W.; Fang, Y.; Zhang, X.; and Hoi, S. C. 2020. Towards locality-aware meta-learning of tail node embeddings on networks. In CIKM.
Llorente, F.; Martino, L.; Read, J.; and Delgado, D. 2022. Optimality in Noisy Importance Sampling. Signal Processing, 108455.
Lv, Z.; Chen, Z.; Zhang, S.; Kuang, K.; Zhang, W.; Li, M.; Ooi, B. C.; and Wu, F. 2023a. Ideal: Toward high-efficiency device-cloud collaborative and dynamic recommendation system. arXiv preprint arXiv:2302.07335.
Lv, Z.; Zhang, W.; Zhang, S.; Kuang, K.; Wang, F.; Wang, Y.; Chen, Z.; Shen, T.; Yang, H.; Ooi, B. C.; et al. 2023b. DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization. In Proceedings of the ACM Web Conference 2023, 3077–3085.
Maclaurin, D.; Duvenaud, D.; and Adams, R. 2015. Gradient-based hyperparameter optimization through reversible learning. In ICML.
Martino, L.; Elvira, V.; and Louzada, F. 2017. Effective sample size for importance sampling based on discrepancy measures. Signal Processing, 131: 386–401.
Qu, L.; Zhu, H.; Zheng, R.; Shi, Y.; and Yin, H. 2021. ImGAGN: Imbalanced network embedding via generative adversarial graph networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 1390–1398.
Ren, M.; Zeng, W.; Yang, B.; and Urtasun, R. 2018. Learning to reweight examples for robust deep learning. In ICML.
Shen, Z.; Cui, P.; Liu, J.; Zhang, T.; Li, B.; and Chen, Z. 2020. Stable Learning via Differentiated Variable Decorrelation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2185–2193.
Tang, X.; Yao, H.; Sun, Y.; Wang, Y.; Tang, J.; Aggarwal, C.; Mitra, P.; and Wang, S. 2020. Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 1435–1444.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2017. Graph attention networks. In ICLR.
Wang, S.; Yang, J.; Chen, Z.; Yuan, H.; Geng, J.; and Hai, Z. 2020. Global and local tensor factorization for multi-criteria recommender system. Patterns, 1(2).
Wu, F.; Souza, A.; Zhang, T.; Fifty, C.; Yu, T.; and Weinberger, K. 2019. Simplifying Graph Convolutional Networks. In ICML, 6861–6871. PMLR.
Wu, Q.; Zhang, H.; Yan, J.; and Wipf, D. 2022. Handling Distribution Shifts on Graphs: An Invariance Perspective. In ICLR.
Wu, Z.; Ramsundar, B.; Feinberg, E. N.; Gomes, J.; Geniesse, C.; Pappu, A. S.; Leswing, K.; and Pande, V. 2018. MoleculeNet: a benchmark for molecular machine learning.
Chemical science, 9(2): 513–530.
Xiao, T.; Chen, Z.; Guo, Z.; Zhuang, Z.; and Wang, S. 2022. Decoupled self-supervised learning for graphs. Advances in Neural Information Processing Systems, 35: 620–634.
Xiao, T.; Chen, Z.; Wang, D.; and Wang, S. 2021. Learning how to propagate messages in graph neural networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 1894–1903.
Xiao, T.; Chen, Z.; and Wang, S. 2022. Representation matters when learning from biased feedback in recommendation. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2220–2229.
Xiao, T.; Chen, Z.; and Wang, S. 2023. Reconsidering Learning Objectives in Unbiased Recommendation: A Distribution Shift Perspective. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2764–2775.
Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2018a. How Powerful are Graph Neural Networks? In ICLR.
Xu, K.; Li, C.; Tian, Y.; Sonobe, T.; Kawarabayashi, K.-i.; and Jegelka, S. 2018b. Representation learning on graphs with jumping knowledge networks. In ICML, 5453–5462. PMLR.
Yang, Y.; Feng, Z.; Song, M.; and Wang, X. 2020. Factorizable graph convolutional networks. In NeurIPS, volume 33, 20286–20296.
Ying, R.; Bourgeois, D.; You, J.; Zitnik, M.; and Leskovec, J. 2019. GNNExplainer: Generating explanations for graph neural networks. In NeurIPS.
Zhang, M.; Huang, S.; Li, W.; and Wang, D. 2022. Tree structure-aware few-shot image classification via hierarchical aggregation. In European Conference on Computer Vision, 453–470. Springer.
Zhang, M.; Yuan, J.; He, Y.; Li, W.; Chen, Z.; and Kuang, K. 2023. MAP: Towards Balanced Generalization of IID and OOD through Model-Agnostic Adapters. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11921–11931.
2024
925
18,768
Effective Comparative Prototype Hashing for Unsupervised Domain Adaptation
Hui Cui1,2, Lihai Zhao3, Fengling Li4, Lei Zhu5*, Xiaohui Han1,2,6*, Jingjing Li7
1Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center, Qilu University of Technology (Shandong Academy of Sciences)
2Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science
3University of Science and Technology Beijing
4University of Technology Sydney
5Tongji University
6Quan Cheng Laboratory
7University of Electronic Science and Technology of China
{cuihui2018, zhaolihai2020, fenglingli2023, leizhu0608, xiaohhan}@gmail.com, [email protected]
*Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Unsupervised domain adaptive hashing is a highly promising research direction within the field of retrieval. It aims to transfer valuable insights from the source domain to the target domain while maintaining high storage and retrieval efficiency. Despite its potential, this field remains relatively unexplored. Previous methods usually lead to unsatisfactory retrieval performance, as they frequently directly apply slightly modified domain adaptation algorithms to the hash learning framework, or pursue domain alignment within the Hamming space, which is characterized by limited semantic information. In this paper, we propose a simple yet effective approach named Comparative Prototype Hashing (CPH) for unsupervised domain adaptive image retrieval. We establish a domain-shared unit hypersphere space through prototype contrastive learning and then obtain the Hamming hypersphere space via mapping from the shared hypersphere. This strategy achieves a cohesive synergy between learning uniformly distributed and category conflict-averse feature representations, eliminating domain discrepancies, and facilitating hash code learning. Moreover, by leveraging dual-domain information to supervise the entire hashing model training process, we can generate hash codes that retain inter-sample similarity relationships within both domains. Experimental results validate that our CPH significantly outperforms the state-of-the-art counterparts across multiple cross-domain and single-domain retrieval tasks. Notably, on the Office-Home and Office-31 datasets, CPH achieves average performance improvements of 19.29% and 13.85% on cross-domain retrieval tasks compared to the second-best results, respectively. The source codes of our method are available at: https://github.com/christinecui/CPH.

Introduction
The efficacy of the hashing technique in the field of retrieval has been firmly established through extensive research (Wang et al. 2016, 2018; Zhu et al. 2024). While hashing offers inherent advantages in terms of data storage and retrieval efficiency, it frequently encounters challenges when aiming to achieve satisfactory retrieval performance in real-world unsupervised scenarios. This is primarily due to the absence of semantic information that guides the hashing model in learning similarity-preserving hash codes. Transferring valuable knowledge from relevant and labeled data (referred to as the source domain) to enhance the retrieval performance of current unsupervised hashing models stands as a sensible and promising strategy.
However, the formulation mentioned above encounters a critical issue: the distribution of the source domain and the current dataset (referred to as the target domain) may diverge. Directly applying the model trained on the source domain to the target domain might lead to a significant performance degradation. This concern is confirmed by the results from the well-known domain adaptation method DANN (Ganin and Lempitsky 2015). The model trained on the MNIST dataset achieves a remarkable classification accuracy of 95.96% on MNIST itself. However, when this identical model is applied to the MNIST-M dataset, its classification accuracy plummets to 52.25%. In response to this challenge, the concept of domain adaptation emerged as a solution (Tzeng et al. 2015; Sun, Feng, and Saenko 2016; Zhang, Li, and Ogunbona 2017). Its objective is to bridge the disparities in domain distributions and facilitate the transfer of valuable knowledge across distinct domains. As a consequence, this concept contributes to improving performance within the target domain. Currently, only a limited number of unsupervised domain adaptive hashing methods have been proposed (Zhou et al. 2018; Liu and Zhang 2019; Huang, Zhang, and Gao 2022). Regrettably, none of them have managed to achieve a satisfactory level of retrieval performance. This is demonstrated by the results obtained from the pioneering approach PWCF (Huang et al. 2020), where the mAP values across six cross-domain retrieval tasks on the Office-Home dataset barely reach around 30%. Such results raise questions regarding whether PWCF effectively transfers knowledge from the source domain to the target domain. Even the most recent method, DANCE (Wang et al. 2023b), does not surpass an average retrieval performance of 50% on Office-Home. Thus, the primary objective of this paper is to enhance the retrieval performance of unsupervised domain adaptive hashing, encompassing both cross-domain and single-domain retrieval tasks. After a thorough analysis of the existing unsupervised domain adaptive hashing methods, we have discovered that early methods overly relied on general domain adaptation paradigms, leading to a lack of effective integration between domain adaptation and hash learning. For example, DAH (Venkateswara et al. 2017a) and the well-regarded domain adaptation method DAN (Long et al. 2015) exhibit a significant degree of similarity. The only distinctions are that DAH substitutes the final layer of DAN with a hash layer and employs a commonly used negative log-likelihood function in hash learning to train its model. DAH-IGAN (He et al. 2019) incorporates the GAN concept from DANN (Ganin and Lempitsky 2015) into its hashing framework, training the model by simultaneously maximizing the domain discriminator loss and minimizing the label predictor loss in a manner similar to DANN. In the past two years, methods have shifted their focus towards generating reliable pseudo-labels for the target domain, which in turn guide the learning process of hashing models, such as DHLing (Xia et al. 2021), PEACE (Wang et al. 2023a), and DANCE. However, these approaches share a common characteristic: they perform domain alignment in the Hamming space. Since the dimension of the Hamming space is usually much smaller than that of the feature space, conducting domain alignment in the Hamming space may lead to the loss of the rich semantic information embedded in the original samples.
As a consequence, this has the potential to impact the ultimate retrieval performance of domain adaptive hashing. Different from the methods mentioned above, we propose a simple yet effective method, dubbed Comparative Prototype Hashing (CPH). The core idea of CPH is to establish a domain-shared unit hypersphere space corresponding to the Hamming space of hash codes. Specifically, CPH maximizes the distance between prototypes belonging to different categories, thus encouraging a uniform distribution of features on the hypersphere of the feature space and avoiding category conflict. Additionally, by aligning prototypes from both the source and target domains, CPH enhances the compactness of feature representations for samples within the same category across both domains. This prototype-based comparative learning not only promotes domain alignment but also advances the learning of discriminative feature representations, potentially transforming the relationships between samples from the source and target domains into relationships between prototypes. Besides, CPH incorporates the relation preservation and quantization from both domains into the hash learning framework. Empowered by these strategies, CPH effectively learns discriminative hash codes, resulting in a substantial enhancement in retrieval performance across cross-domain and single-domain retrieval tasks. The workflow of our approach is illustrated in Figure 1, and the primary contributions of our method can be summarized as follows:
• We seamlessly integrate domain adaptation and hash learning within a straightforward framework utilizing a domain-shared unit hypersphere space, resulting in a significant enhancement of retrieval performance in the context of unsupervised domain adaptive hashing.
• Technically, we perform prototype comparative learning to obtain the domain-shared space, and map this space into a Hamming space under the constraints of semantic relations and quantization in both the source and target domains.
• Extensive experimental results demonstrate that the proposed CPH method outperforms state-of-the-art unsupervised domain adaptive hashing methods in terms of retrieval accuracy, both in single-domain and cross-domain scenarios. Particularly, when compared to the second-best results on the Office-Home and Office-31 databases, the average improvements in cross-domain retrieval are 19.29% and 13.85%.

Related Work
Learning to Hash
Hashing aims to learn hash functions that encode high-dimensional data into binary hash codes while maintaining semantic relationships. It offers advantageous attributes in terms of data storage efficiency and retrieval speed, which has consequently garnered significant attention (Zhu et al. 2020; Lu et al. 2019; Cui et al. 2020, 2021). Hashing can be broadly categorized based on its reliance on semantic labels: unsupervised hashing and supervised hashing. Supervised hashing methods utilize explicit labels as supervised information to learn discriminative hash codes. Examples include SDH (Shen et al. 2015), DSRH (Zhao et al. 2015), NINH (Lai et al. 2015) and DPSH (Li, Wang, and Kang 2016). However, despite their capability to achieve commendable performance, the substantial cost associated with annotating labels poses a significant obstacle to their scalability. In contrast, unsupervised hashing methods exhibit superior scalability compared to their supervised counterparts since they are independent of semantic labels.
As a result, unsupervised hashing stands as a suitable solution for practical retrieval systems. Representative approaches in this category include LSH (Gionis, Indyk, and Motwani 1999), ITQ (Gong et al. 2013), SGH (Jiang and Li 2015), and GraphBit (Wang et al. 2023c). Nonetheless, the absence of explicit semantic label supervision in unsupervised scenarios can potentially limit the discriminative capability of hash codes, thereby undermining overall retrieval performance. This situation underscores the pressing need for cost-effective alternatives to manual annotation, which can serve as viable sources of semantic supervision in hashing.

Figure 1: The workflow of the proposed CPH method.

Unsupervised Domain Adaptation
Unsupervised domain adaptation aims to transfer knowledge acquired from a well-labeled source domain to a different yet related target domain where labeled data is unavailable (Pan and Yang 2010; Wang and Deng 2018; Xu et al. 2020). In this field, mainstream methods can be primarily categorized into two types: metric learning-based methods and adversarial learning-based methods. The former mitigates distribution gaps by minimizing statistical criteria; representative approaches include DAN (Long et al. 2015), RTN (Long et al. 2016), JAN (Long et al. 2017), and CGDM (Du et al. 2021). The latter employs a domain classifier to distinguish whether a sample originates from the source or the target domain, while concurrently encouraging the feature generation network to extract domain-invariant representations through a two-player mini-max game. Noteworthy instances in this category include DANN (Ganin and Lempitsky 2015), ADDA (Tzeng et al. 2017), and AAA (Li et al. 2022). The domain adaptation field is continuously and vigorously evolving, potentially giving rise to a plethora of novel methodologies and concepts in the foreseeable future, along with fostering more extensive combinations with downstream tasks.

Domain Adaptive Hashing
Over the last decade, a handful of domain adaptive hashing methods have emerged. Initial approaches are grounded in machine learning principles, such as LapITQ+ (Zhou et al. 2018), GTH (Liu and Zhang 2019), PWCF (Huang et al. 2020), and DAPH (Huang, Zhang, and Gao 2022). These methods commonly utilize distribution metrics to generate domain-invariant features. With the rise of deep learning, recent approaches have shifted towards deep learning-based domain adaptive hashing (Venkateswara et al. 2017a; Xia et al. 2021; Wang et al. 2023a,b). The strategies for domain adaptation have progressively expanded to encompass adversarial learning, construction of domain classifiers, parameter sharing across networks, alignment of pseudo-labels, and more. However, none of these methods have achieved satisfactory performance. They heavily relied on general domain adaptation paradigms, struggling to effectively integrate domain adaptation and hash learning. Enhancing the retrieval performance of domain adaptive hashing remains a compelling pursuit.
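For context on the adversarial line of work above (DANN, ADDA), the gradient reversal trick that drives such two-player training can be written in a few lines. This is a generic PyTorch sketch, not code from any of the cited methods:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient flows back to the feature extractor; no grad for lamb.
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# usage sketch: domain_logits = domain_classifier(grad_reverse(features))
```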
Methodology
Problem Definition
Assume that we have a labeled source domain $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ with $n_s$ samples and an unlabeled target domain $\mathcal{D}_t = \{x_i^t\}_{i=1}^{n_t}$ with $n_t$ samples. $\mathcal{D}_s$ and $\mathcal{D}_t$ exhibit distinct data distributions while sharing a common label space $\mathcal{Y} = \{1, 2, \cdots, C\}$. Our goal is to transfer knowledge from the source domain $\mathcal{D}_s$ to assist the target domain $\mathcal{D}_t$ in learning a hashing model that generates similarity-preserving hash codes $B_t = [b_1^t, \cdots, b_{n_t}^t] \in \{-1, 1\}^{n_t \times r}$, where $r$ is the length of the hash code. For single-domain retrieval, we investigate the retrieval system in which both queries and databases originate from the target domain $\mathcal{D}_t$. In the context of cross-domain retrieval, we seek samples from the source domain $\mathcal{D}_s$ that possess similar semantics to the given queries from the target domain $\mathcal{D}_t$.

Prototype Contrastive Learning
Previous studies have demonstrated noteworthy progress in hash learning via contrastive learning (Qiu et al. 2021; Luo et al. 2021; Wang et al. 2022, 2023b). However, many of these methods consider instance-level contrastive learning, wherein samples other than the augmented view of the current sample within the same batch are considered as negative samples. These negative samples may inevitably lead to the category collision issue. Additionally, they usually directly apply the contrastive loss to hash codes, lacking the ability to effectively incorporate the semantic content inherent in the samples. To address these concerns, we conduct category-level contrastive learning based on a domain-shared feature representation space. By treating prototypes from the same category in both source and target domains as positive pairs, and those from different categories as negative pairs, we facilitate the alignment of the source and target domains in the shared feature space. Furthermore, this approach enables the simultaneous learning of uniform, conflict-averse feature representations across both domains.

To achieve the aforementioned objectives, we first calculate the source prototype codes based on the source labels. The learning process for the prototype code of the $c$-th category ($c \in \mathcal{Y}$) can be described as follows:

$p_c^s = \frac{\sum_{i}^{n_s} f_i^s \mathbb{1}(y_i^s = c)}{\sum_{i}^{n_s} \mathbb{1}(y_i^s = c)}, \quad p_c^s \leftarrow \frac{p_c^s}{\|p_c^s\|_2}$, (1)

where $f_i^s$ represents the feature representation of the source domain sample in the shared feature space. This feature representation is obtained using a Multi-Layer Perceptron architecture $\mathrm{MLP}(\cdot; \theta_{mlp})$ as follows:

$f_i^s = \mathrm{MLP}(x_i^s; \theta_{mlp})$. (2)

Here, $\theta_{mlp}$ denotes the trainable parameters, and $\mathbb{1}(\cdot)$ acts as an indicator function that returns 1 if the argument is true and 0 otherwise. It is important to note that $p_c^s$ represents the prototype code that considers all training data from $\mathcal{D}_s$. During the training process, $p_c^s$ is initialized using the prototype code learned from the first epoch. Subsequently, in each successive epoch, $p_c^s$ is updated based on the sampled data.

Subsequently, we obtain pseudo-labels for the target data using a nearest-source-prototype approach. To be precise, the pseudo-label for the $j$-th target sample is obtained through the following process:

$\hat{y}_j^t = \arg\max_c \cos(f_j^t, p_c^s)$, (3)

where $\cos(\cdot)$ signifies the cosine similarity function. After obtaining pseudo-labels for the target domain samples, we can proceed to deduce the target prototype codes, applying a method similar to that used for the source prototypes.
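As a concrete illustration of Eqs. (1)–(3), the prototype computation and nearest-prototype pseudo-labeling can be sketched in PyTorch as follows; the scatter-based class mean and all function names are our own choices, not code from the paper's released implementation:

```python
import torch
import torch.nn.functional as F

def source_prototypes(feats_s, labels_s, num_classes):
    """Eq. (1): class-wise mean of source features, L2-normalized."""
    proto = torch.zeros(num_classes, feats_s.size(1))
    proto.index_add_(0, labels_s, feats_s)                   # per-class sums
    counts = torch.bincount(labels_s, minlength=num_classes).clamp(min=1)
    return F.normalize(proto / counts.unsqueeze(1), dim=1)   # unit hypersphere

def pseudo_labels(feats_t, proto_s):
    """Eq. (3): nearest source prototype under cosine similarity."""
    sim = F.normalize(feats_t, dim=1) @ proto_s.t()
    return sim.argmax(dim=1)
```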
For the target domain, specifically, the prototype code of the $c$-th category is determined by the following process:

$p_c^t = \frac{\sum_{j}^{n_t} f_j^t \mathbb{1}(\hat{y}_j^t = c)}{\sum_{j}^{n_t} \mathbb{1}(\hat{y}_j^t = c)}, \quad p_c^t \leftarrow \frac{p_c^t}{\|p_c^t\|_2}$. (4)

Likewise, $p_c^t$ undergoes updates during each epoch. We leverage prototypes from both the source and target domains to facilitate prototype adaptation through contrastive learning. Specifically, within the shared feature representation space, we possess $C$ prototypes from the source domain, denoted as $\{p_1^s, \ldots, p_C^s\}$, as well as an analogous set of $C$ prototypes from the target domain, represented as $\{p_1^t, \ldots, p_C^t\}$. Our approach designs a Prototype Contrastive Loss (PCL), which is formulated as follows:

$\mathcal{L}_{PCL} = \frac{1}{C} \sum_{c=1}^{C} -\log \frac{\exp((p_c^s)^T p_c^t / \tau)}{\exp((p_c^s)^T p_c^t / \tau) + \sum_{i=1, i \neq c}^{C} \exp((p_c^s)^T p_i^t / \tau)}$ (5)

$\approx \underbrace{\frac{1}{C} \sum_{c=1}^{C} -\frac{(p_c^s)^T p_c^t}{\tau}}_{\text{prototypical alignment}} + \underbrace{\frac{1}{C} \sum_{c=1}^{C} \log \sum_{i=1, i \neq c}^{C} \exp\!\left(\frac{(p_c^s)^T p_i^t}{\tau}\right)}_{\text{prototypical uniformity}}$. (6)

Based on the analysis presented in ProPos (Huang et al. 2023), it has been determined that Eq. (5) can be approximated through the prototypical alignment and prototypical uniformity outlined in Eq. (6). This theoretical validation supports the simultaneous acquisition of uniform and category conflict-averse feature representations across both domains by our method. Additionally, this strategy implicitly transforms the fundamental relationships among samples into relationships among prototypes. As these learned prototypes offer greater discrimination than the original sample features, they further enhance our subsequent unsupervised domain adaptive hash learning.

Hash Code Learning
To achieve the core essence of hashing, we preserve the inherent relationships among samples in the hash codes. As our prototypical contrastive learning ensures that feature representations are uniformly distributed on a unit hypersphere, we consequently opt to maintain the similarity between samples in this feature space into the hash codes. Firstly, given that we possess the labels from the source domain, a logical step is to utilize them for directly guiding our hash learning process. This can be formalized as follows:

$\mathcal{L}_{rel1} = \|\eta S_s - \cos(H_s, H_s)\|_F^2$. (7)

Here, $\eta$ stands for the scaling factor, and $S_s$ signifies the similarity matrix computed from the source labels. $H_s = [h_1^s, \cdots, h_{n_s}^s] \in \mathbb{R}^{n_s \times r}$ corresponds to the relaxed hash codes of the source domain, which are generated by our HashEncoder. For the $i$-th source sample, $h_i^s$ is computed as follows:

$h_i^s = \mathrm{HashEncoder}(f_i^s; \theta_{hash})$, (8)

where $\theta_{hash}$ denotes the trainable parameters. Secondly, to transfer knowledge from the source domain to the target domain and engage in unsupervised domain adaptive hash learning, we enforce the constraint that similar features in the domain-shared feature representation space of both domains should generate correspondingly similar hash codes. This can be expressed as follows:

$\mathcal{L}_{rel2} = \|\cos(F_s, F_t) - \cos(H_s, H_t)\|_F^2$. (9)

Here, $F_* = [f_1^*, \cdots, f_{n_*}^*] \in \mathbb{R}^{n_* \times d}$ and $H_* = [h_1^*, \cdots, h_{n_*}^*] \in \mathbb{R}^{n_* \times r}$, with $* \in \{s, t\}$. They are produced using $\mathrm{MLP}(\cdot; \theta_{mlp})$ and $\mathrm{HashEncoder}(\cdot; \theta_{hash})$, respectively. By combining Eq. (7) and Eq. (9), we formulate our Relation Preservation Loss (RPL) as follows:

$\mathcal{L}_{RPL} = \gamma \mathcal{L}_{rel1} + (1 - \gamma) \mathcal{L}_{rel2}$, (10)

where $\gamma$ serves as the fusion factor that balances the importance of each term. With this, we successfully integrate the relationships between samples from both domains into our hash learning framework.
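The PCL of Eq. (5) is, in effect, a cross-entropy over cross-domain prototype similarities with the positives on the diagonal, which suggests a compact sketch; the temperature value here is a placeholder, not the paper's setting:

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(proto_s, proto_t, tau=0.5):
    """Eq. (5) as a cross-entropy: the matching target prototype is the
    positive; the C-1 other target prototypes are the negatives."""
    logits = proto_s @ proto_t.t() / tau          # (C, C) similarity matrix
    targets = torch.arange(proto_s.size(0), device=proto_s.device)
    return F.cross_entropy(logits, targets)
```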
To ensure that the relaxed hash codes approximate the binary hash codes, we apply a quantization process to the hash codes of both domains. The formulation of our Quantization Loss (QL) is as follows:

$\mathcal{L}_{QL} = \frac{1}{2}\|B_t - H_t\|_F^2 + \frac{1}{2}\|B_s - H_s\|_F^2$, (11)

where $B_* = \mathrm{sgn}(H_*) = [b_1, \cdots, b_{n_*}] \in \{-1, 1\}^{n_* \times r}$ represents the binary hash codes, and $\mathrm{sgn}(\cdot)$ is the element-wise sign function that returns 1 if the element is positive and $-1$ otherwise. Finally, we establish the overall objective function for our CPH as follows:

$\min \mathcal{L} = \lambda_1 \mathcal{L}_{PCL} + \lambda_2 \mathcal{L}_{QL} + \lambda_3 \mathcal{L}_{RPL}$, (12)

where $\lambda_1, \lambda_2, \lambda_3$ are trade-off parameters.

Out-of-Sample Extension
After undergoing several rounds of iterative optimization, our CPH model is thoroughly trained, resulting in the attainment of optimal network parameters. Subsequently, for any given target query sample, we can employ the trained model to generate its corresponding binary code. This procedure is neatly summarized in the following forward propagation sequence:

$x_q^t \xrightarrow{\text{input}} \mathrm{MLP}(x_q^t) \rightarrow \mathrm{HashEncoder}(x_q^t) \xrightarrow{\text{output}} h_q^t$ (13)

Finally, by employing a straightforward quantization approach, the hash code $b_q^t$ for the given target query sample $x_q^t$ can be derived as follows: $b_q^t = \mathrm{sgn}(h_q^t)$.

Table 1: Cross-domain retrieval performance comparison with baselines on Office-Home and Office-31. The best result in each column is marked in bold. The second best result in each column is underlined.
Method A→R R→A C→R R→C P→R R→P A→D A→W D→A D→W W→A W→D
LSH 11.49 11.45 6.94 7.24 12.24 13.45 16.04 15.35 13.60 43.99 14.67 38.80
ITQ 25.88 25.37 14.83 14.92 26.81 28.19 29.55 28.53 26.83 58.89 25.09 58.00
DSH 9.69 9.67 5.47 5.28 8.49 8.26 16.66 15.09 16.33 41.07 13.58 39.24
SGH 22.93 22.53 13.62 13.51 24.51 25.73 24.98 22.47 22.17 56.36 20.52 53.94
GraphBit 18.18 16.87 11.51 10.81 18.91 21.32 24.48 23.12 22.09 53.82 21.34 51.43
ITQ+ 14.25 9.55 17.61 17.99 15.00 42.29
GTH-g 16.95 17.54 8.46 11.88 17.82 18.57 30.85 18.44 21.99 48.48 20.02 50.23
GTH-h 18.67 16.25 8.39 11.61 19.91 18.82 31.86 18.27 21.62 48.08 19.70 47.58
PWCF 34.57 28.95 24.22 18.42 34.03 34.44 39.78 34.86 35.12 72.91 35.01 67.94
DAPH 21.19 22.28 13.25 12.26 26.61 24.26 29.60 22.94 25.48 60.67 24.31 45.42
PEACE 45.97 42.68 38.72 28.36 53.04 54.39 46.69 48.89 46.91 83.18 46.95 78.82
DANCE 44.53 43.54 39.03 28.87 53.73 55.14 44.78 47.66 46.68 84.75 48.61 78.39
Ours 71.18 63.28 58.65 42.84 71.27 74.77 68.37 60.61 52.84 95.88 60.14 99.90
Imp. ↑25.21 ↑19.74 ↑19.62 ↑13.97 ↑17.54 ↑19.63 ↑21.68 ↑11.72 ↑5.93 ↑11.13 ↑11.53 ↑21.08
Avg. ↑19.29 ↑13.85

Experiment
Experimental Dataset
We conduct experiments on three publicly available datasets to validate the performance of our method, namely Office-Home (Venkateswara et al. 2017b), Office-31 (Saenko et al. 2010), and Digits (Wang et al. 2023a). Office-Home is a dataset collected specifically for the evaluation of domain adaptation algorithms. This dataset comprises images of 65 categories found typically in office and home settings, divided into four domains: Artistic images (A), Clip Art (C), Product images (P), and Real-World images (R). Consistent with previous methods, we conduct experiments on six transferable image retrieval tasks, namely: A→R, R→A, C→R, R→C, P→R, and R→P. The Office-31 dataset serves as a vital resource for researching, evaluating, and comparing solutions to the domain shift problem.
It collects images from three distinct domains: Amazon (A), DSLR (D), and Webcam (W), each containing images belonging to 31 common office environment categories. By randomly selecting a pair of domains as the source and target domains, we conduct experiments across six transferable image retrieval tasks: A→D, A→W, D→A, D→W, W→A, and W→D. Digits consists of two handwritten digit recognition datasets, MNIST (M) (LeCun et al. 1998) and USPS (U) (Hull 1994). They stand as prevalent cross-domain datasets, containing handwritten digits from 0 to 9. In our experiments, the two datasets serve as source and target domains for each other, resulting in two transferable image retrieval tasks: M→U and U→M. Following previous methods (Huang et al. 2020; Wang et al. 2023b), we employ the source domain images along with 90% of the target domain images for training purposes. A random subset of 10% of the target domain images is set aside for testing. In the case of cross-domain retrieval, the database is composed of source domain images, whereas in the case of single-domain retrieval, the database is constituted by target domain images. All the above experimental datasets are kindly organized and shared by PWCF.

Baseline Method and Evaluation Metric
We compare our CPH with several state-of-the-art methods, comprising five unsupervised hashing methods and seven transfer hashing methods. The unsupervised methods include LSH (Gionis, Indyk, and Motwani 1999), ITQ (Gong et al. 2013), DSH (Jin et al. 2014), SGH (Jiang and Li 2015) and GraphBit (Wang et al. 2023c). The transfer hashing methods include ITQ+ (Zhou et al. 2018), GTH-g (Liu and Zhang 2019), GTH-h (Liu and Zhang 2019), PWCF (Huang et al. 2020), DAPH (Huang, Zhang, and Gao 2022), PEACE (Wang et al. 2023a) and DANCE (Wang et al. 2023b). The former four are shallow transfer hashing methods, while the latter three and ours are deep transfer hashing methods. Due to the lack of privileged knowledge, partial experimental results of the baselines are derived from DAPH, PEACE and DANCE. We use mean Average Precision (mAP), Top-K Precision Curves, and Precision-Recall Curves to measure the quality of the obtained hash codes. For all evaluation metrics, higher values indicate better performance. In the experiments, we repeat our method 5 times and report the mAP results.

Implementation Detail
Our hash learning model features a straightforward network structure.
$\mathrm{MLP}(\cdot; \theta_{mlp})$ comprises one fully connected layer ($d \rightarrow d$), a batch-normalization layer, and a ReLU activation function. In $\mathrm{HashEncoder}(\cdot; \theta_{hash})$, two fully connected layers ($d \rightarrow d' \rightarrow r$) are employed; the former is followed by a batch-normalization layer and a ReLU activation, and the latter is followed by a Tanh activation (a minimal PyTorch sketch of this network is given below). For the implementation of our CPH, we have utilized the open-source PyTorch. The Adam optimizer (Kingma and Ba 2015) is utilized to train the whole framework by a standard back-propagation strategy. Specifically, the learning rate is set to 0.0001, the batch size is set to 256, and the number of training epochs is set to 70 on all three datasets. The trade-off hyper-parameters within the overall objective function are set as {λ1 = 1, λ2 = 1, λ3 = 100}, {λ1 = 0.1, λ2 = 0.01, λ3 = 0.1}, and {λ1 = 0.01, λ2 = 0.01, λ3 = 0.01} on Office-Home, Office-31 and Digits, respectively. η is set to 1.1 and γ is set to 0.9.

Table 2: Partial performance comparison of cross-domain retrieval on Office-Home and Office-31 with varying bits.
Method Bit A→R R→A C→R R→C P→R R→P A→D A→W D→A D→W W→A W→D
GTH-g 16 10.20 9.51 6.04 5.90 10.84 11.08 26.25 11.85 15.76 34.40 16.14 31.79
GTH-g 32 13.08 13.93 7.86 9.52 15.28 16.17 28.35 15.76 21.15 41.36 19.23 42.86
GTH-g 128 16.51 19.52 8.53 13.92 20.81 21.24 31.68 20.55 21.93 50.09 20.17 53.54
GTH-h 16 9.54 8.18 6.17 6.30 11.32 10.81 24.86 11.94 19.02 34.15 14.66 40.58
GTH-h 32 13.43 12.67 7.77 8.97 15.71 15.36 24.65 15.56 20.98 41.67 17.97 42.33
GTH-h 128 13.78 19.73 8.57 14.54 21.16 20.16 32.01 22.16 22.56 51.62 21.55 51.57
DAPH 16 11.92 14.46 8.16 8.12 17.11 14.37 22.46 15.94 19.69 52.39 19.44 34.01
DAPH 32 17.72 19.63 10.48 10.64 22.47 20.25 25.15 19.09 21.99 54.28 22.00 36.58
DAPH 128 22.27 23.78 14.32 13.39 28.25 25.34 32.90 27.49 29.11 64.25 26.58 47.59
Ours 16 62.19 53.41 53.47 35.64 64.18 66.65 59.54 52.54 43.47 90.86 55.53 98.33
Ours 32 67.87 60.16 56.71 39.71 68.31 71.72 64.00 56.33 49.34 95.04 59.18 99.49
Ours 128 72.17 64.62 53.25 42.71 72.89 75.50 69.20 61.18 54.89 97.54 61.49 99.99

Figure 2: The Precision-Recall Curves and Top-K Precision Curves of GTH-g, GTH-h, DAPH and CPH.

Evaluation on Cross-Domain Retrieval
To demonstrate the superiority of our method on cross-domain retrieval, we conduct experiments to compare the retrieval performance of our method and all baseline methods on Office-Home and Office-31 when the hash code length is fixed at 64 bits. The mAP results are presented in Table 1. Based on these experimental findings, we have made the following observations and analyses: compared to all baseline methods, our CPH achieves significant performance improvements across all transferable tasks on both datasets. Specifically, on Office-Home, our CPH outperforms the second-best results by 25.21%, 19.74%, 19.62%, 13.97%, 17.54%, and 19.63% for the different cross-domain tasks, respectively.
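Before turning to the remaining results, here is the minimal PyTorch sketch of the two-module network described in the Implementation Detail paragraph above; the dimensions d, d_prime, r are placeholders, and the class name is ours:

```python
import torch.nn as nn

class CPHNet(nn.Module):
    """Shared MLP followed by the HashEncoder, as described above."""
    def __init__(self, d, d_prime, r):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d, d), nn.BatchNorm1d(d), nn.ReLU())
        self.hash_encoder = nn.Sequential(
            nn.Linear(d, d_prime), nn.BatchNorm1d(d_prime), nn.ReLU(),
            nn.Linear(d_prime, r), nn.Tanh())

    def forward(self, x):
        f = self.mlp(x)              # feature in the domain-shared space
        h = self.hash_encoder(f)     # relaxed hash code; sign(h) binarizes it
        return f, h
```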
The average mAP improvement across the six cross-domain tasks is 19.29%. On Office-31, our CPH surpasses the second-best results by 21.68%, 11.72%, 5.97%, 11.13%, 11.52%, and 21.08% for the six cross-domain tasks, respectively. The average mAP improvement across the six tasks is 13.85%. To further validate the effectiveness of our proposed method in cross-domain retrieval, we conduct a comparative analysis with several cross-domain retrieval hashing methods (namely GTH-g, GTH-h, and DAPH) for which we are able to obtain the original source codes. The mAP results for the different methods on the two datasets with varying hash code lengths (16 bits, 32 bits, and 128 bits) are shown in Table 2. From these results, it is evident that our CPH approach achieves significantly superior performance compared to these methods. Furthermore, in Figure 2, we present the Top-K Precision Curves and Precision-Recall Curves of GTH-g, GTH-h, DAPH, and our method for the three cross-domain tasks across the three datasets, specifically using 64-bit hash codes. These curves further underscore the superiority of our CPH method in the realm of cross-domain retrieval. In the case of the Top-K Precision Curves, as the number of retrieved images increases, our method consistently outperforms the other baselines across all three cross-domain tasks. The Precision-Recall Curves show that, across all scenarios, the area under the curve of our CPH method surpasses that of the competing approaches.

Evaluation on Single-Domain Retrieval
To demonstrate the effectiveness of our method in single-domain retrieval, we conduct experiments to compare the retrieval performance of our CPH approach with that of all baseline methods.

Table 3: Single-domain retrieval performance comparison with baselines. The best result in each column is marked in bold. The second best result in each column is underlined.
Task P→R A→D M→U
Bit 16 32 64 128 16 32 64 128 16 32 64 128
LSH 5.84 10.62 17.57 24.92 16.04 26.18 39.68 49.04 26.69 33.49 35.64 38.53
ITQ 20.07 29.64 33.15 34.81 40.83 49.27 56.16 59.41 13.39 22.58 39.67 40.16
DSH 6.10 11.44 16.61 14.45 22.45 33.38 40.09 46.31 41.42 45.30 47.85 50.76
SGH 18.97 26.18 32.61 34.97 38.67 45.59 53.57 57.37 15.60 30.78 35.55 41.78
GraphBit 15.42 21.80 24.89 28.97 33.21 41.17 51.46 53.48 24.96 32.54 37.54 44.82
ITQ+ 15.60 20.60 24.96 24.05 35.03 42.62 43.12 39.12 50.22 49.66 44.38 43.21
GTH-g 15.05 21.20 27.67 28.40 37.11 45.69 50.22 55.81 45.41 39.72 34.34 34.73
GTH-h 13.37 22.03 26.40 28.99 39.88 46.60 50.74 54.73 43.38 40.09 34.14 32.80
DAPH 20.77 29.01 33.35 34.92 46.74 49.43 58.63 60.41 47.53 54.86 60.15 60.39
PEACE 28.99 37.93 42.97 47.29 55.43 57.89 61.21 64.14 52.77 56.25 65.27 69.99
DANCE 31.37 37.64 44.13 48.93 54.42 58.02 63.09 67.91 52.65 55.98 66.81 70.47
Ours 44.99 49.35 52.45 51.40 60.60 62.11 65.76 68.20 66.76 71.34 72.64 72.46
Imp. ↑13.62 ↑11.42 ↑8.32 ↑2.47 ↑5.17 ↑4.09 ↑2.67 ↑0.29 ↑12.57 ↑15.09 ↑5.83 ↑1.99
Avg. Imp. ↑8.96 ↑3.05 ↑8.87

Table 4: Ablation study results on Office-Home.
Variant A→R R→A C→R R→C P→R R→P
CPH-v1 4.51 5.59 2.93 3.11 5.83 5.90
CPH-v2 67.26 61.51 20.79 31.33 70.59 70.38
CPH-v3 62.94 58.45 50.78 37.97 64.20 65.14
CPH-v4 4.53 2.71 3.01 3.00 6.13 12.55
CPH-v5 35.50 31.31 27.63 22.66 40.42 42.50
Ours 71.18 63.28 58.65 42.84 71.27 74.77
We present the mAP results in Table 3 for three tasks (P→R, A→D, M→U) across the Office-Home, Office-31, and Digits datasets, with hash code lengths varying from 16 to 128 bits. From the experimental results, it is evident that, similar to cross-domain retrieval, CPH achieves significant improvements in the various tasks. The average improvements in the three single-domain retrieval tasks are 8.96%, 3.05%, and 8.87%, respectively.

Ablation Study
To clearly understand the influence of each component in CPH, we design several variants to further evaluate our model. CPH-v1 denotes the variant that does not transfer any source knowledge. CPH-v2 denotes the variant that removes the Prototype Contrastive Loss $\mathcal{L}_{PCL}$. CPH-v3 denotes the variant that removes the Quantization Loss $\mathcal{L}_{QL}$. CPH-v4 denotes the variant that removes the Relation Preservation Loss $\mathcal{L}_{RPL}$. CPH-v5 denotes the variant that does not preserve source domain knowledge. The experiments are conducted on Office-Home with the hash code length fixed at 64 bits, and the corresponding results are reported in Table 4. We can easily observe that transferring knowledge from the source domain makes a significant contribution to improving model performance. Moreover, without the preservation of the semantic relations of images, the performance of the model suffers an obvious reduction. Thus, all components of our method contribute to the improvements of CPH.

Figure 3: The t-SNE visualizations of 64-bit hash codes on Office-Home.

Visualization
Figure 3 gives the t-SNE analyses on the Office-Home dataset. The figures show that the hash codes generated by our CPH exhibit more distinct structures compared to GTH-g and DAPH. This phenomenon underscores the efficacy of the proposed CPH method in generating more discriminative binary codes, thereby enhancing the success of unsupervised domain adaptive image retrieval.

Conclusion
While admiring the merits of domain adaptation and illuminating the challenges inherent in current domain adaptive hashing methods, we propose a simple yet effective Comparative Prototype Hashing (CPH) approach. This method substantially boosts the performance of unsupervised domain adaptive hashing across both cross-domain and single-domain retrieval scenarios. It performs a cohesive collaboration between discriminative feature representation learning, domain alignment, and hash code learning on a domain-shared unit hypersphere space. Comprehensive experimentation conducted on three widely recognized domain adaptation benchmarks validates the superior performance of the proposed CPH method.

Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants 62172263, in part by the CCF-Baidu Open Fund under Grant CCF-BAIDU OF2022008, in part by the Key Project of Quancheng Provincial Laboratory of Shandong under Grant QCLZD202303, in part by the Shandong Provincial Natural Science Foundation of China under Grant ZR2022MF295, in part by the Fundamental Research Promotion Plan of Qilu University of Technology (Shandong Academy of Sciences) under Grant 2021JC02020, in part by the Pilot Project for Integrated Innovation of Science, Education and Industry of Qilu University of Technology (Shandong Academy of Sciences) under Grant 2022JBZ01-01, in part by the Research Project of Provincial Laboratory of Shandong, China under Grant SYS202201.
References Cui, H.; Zhu, L.; Li, J.; Cheng, Z.; and Zhang, Z. 2021. Twopronged Strategy: Lightweight Augmented Graph Network Hashing for Scalable Image Retrieval. In MM, 1432–1440. Cui, H.; Zhu, L.; Li, J.; Yang, Y.; and Nie, L. 2020. Scalable Deep Hashing for Large-Scale Social Image Retrieval. TIP, 29: 1271–1284. Du, Z.; Li, J.; Su, H.; Zhu, L.; and Lu, K. 2021. CrossDomain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation. In CVPR, 3937–3946. Ganin, Y.; and Lempitsky, V. S. 2015. Unsupervised Domain Adaptation by Backpropagation. In ICML, 1180–1189. Gionis, A.; Indyk, P.; and Motwani, R. 1999. Similarity Search in High Dimensions via Hashing. In VLDB, 518– 529. Gong, Y.; Lazebnik, S.; Gordo, A.; and Perronnin, F. 2013. Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval. TPAMI, 35(12): 2916–2929. He, T.; Li, Y.; Gao, L.; Zhang, D.; and Song, J. 2019. One Network for Multi-Domains: Domain Adaptive Hashing with Intersectant Generative Adversarial Networks. In IJCAI, 2477–2483. Huang, F.; Zhang, L.; and Gao, X. 2022. Domain Adaptation Preconceived Hashing for Unconstrained Visual Retrieval. TNNLS, 33(10): 5641–5655. Huang, F.; Zhang, L.; Yang, Y.; and Zhou, X. 2020. Probability Weighted Compact Feature for Domain Adaptive Retrieval. In CVPR, 9579–9588. Huang, Z.; Chen, J.; Zhang, J.; and Shan, H. 2023. Learning Representation for Clustering via Prototype Scattering and Positive Sampling. TPAMI, 45(6): 7509–7524. Hull, J. J. 1994. A Database for Handwritten Text Recognition Research. TPAMI, 16(5): 550–554. Jiang, Q.; and Li, W.-J. 2015. Scalable Graph Hashing with Feature Transformation. In IJCAI, 2248–2254. Jin, Z.; Li, C.; Lin, Y.; and Cai, D. 2014. Density Sensitive Hashing. TCYB, 44(8): 1362–1371. Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In ICLR, 1–15. Lai, H.; Pan, Y.; Liu, Y.; and Yan, S. 2015. Simultaneous Feature Learning and Hash Coding with Deep Neural Networks. In CVPR, 3270–3278. LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based Learning Applied to Document Recognition. PROC, 86(11): 2278–2324. Li, J.; Du, Z.; Zhu, L.; Ding, Z.; Lu, K.; and Shen, H. T. 2022. Divergence-Agnostic Unsupervised Domain Adaptation by Adversarial Attacks. TPAMI, 44(11): 8196–8211. Li, W.-J.; Wang, S.; and Kang, W. 2016. Feature Learning based Deep Supervised Hashing with Pairwise Labels. In IJCAI, 1711–1717. Liu, J.; and Zhang, L. 2019. Optimal Projection Guided Transfer Hashing for Image Retrieval. In AAAI, 8754–8761. Long, M.; Cao, Y.; Wang, J.; and Jordan, M. I. 2015. Learning Transferable Features with Deep Adaptation Networks. In ICML, 97–105. Long, M.; Zhu, H.; Wang, J.; and Jordan, M. I. 2016. Unsupervised Domain Adaptation with Residual Transfer Networks. In NeurIPS, 136–144. Long, M.; Zhu, H.; Wang, J.; and Jordan, M. I. 2017. Deep Transfer Learning with Joint Adaptation Networks. In ICML, 2208–2217. Lu, X.; Zhu, L.; Cheng, Z.; Li, J.; Nie, X.; and Zhang, H. 2019. Flexible Online Multi-modal Hashing for Large-scale Multimedia Retrieval. In MM, 1129–1137. Luo, X.; Wu, D.; Ma, Z.; Chen, C.; Deng, M.; Ma, J.; Jin, Z.; Huang, J.; and Hua, X. 2021. CIMON: Towards Highquality Hash Codes. In IJCAI, 902–908. Pan, S. J.; and Yang, Q. 2010. A Survey on Transfer Learning. TKDE, 22(10): 1345–1359. Qiu, Z.; Su, Q.; Ou, Z.; Yu, J.; and Chen, C. 2021. Unsupervised Hashing with Contrastive Information Bottleneck. In IJCAI, 959–965. Saenko, K.; Kulis, B.; Fritz, M.; and Darrell, T. 
2010. Adapting Visual Category Models to New Domains. In ECCV, volume 6314, 213–226.
Shen, F.; Shen, C.; Liu, W.; and Shen, H. T. 2015. Supervised Discrete Hashing. In CVPR, 37–45.
Sun, B.; Feng, J.; and Saenko, K. 2016. Return of Frustratingly Easy Domain Adaptation. In AAAI, 2058–2065.
Tzeng, E.; Hoffman, J.; Darrell, T.; and Saenko, K. 2015. Simultaneous Deep Transfer Across Domains and Tasks. In ICCV, 4068–4076.
Tzeng, E.; Hoffman, J.; Saenko, K.; and Darrell, T. 2017. Adversarial Discriminative Domain Adaptation. In CVPR, 2962–2971.
Venkateswara, H.; Eusebio, J.; Chakraborty, S.; and Panchanathan, S. 2017a. Deep Hashing Network for Unsupervised Domain Adaptation. In CVPR, 5385–5394.
Venkateswara, H.; Eusebio, J.; Chakraborty, S.; and Panchanathan, S. 2017b. Deep Hashing Network for Unsupervised Domain Adaptation. In CVPR, 5385–5394.
Wang, H.; Sun, J.; Luo, X.; Xiang, W.; Zhang, S.; Chen, C.; and Hua, X. 2023a. Toward Effective Domain Adaptive Retrieval. TIP, 32: 1285–1299.
Wang, H.; Sun, J.; Wei, X.; Zhang, S.; Chen, C.; Hua, X.; and Luo, X. 2023b. DANCE: Learning A Domain Adaptive Framework for Deep Hashing. In WWW, 3319–3330.
Wang, J.; Liu, W.; Kumar, S.; and Chang, S. 2016. Learning to Hash for Indexing Big Data - A Survey. PIEEE, 104(1): 34–57.
Wang, J.; Zeng, Z.; Chen, B.; Dai, T.; and Xia, S. 2022. Contrastive Quantization with Code Memory for Unsupervised Image Retrieval. In AAAI, 2468–2476.
Wang, J.; Zhang, T.; Song, J.; Sebe, N.; and Shen, H. T. 2018. A Survey on Learning to Hash. TPAMI, 40(4): 769–790.
Wang, M.; and Deng, W. 2018. Deep Visual Domain Adaptation: A Survey. Neurocomputing, 312: 135–153.
Wang, Z.; Xiao, H.; Duan, Y.; Zhou, J.; and Lu, J. 2023c. Learning Deep Binary Descriptors via Bitwise Interaction Mining. TPAMI, 45(2): 1919–1933.
Xia, H.; Jing, T.; Chen, C.; and Ding, Z. 2021. Semi-supervised Domain Adaptive Retrieval via Discriminative Hashing Learning. In MM, 3853–3861.
Xu, R.; Liu, P.; Wang, L.; Chen, C.; and Wang, J. 2020. Reliable Weighted Optimal Transport for Unsupervised Domain Adaptation. In CVPR, 4393–4402.
Zhang, J.; Li, W.; and Ogunbona, P. 2017. Joint Geometrical and Statistical Alignment for Visual Domain Adaptation. In CVPR, 5150–5158.
Zhao, F.; Huang, Y.; Wang, L.; and Tan, T. 2015. Deep Semantic Ranking based Hashing for Multi-label Image Retrieval. In CVPR, 1556–1564.
Zhou, J. T.; Zhao, H.; Peng, X.; Fang, M.; Qin, Z.; and Goh, R. S. M. 2018. Transfer Hashing: From Shallow to Deep. TNNLS, 29(12): 6191–6201.
Zhu, L.; Lu, X.; Cheng, Z.; Li, J.; and Zhang, H. 2020. Deep Collaborative Multi-View Hashing for Large-Scale Image Search. TIP, 29: 4643–4655.
Zhu, L.; Zheng, C.; Guan, W.; Li, J.; Yang, Y.; and Shen, H. T. 2024. Multi-Modal Hashing for Efficient Multimedia Retrieval: A Survey. TKDE, 36(1): 239–260.
2024
926
18,769
Modeling Knowledge Graphs with Composite Reasoning
Wanyun Cui, Linqiu Zhang
Shanghai University of Finance and Economics
[email protected], [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
The ability to combine multiple pieces of existing knowledge to infer new knowledge is both crucial and challenging. In this paper, we explore how facts of various entities are combined in the context of knowledge graph completion (KGC). We use composite reasoning to unify the views from different KGC models, including translational models, tensor factorization (TF)-based models, instance-based learning models, and KGC regularizers. Moreover, our comprehensive examination of composite reasoning revealed an unexpected phenomenon: certain TF-based models learn embeddings with erroneous composite reasoning, which ultimately violates their fundamental collaborative filtering assumption and reduces their effects. This motivates us to reduce their composition error. Empirical evaluations demonstrate that mitigating the composition risk not only enhances the performance of TF-based models across all tested settings, but also surpasses or is competitive with the state-of-the-art performance on two out of four benchmarks. Our code, data and supplementary material are available at https://github.com/zlq147/CompilE.

1 Introduction
Diverse paradigms have been developed for knowledge graph modeling, including translation models (Bordes et al. 2013; Sun et al. 2019; Zhang et al. 2020; Lin et al. 2015), tensor factorization models (Hitchcock 1927; Trouillon et al. 2016; Yang et al. 2015), instance-based learning (Cui and Chen 2022), and KGC regularizers (Zhang, Cai, and Wang 2020). Given the diversity of different KGC forms, it is crucial to provide a unified understanding of them. Firstly, this aids in a deeper understanding of the principles and application domains of each method. Secondly, it motivates new algorithmic innovations. To this end, we propose a novel paradigm for representing knowledge graphs: composite reasoning.

Our motivation for adopting composite reasoning in knowledge graph modeling is straightforward. We aim to leverage the known facts about other entities to predict the target entity. For example, consider a knowledge graph with the composition Alphabet = Google + DeepMind + ⋯. If we know the fact (Google, employee, Jeff Dean), we can infer (Alphabet, employee, ?) = Jeff Dean.

Composite reasoning unifies several existing paradigms for knowledge graph modeling, such as translation models, tensor factorization models, and instance-based learning models, as well as knowledge graph regularization methods like DURA (Zhang, Cai, and Wang 2020). We show how composite reasoning works in Fig. 1. The results provide novel insights into interpreting and comparing different KGC models.

Through a comparative analysis of different KGC models from the viewpoint of composite reasoning, we have discovered an anomalous characteristic of tensor factorization (TF) models: a query can be decomposed into several entities that are completely unrelated to the query entity (see Fig. 1 and Table 1). This finding unveils a fundamental issue with traditional factorization-based approaches, namely, the learned embeddings may violate the collaborative filtering assumption due to erroneous knowledge composition. More details of the comparison can be found in Sec 3.5.
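The introductory example can be checked numerically. The toy sketch below builds a DistMult-style trilinear score, composes a head embedding from two others, and verifies that the query score decomposes accordingly; all names, dimensions, and the least-squares recovery of the weights are our illustrative choices, not the paper's method:

```python
import torch

torch.manual_seed(0)
d = 16
heads = torch.randn(5, d)                    # candidate composing entities h_i
h = 0.7 * heads[0] + 0.3 * heads[1]          # head entity with a known composition
r, t = torch.randn(d), torch.randn(d)

# Recover the composition weights by least squares (exact here, since h lies
# in the span of `heads`).
alpha = torch.linalg.lstsq(heads.t(), h.unsqueeze(1)).solution.squeeze(1)

def score(h, r, t):
    return (h * r * t).sum()                 # DistMult-style trilinear score

direct = score(h, r, t)
composite = sum(a * score(hi, r, t) for a, hi in zip(alpha, heads))
print(direct.item(), composite.item())       # agree up to numerical error
```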
To address the erroneous knowledge composition problem in TF-based models, we propose a measure to mitigate and reduce the resulting generalization risk. In this paper, we refer to this risk as composite risk. Measuring and reducing the composite risk poses challenges, as obtaining ground truth for knowledge composition is hard. One of our key observations is that we can relax the definition of low-risk entities to neighbor entities, thereby obtaining a lower bound for the composite risk. Our experiments demonstrate a strong correlation between prediction quality and the approximated composition risk (see Sec 4.4).

Comparison with other KGC explanations. The embedding spaces of many existing KGC models are designed according to how humans explain knowledge. For example, translational models usually explicitly represent inverse/symmetric/transitive relations via embedding translations. Tensor factorization-based models conform to the low-rank assumption of real-world knowledge. However, these explanations are usually only from an intra-triple perspective, i.e., explaining a single triple fact. The composite reasoning-based explanation provides a novel inter-triple view to explain the interactions among different facts.

[Figure 1: How composite reasoning unifies and interprets different KGC models. For each model, we show the top 8 entities from the perspective of composite reasoning for the query (Mexico, official language, ?) in FB15k-237; panels: (a) ComplEx, (b) CP, (c) TransE, (d) RotatE, (e) CIBLE, (f) DURA. For TransE, RotatE, and DURA, a smaller α_i indicates higher composite dependency; for the other models, a higher α_i indicates higher composite dependency. For ComplEx and CP, entities that are intuitively unrelated for humans are marked in red.]

The main contributions of this paper include: (1) We propose a novel composite reasoning perspective to unify different KGC models. (2) We compare different modeling approaches under the framework of composite reasoning and uncover the anomalous knowledge composition in TF-based models. (3) We quantify how errors in a tensor factorization model's decomposition affect its generalization capability. We optimize the TF models by approximating and reducing the composite risk.

2 The Composite Reasoning Framework for Knowledge Graph Completion
In this section, we present the formulation of the KGC problem and demonstrate how it can be represented within the composite reasoning framework.

Table 1: Composite rationality of different models.
ComplEx   CP        TransE   RotatE   CIBLE   DURA
0.073 ↓   0.084 ↓   0.197    0.207    0.211   0.191

Knowledge Graph Completion: A knowledge graph is a collection of facts represented as triples of the form (head, relation, tail), denoted as KG = {(h_i, r_i, t_i)}_{i=1}^{N}. As the available facts in the knowledge graph are incomplete, a common task for evaluating knowledge graph representations is knowledge graph completion.
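To make the setting concrete, here is a minimal sketch of a knowledge graph as a set of triples and of answering a completion query from known facts alone. The entities and facts are illustrative toys, not drawn from the benchmarks used later.

import itertools

# A toy knowledge graph as a set of (head, relation, tail) triples.
KG = {
    ("Google", "employee", "Jeff Dean"),
    ("Google", "headquarters", "Mountain View"),
    ("Google", "CEO", "Sundar Pichai"),
    ("Alphabet", "CEO", "Sundar Pichai"),
}

def known_tails(kg, h, r):
    """Known answers to the completion query (h, r, ?)."""
    return {t for (hh, rr, t) in kg if hh == h and rr == r}

print(known_tails(KG, "Google", "employee"))  # {'Jeff Dean'}
print(known_tails(KG, "Alphabet", "headquarters"))  # empty: this is what KGC must infer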
In this paper, we approach the task as a link prediction problem, which involves predicting missing values for queries of the form (h, r, ?) or (?, r, t).

The Composite Reasoning Framework. In this framework, we utilize the notation score(h, r, t) to represent the plausibility of a triple, such that the prediction for (h, r, ?) is the t with the highest plausibility. To illustrate composite reasoning, consider the example score(Alphabet, employee, ?) = score(Google, employee, ?) + score(DeepMind, employee, ?) + · · ·. In order to effectively represent this composite reasoning, it must satisfy the following condition:

∀t, score(Alphabet, employee, t) = score(Google, employee, t) + score(DeepMind, employee, t)    (1)

Building upon this example, we formally define composite reasoning as the process of combining known facts about other entities to model the target entity. Specifically, given a query (h, r, ?), the composite reasoning framework is formulated as:

∀t, score(h, r, t) = Σ_{(h_i, r, t) ∈ KG} α_i · score(h_i, r, t)    (2)

Here, α_i represents the weight assigned to the i-th entity h_i, and the constraint (h_i, r, t) ∈ KG ensures that the prediction relies on known facts. In Sec 3, we will demonstrate how different models can be explained using different α_i values within this framework.

3 Unifying KGC via Composite Reasoning
In this section, we explain how to use the composite reasoning framework to unify different KGC models, including TF-based models (Sec 3.1), translational models (Sec 3.2), instance-based learning models (Sec 3.3), and the DURA regularizer (Sec 3.4).

3.1 Explaining TF Models
Tensor Factorization (TF)-based models are a widely studied class of knowledge graph embedding models. The basic idea is to represent a triple as a high-dimensional tensor. TF models approximate the tensor by decomposing it into the product of tensors corresponding to entities and relations. More formally, a triple (h, r, t) is encoded into e(h, r, t) ∈ R^d using:

e(h, r, t) = h ⊗ r ⊗ t    (3)

where h, r, t ∈ R^d represent the tensors for the corresponding head, relation, and tail, and ⊗ denotes the product in the Euclidean space (CP (Hitchcock 1927), DistMult (Yang et al. 2015)) or the complex space (ComplEx (Trouillon et al. 2016)). The plausibility of a fact is modeled as the sum of values across all its dimensions:

score(h, r, t) = Σ_{i=1}^{d} e(h, r, t)_i    (4)

Compositional View. We use the composite reasoning framework to represent TF models based on their linearity. Specifically, for a given entity h in the knowledge graph, we represent it as a linear combination of other entities:

h = Σ_i a_i h_i + ∆    (5)

where a_i is the weight of h_i, and ∆ is the residual. Since TF is linear, the linear decomposition of h in Eq. (5) also determines its generalization to unknown relations. This allows us to model the relationship between entity composition and model generalization. Specifically, we use this composition to transform the model's prediction of (h, r, ?) into a combination of known facts from the knowledge graph:

h = Σ_{h_i ∈ KG(r)} a_i h_i + ∆    (6)

where KG(r) denotes the set of entities whose relation r is known in the knowledge graph, i.e., KG(r) = {h_i | ∃t, (h_i, r, t) ∈ KG}. Then, we can transform the representation of (h, r, ?) into a combination of known facts from the knowledge graph:

∀t, e(h, r, t) = Σ_{h_i ∈ KG(r)} a_i e(h_i, r, t) + e(∆, r, t)    (7)

When explaining TF under the composite reasoning framework, we have:

α_i^(TF) = a_i    s.t.    h = Σ_{h_i ∈ KG(r)} a_i h_i + ∆    (8)
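As an illustration of Eqs. (3)-(5), the following sketch scores triples with a DistMult-style real-valued product and fits the composition weights a_i by ordinary least squares. The random embeddings are stand-ins for trained parameters, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
d, n_entities = 8, 5
E = rng.normal(size=(n_entities, d))   # entity embeddings (illustrative stand-ins)
r = rng.normal(size=d)                 # relation embedding

def score(h_vec, r_vec, t_vec):
    # Eq. (4) for a real-valued product model: sum over dimensions of h * r * t
    return float(np.sum(h_vec * r_vec * t_vec))

# Eq. (5): express entity 0 as a linear combination of the other entities.
h = E[0]
others = E[1:]                           # rows h_1, ..., h_{n-1}
a, *_ = np.linalg.lstsq(others.T, h, rcond=None)
residual = h - others.T @ a              # the Delta term of Eq. (5)
print("weights a:", np.round(a, 3))
print("residual norm:", np.linalg.norm(residual))

The norm of this residual, relative to the target, is exactly what the residual ratio of Eq. (9) below measures.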
Representation Capability of the Composition. What connects the composition of entities with model generalization is that any query (h, r, ?) can be represented by the facts of known entities. To establish such connections, we want to minimize the impact of the residual term e(∆, r, t). We measure the capability of the entity composition by the residual ratio:

residual ratio = min_a ||e(∆, r, t)|| / ||e(h, r, t)||    (9)

In large-scale knowledge graphs, the number of entities for a given relation is always greater than the dimension of the entity embeddings (i.e., |KG(r)| > d). For example, in WN18RR, the mean of |KG(r)| is 3722, while d is usually set to 500 or 2000. This means that we can always find a decomposition a with residual ratio = 0 for large-scale knowledge graphs. For smaller datasets, the effect of the residual is more significant. We empirically analyze the residual ratio in Sec 5.3.

3.2 Explaining Translational Models
Translational models treat r as a translation in the entity embedding space. The score function is defined as

score(h, r, t) = ||trans(h, r) − t||    (10)

where trans(h, r) is the translated embedding of h for relation r. For example, TransE (Bordes et al. 2013) defines the translation function as trans_TransE(h, r) = h + r. RotatE (Sun et al. 2019) is another well-known translational model, which considers the translation as a rotation in the complex space: trans_RotatE(h, r) = h ◦ r.

Compositional View. For the query (h, r, ?), we assume that at least one entity already contains the target t of relation r, that is, ∃h_i, (h_i, r, t) ∈ KG. For example, when predicting (Alphabet, employee, ?) = Jeff Dean, we assume that a known fact about employee-Jeff Dean is already in the training knowledge graph (e.g., (Google, employee, Jeff Dean)). It is noteworthy that one-to-one relations cannot be represented under this assumption. We also assume that the high expressiveness of high-dimensional neural networks leads to very low training loss:

∀(h, r, t) ∈ KG, ||trans(h, r) − t|| = 0    (11)

Given the aforementioned assumptions, for any query (h, r, ?) we can establish that for every candidate answer t there exists (h_i, r, t) ∈ KG such that ||trans(h_i, r) − t|| = 0. Therefore, we have:

score(h, r, t) = ||trans(h, r) − trans(h_i, r)||    (12)

Taking it further, we use h_i to express the prediction results of the translational model. According to Eq. (11) and Eq. (12), the top-k tail entities can be represented by:

topk_t score(h, r, t) = argmin^k_t ||trans(h, r) − trans(h(r, t), r)||    (13)

where h(r, t) denotes a head entity h whose relation r has tail t in the known KG, i.e., (h(r, t), r, t) ∈ KG. Based on Eq. (13), we use ||trans(h, r) − trans(h(r, t), r)|| to align translational models with the composite reasoning framework:

α_i^(TRANS) = ||trans(h, r) − trans(h_i, r)||    (14)

3.3 Explaining Instance-based Learning Models
CIBLE (Cui and Chen 2022) is a recently proposed knowledge graph completion model based on instance-based learning. This model utilizes prototype modeling to represent the knowledge graph. Its scoring function for (h, r, ?) can be formulated as:

score(h, r, t) = β Σ_{(p, r, t) ∈ KB} f_hr(p)    (15)

where β is a coefficient to normalize the score, and f_hr(p) denotes the plausibility of a candidate prototype p:

f_hr(p) = max(γ − ||trans_r(emb(h)) − trans_r(emb(p))||, 0)    (16)

When explaining CIBLE with composite reasoning, we have:

α_i^(CIBLE) = f_hr(h_i)    (17)
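Under toy random embeddings, the following sketch computes the α values of Eq. (14) for a TransE-style translation and the CIBLE-style plausibilities of Eqs. (16)-(17). The margin γ is an arbitrary placeholder; real values come from training.

import numpy as np

rng = np.random.default_rng(1)
d = 8
r = rng.normal(size=d)
h = rng.normal(size=d)
heads = rng.normal(size=(4, d))        # candidate h_i with (h_i, r, t) in KG

def trans(h_vec, r_vec):
    return h_vec + r_vec               # TransE-style translation

# Eq. (14): smaller alpha means stronger composite dependency.
alpha_trans = [np.linalg.norm(trans(h, r) - trans(hi, r)) for hi in heads]

# Eqs. (16)-(17): CIBLE-style plausibility, larger means stronger.
gamma = 3.0                            # placeholder margin
alpha_cible = [max(gamma - np.linalg.norm(trans(h, r) - trans(hi, r)), 0.0)
               for hi in heads]
print(np.round(alpha_trans, 2), np.round(alpha_cible, 2))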
3.4 Explaining the DURA Regularizer
DURA is a recently proposed, effective, and widely-applicable KGC regularizer. Its basic form is:

score(h, r, t) = ||h ⊗ r − t||    (18)

We notice that this form is compatible with the translational model in Eq. (10). Thus, similar to Eq. (14), we represent DURA under the composite reasoning framework:

α_i^(DURA) = ||h ⊗ r − h_i ⊗ r||    (19)

3.5 Understanding and Comparing the Composite Reasoning of KGC Models
In the preceding discussion, we employed composite reasoning to elucidate various KGC models. In this subsection, we provide a more direct understanding of composite reasoning by visualizing how different KGC models combine facts from diverse entities. To illustrate this, we consider the query (Mexico, official language, ?) from the FB15k-237 dataset and present the top eight entities ranked by their corresponding α_i values.

The visualization results show that the composite reasoning framework provides a convincing explanation for the behavior of KGC models. In the majority of cases, the top entities identified by the framework align closely with human intuition. For instance, the TransE model leverages facts about Brazil and Canada, which are highly associated with Mexico, as well as Spain, which shares the same official language as Mexico. These findings demonstrate the effectiveness of the composite reasoning framework in capturing meaningful relationships between entities.

However, the composition results obtained by two TF-based models, CP and ComplEx, yielded unexpected outcomes. The top entities identified exhibit both low relevance to Mexico and different tail entities, such as Turks and Caicos Islands (TC Islands) and Macau. Intuitively, these entities are unlikely to contribute to accurate predictions.

This phenomenon is not a mere coincidence. To further investigate it, we computed the average composite rationality between the top 8 decomposed entities and the query entity for all queries in the test set of FB15k-237. The composite rationality between two entities is measured using the Jaccard coefficient of their corresponding triples. Table 1 presents the results obtained for the different models. Notably, the tensor factorization-based CP and ComplEx models display significantly lower average relevance values compared to the other models. In Sec 4, we delve into the causes behind this phenomenon, discuss its experimental implications, and propose solutions.
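A minimal sketch of the composite rationality measure described above, assuming the Jaccard coefficient is taken over the (relation, tail) facts each entity heads; the exact fact representation used in the paper may differ.

def head_facts(kg, e):
    """All (relation, tail) facts an entity participates in as head."""
    return {(r, t) for (h, r, t) in kg if h == e}

def composite_rationality(kg, query_entity, decomposed_entities):
    """Mean Jaccard coefficient between the query entity's facts and those
    of its top decomposed entities, as reported in Table 1."""
    q = head_facts(kg, query_entity)
    scores = []
    for e in decomposed_entities:
        n = head_facts(kg, e)
        union = q | n
        scores.append(len(q & n) / len(union) if union else 0.0)
    return sum(scores) / len(scores) if scores else 0.0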
4 Modeling and Alleviating Composition Risk for TF-based Models
4.1 Measuring Erroneous Knowledge Composition via Composition Risk
Under the composite reasoning framework, the prediction regarding h is an aggregation of the known facts of other entities. As a result, decomposing into certain entities is more likely to result in generalization errors than decomposing into others. For instance, if the model decomposes Mexico as Mexico = a_1 Panama + a_2 Macau, the predictions for Mexico's official language will use Macau's facts, which is obviously riskier than using Panama's facts. Therefore, we aim to identify and mitigate the impact of entities that pose a higher risk of generalization errors. Furthermore, unlike the decomposition into the h_j, the residual term e(∆, r, t) cannot be represented by the facts of existing entities. We posit that this residual term is also risky. Motivated by this, we propose the concept of composition risk for TF models, which refers to the risk of generalization errors caused by decomposing into riskier entities or the residual.

More formally, when representing the composition of h, we divide the entities into two categories: reliable entities and risky entities. For example, Panama is a reliable entity for Mexico, while Macau is a risky entity. We want the composite reasoning to rely on reliable entities. This is illustrated in Fig. 2. Suppose for (h, r, ?) the composition of h is:

h = Σ_{h_i ∈ reliable(h) ∩ KG(r)} a_i h_i + Σ_{h_j ∈ risky(h) ∩ KG(r)} a_j h_j + ∆    (20)

According to Eq. (7), to make the model's behavior more consistent with the entities in reliable(h), we expect Σ_{h_i ∈ reliable(h)} a_i e(h_i, r, t) to be close to e(h, r, t), and Σ_{h_j ∈ risky(h)} a_j e(h_j, r, t) + e(∆, r, t) to be close to zero. We formalize the composition risk as the ratio associated with the risky composition and the residual:

cr_a(h, r, t) = ||e(h, r, t) − Σ_{h_i ∈ reliable(h) ∩ KG(r)} a_i e(h_i, r, t)|| / ||e(h, r, t)||    (21)

By minimizing this ratio, we effectively reduce the impact of risky decompositions and the residual. It should be noted that for a fixed TF model, there are multiple compositions a for h. As long as there exists an a such that cr_a(h, r, t) is minimized, the model's prediction for (h, r, t) will depend maximally only on the entities in reliable(h), which is the desired outcome. Therefore, we take the a that minimizes cr_a(h, r, t) to define the composition risk.

Definition 4.1 (Composition risk). The composition risk w.r.t. (h, r, t) is defined as:

cr(h, r, t) = min_a cr_a(h, r, t)    (22)
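Since the minimization in Eqs. (21)-(22) is linear in a, a direct least squares sketch looks as follows; the DistMult-style product and random vectors are illustrative assumptions, not trained embeddings.

import numpy as np

def e_vec(h, r, t):
    # Eq. (3) for a real-valued product model
    return h * r * t

def composition_risk(h, r, t, reliable_heads):
    """Eqs. (21)-(22): minimal relative error when expressing e(h,r,t)
    by the e(h_i,r,t) of reliable entities, found via least squares."""
    target = e_vec(h, r, t)
    basis = np.stack([e_vec(hi, r, t) for hi in reliable_heads], axis=1)
    a, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return np.linalg.norm(target - basis @ a) / np.linalg.norm(target)

rng = np.random.default_rng(2)
d = 16
h, r, t = (rng.normal(size=d) for _ in range(3))
reliable = [rng.normal(size=d) for _ in range(5)]
print(composition_risk(h, r, t, reliable))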
4.2 Composition Risk Leads to the Violation of the Collaborative Filtering Assumption
The concept of using tensor factorization is based on the principle of collaborative filtering (Koren, Bell, and Volinsky 2009). One of the central assumptions of collaborative filtering in knowledge graphs (KGs) is that entities that share similar relationships are likely to have similar characteristics in other relationships as well. For example, Alphabet and Google share the same CEO, so they are likely to have the same headquarters.

However, we found that traditional TF models can easily fit the training data while violating the collaborative filtering assumption. The learned embeddings of similar entities are not necessarily similar and may even be orthogonal. This phenomenon has already been reported in (Zhang, Cai, and Wang 2020). In this paper, we aim to further explain how this phenomenon leads to generalization errors from the perspective of composite reasoning.

[Figure 2: Motivation of modeling and alleviating composition risk. The composite of the original TF model may rely on risky entities (e.g., Macau for Mexico, since they are disconnected). By alleviating composition risk, we encourage the composite to rely on reliable entities (e.g., Panama for Mexico, since they are connected).]

Table 2: An example of how TF models can violate the collaborative filtering assumption and result in incorrect predictions. The goal is to predict the value in the bottom right corner. The values in square brackets represent the corresponding tensors.

                                      Google [1,0]   Alphabet [0,1]
(CEO, Sundar Pichai) [1,1]            1 (cr = 1)     1 (cr = 1)
(headquarters, Mountain View) [1,0]   1 (cr = 1)     pred = 0

We illustrate this with the example in Table 2. Despite fitting all the training data, the TF model does not adhere to the collaborative filtering assumption. Although Google and Alphabet have the same CEO, their embeddings are orthogonal. This results in the model being unable to predict that Alphabet's headquarters is in Mountain View using the knowledge of Google's headquarters.

We link the violation of the collaborative filtering assumption to composition risk. The high expressive capacity of high-dimensional TF models can cause the model to neglect learning effective entity compositions. We show the composition risk of the facts in Table 2. Even though the model fits the training data perfectly, it still has a high composition risk, because Alphabet and Google are connected. Reducing the composition risk encourages the model to learn the association between Google and Alphabet, and thus make accurate predictions.

4.3 Approximating and Minimizing the Lower Bound of Composition Risk
Minimizing composition risk requires accurate estimation of reliable(h) and risky(h). In this subsection, we explain how to estimate and optimize the lower bound of the composition risk as an alternative to directly optimizing it. reliable(h) is a set of entities that have highly consistent facts with h and can be used for prediction. It is reasonable to assume that these entities share at least one identical fact with h in the KG:

connected(h) = {h_i | h ≠ h_i, ∃r_1, r_2, t, (h, r_1, t) ∈ KG, (h_i, r_2, t) ∈ KG}    (23)

Based on the linearity of TF models, the lower bound of the composition risk can be calculated as in Theorem 4.2.

Theorem 4.2 (Lower bound of composition risk). Assuming that connected(h) is a weaker restriction of reliable(h), i.e., reliable(h) ⊆ connected(h), we have:

cr(h, r, t) ≥ min_a ||e(h, r, t) − Σ_{h_i ∈ connected(h) ∩ KG(r)} a_i e(h_i, r, t)|| / ||e(h, r, t)||    (24)

We use this lower bound as the approximated composition risk, denoted ĉr(h, r, t). See the proof in the supplementary material.

4.4 The Impact of (Approximated) Composition Risk on Generalization Errors
To demonstrate the relationship between composition risk and generalization errors, we examined the correlation between the approximated composition risk and the accuracy of predictions for entities in real-world datasets. Specifically, we investigate the relationship between the model's prediction quality, as measured by the mean reciprocal rank (MRR), and the composition risk (CR) of queries in the test set. We use Spearman's rank correlation coefficient to quantify the correlation, with a stronger correlation indicating a greater impact of ĉr on the model's generalization ability. Additionally, we compare this correlation to the relationship between the frequency of an entity in the knowledge graph and the MRR, as a baseline, since predictions for more frequent entities tend to be easier. The results are presented in Fig. 3(a) and 3(b). We also plot the direct impact of ĉr on MRR in Fig. 3(c) and 3(d). It can be seen that the correlation of the approximated composition risk ĉr is significantly stronger. This verifies that ĉr induces generalization errors. Since ĉr is a metric that can be optimized, this motivates us to decrease it during training.
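A sketch of the correlation analysis in this subsection using SciPy's spearmanr; the per-query values below are made-up placeholders, not measured results.

from scipy.stats import spearmanr

# Hypothetical per-query values; in the paper these come from trained models.
cr_hat = [0.15, 0.32, 0.08, 0.71, 0.44]   # approximated composition risk
mrr    = [0.80, 0.55, 0.90, 0.20, 0.35]   # reciprocal rank of each query
rho, pval = spearmanr(cr_hat, mrr)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")  # perfectly negative here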
4.5 Alleviating Composition Risk in Training
Incorporating Composition Risk into TF Models. To minimize the composition risk in TF models, we incorporate it as a penalty term in the training loss. Specifically, we use the following loss function:

L = L_origin + β Σ_{(h, r, t) ∈ KG} ĉr(h, r, t)    (25)

where L_origin is the original loss function of the TF model (Zhang, Cai, and Wang 2020; Lacroix, Usunier, and Obozinski 2018), and β is the weight of the composition risk term. We denote this model as CompilE (composition risk alleviation).

[Figure 3: Correlation between composition risk and prediction performance for facts in the test sets. Panels (a) FB15k-237 and (b) WN18RR show the Spearman correlations of size-MRR and ĉr-MRR per model; panels (c) FB15k-237 and (d) WN18RR show MRR per ĉr bin for ComplEx, N3, DURA, CP, and DistMult. For N3 and DURA, we use ComplEx as their base model.]

Finding the Optimal Composition. In Eq. (24), we need to compute the a that minimizes ĉr. This can be done by solving a least squares problem, as the equation is a classical linear regression problem.
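A minimal sketch of the training objective in Eq. (25); the β value is an arbitrary placeholder, and how the per-triple ĉr terms are obtained (e.g., the least squares fit sketched earlier) is left abstract.

def compile_loss(original_loss, cr_hats, beta=0.1):
    """Eq. (25): original TF loss plus a weighted sum of the approximated
    composition risks of the training triples. beta=0.1 is a placeholder,
    not a recommended value."""
    return original_loss + beta * sum(cr_hats)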
5 Effect of Reducing Composition Risk in TF Models
5.1 Setup
Baselines. We compare our proposed method with several state-of-the-art models and regularization techniques as baselines. These include classic tensor factorization models such as ComplEx (Trouillon et al. 2016), DistMult (Yang et al. 2015), and CP (Hitchcock 1927); regularization methods like N3 (Lacroix, Usunier, and Obozinski 2018) and DURA (Zhang, Cai, and Wang 2020); and other state-of-the-art KGC models like TransE (Bordes et al. 2013), RotatE (Sun et al. 2019), NeuralLP (Yang, Yang, and Cohen 2017), RNNLogic (Qu et al. 2020), CIBLE (Cui and Chen 2022), and NBFNet (Zhu et al. 2021). We use ComplEx as our default model and also incorporate traditional regularization techniques to reduce parameter complexity. We refer to our model with N3 regularization as CompilEN and with DURA regularization as CompilED.

Datasets. We use four datasets of different scales: two larger datasets (FB15k-237 and WN18RR) and two smaller datasets (UMLS and Kinship).

Evaluation. We use standard evaluation metrics commonly used in KGC, including Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits@k under the filtered setting.

5.2 Main Results
The main results for the four benchmarks are presented in Table 3 and Table 4. CompilE outperforms all other baselines on the smaller datasets. On the larger datasets, it also achieves better performance than the other baselines, except for the GNN-based NBFNet. This confirms the effectiveness of our approach.

Improvement across different datasets and baselines. Our method improves over both DURA and N3 on all four datasets. This suggests that traditional TF-based models need to optimize their knowledge composition in addition to using state-of-the-art regularizers, which is also supported by the results shown in Fig. 3.

Effect on knowledge-sparse datasets. Our method demonstrates higher effectiveness on small-scale datasets. For example, on Kinship, the MRR of CompilEN improved by 3.2% over the other TF-based models. We believe this is because overfitting is more likely to occur on smaller datasets, making effective composition more crucial. This supports the value of our proposed method in knowledge-sparse scenarios.

5.3 Capabilities of the Composite Reasoning
In Sec 3.1, we explained that the effectiveness of the entity decomposition framework can be assessed using the residual ratio. We plot the residual ratios of various models on different datasets in Fig. 4. Consistent with our analysis in Sec 3.1, the residual ratios are close to zero on large-scale knowledge graphs, which suggests that entity decomposition is more effective in these cases. Even on small-scale knowledge graphs, CompilE effectively reduces the residual ratios and thus improves the capability of entity decomposition.

Table 3: Effect on larger benchmarks. †: the results are from (Cui and Chen 2022).

                 FB15k-237                        WN18RR
                 MRR    H@1    H@3    H@10        MRR    H@1    H@3    H@10
TransE           0.294  –      –      0.465       0.226  –      –      0.501
RotatE†          0.338  0.241  0.375  0.533       0.476  0.428  0.492  0.571
NeuralLP†        0.237  0.173  0.259  0.361       0.381  0.368  0.386  0.408
RNNLogic+†       0.349  0.258  0.385  0.533       0.513  0.471  0.532  0.579
CIBLE†           0.341  0.246  0.378  0.532       0.490  0.446  0.507  0.575
NBFNet†          0.415  0.321  0.454  0.599       0.551  0.497  0.573  0.666
TF-based models
DistMult         0.343  0.251  0.376  0.525       0.440  0.410  0.451  0.499
CP               0.332  0.244  0.364  0.509       0.438  0.416  0.444  0.482
ComplEx          0.350  0.259  0.386  0.531       0.460  0.429  0.471  0.521
DURA             0.371  0.276  –      0.560       0.491  0.449  –      0.571
N3               0.367  0.271  0.403  0.558       0.488  0.441  0.503  0.581
CompilED         0.372  0.277  0.408  0.563       0.495  0.453  0.510  0.579
CompilEN         0.368  0.272  0.404  0.559       0.492  0.447  0.506  0.582

Table 4: Effect on smaller benchmarks. The improvement brought by CompilE is more significant.

                 UMLS                             KINSHIP
                 MRR    H@1    H@3    H@10        MRR    H@1    H@3    H@10
RotatE†          0.744  0.636  0.822  0.939       0.651  0.504  0.755  0.932
NeuralLP†        0.483  0.332  0.563  0.775       0.302  0.167  0.339  0.596
RNNLogic†        0.842  0.772  0.891  0.965       0.722  0.598  0.814  0.949
CIBLE†           0.856  0.787  0.916  0.970       0.728  0.603  0.820  0.956
NBFNet†          0.778  0.688  0.840  0.938       0.606  0.435  0.725  0.937
TF-based models
DistMult         0.725  0.615  0.788  0.954       0.456  0.270  0.537  0.892
CP               0.819  0.718  0.910  0.964       0.653  0.507  0.755  0.937
ComplEx          0.840  0.765  0.902  0.968       0.660  0.513  0.762  0.938
DURA             0.841  0.767  0.900  0.966       0.670  0.526  0.773  0.941
N3               0.842  0.767  0.905  0.969       0.697  0.560  0.796  0.953
CompilED         0.861  0.792  0.920  0.972       0.724  0.593  0.830  0.962
CompilEN         0.868  0.802  0.924  0.973       0.713  0.579  0.813  0.955

[Figure 4: Representation capabilities of the entity decomposition for model generalization. Residual ratios on FB15k-237, WN18RR, UMLS, and Kinship for ComplEx, ComplEx_N3, DistMult, CP, and CompilEN.]

6 Related Work
Researchers have discovered that the representation of knowledge graphs can be improved by optimizing the way different facts are composited. Prior studies have implicitly optimized the compositionality between entities by decreasing model complexity (Lacroix, Usunier, and Obozinski 2018). More recent efforts, however, have focused on directly optimizing specific compositions between facts, such as equal and inverse relations (Minervini et al. 2017), compositions between entities of the same category (Guo et al. 2015; Cao et al. 2022), and compositions between entities under the same head-relation (Zhang, Cai, and Wang 2020).
However, these works lack a general framework to model one-to-many fact composition and do not accurately depict the connection between composition regularization and model generalization.

7 Conclusion
This study provides a comprehensive understanding of composite reasoning for KGC models, including TF-based models, translational models, instance-based learning models, and KGC regularizers. We take advantage of composite reasoning to uncover a novel issue with TF-based models, where irrelevant entities can be incorporated into the inference process, causing generalization errors. This issue is rooted in the models' violation of the low-rank assumption due to inaccurate composite learning. We propose to mitigate this composition risk, effectively enhancing the performance of these models.

References
Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating Embeddings for Modeling Multi-relational Data. In NeurIPS.
Cao, Z.; Xu, Q.; Yang, Z.; and Huang, Q. 2022. ER: Equivariance Regularizer for Knowledge Graph Completion. In AAAI.
Cui, W.; and Chen, X. 2022. Instance-based Learning for Knowledge Base Completion. In NeurIPS.
Guo, S.; Wang, Q.; Wang, B.; Wang, L.; and Guo, L. 2015. Semantically Smooth Knowledge Graph Embedding. In ACL-IJCNLP, 84–94.
Hitchcock, F. L. 1927. The Expression of a Tensor or a Polyadic as a Sum of Products. Journal of Mathematics and Physics, 6(1-4): 164–189.
Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix Factorization Techniques for Recommender Systems. Computer, 42(8): 30–37.
Lacroix, T.; Usunier, N.; and Obozinski, G. 2018. Canonical Tensor Decomposition for Knowledge Base Completion. In ICML.
Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In AAAI, volume 29.
Minervini, P.; Costabello, L.; Muñoz, E.; Nováček, V.; and Vandenbussche, P.-Y. 2017. Regularizing Knowledge Graph Embeddings via Equivalence and Inversion Axioms. In ECML-PKDD, 668–683. Springer.
Qu, M.; Chen, J.; Xhonneux, L.-P.; Bengio, Y.; and Tang, J. 2020. RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs. In ICLR.
Sun, Z.; Deng, Z.-H.; Nie, J.-Y.; and Tang, J. 2019. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In ICLR.
Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; and Bouchard, G. 2016. Complex Embeddings for Simple Link Prediction. In ICML, 2071–2080. PMLR.
Yang, B.; Yih, W.-t.; He, X.; Gao, J.; and Deng, L. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In ICLR.
Yang, F.; Yang, Z.; and Cohen, W. W. 2017. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. In NeurIPS.
Zhang, Z.; Cai, J.; and Wang, J. 2020. Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion. In NeurIPS, 21604–21615.
Zhang, Z.; Cai, J.; Zhang, Y.; and Wang, J. 2020. Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction. In AAAI, 3065–3072. AAAI Press.
Zhu, Z.; Zhang, Z.; Xhonneux, L.-P.; and Tang, J. 2021. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. In NeurIPS.
Discovering Sequential Patterns with Predictable Inter-event Delays
Joscha Cüppers1, Paul Krieger2, Jilles Vreeken1
1 CISPA Helmholtz Center for Information Security
2 Saarland University
[email protected], [email protected], [email protected]

Abstract
Summarizing sequential data with serial episodes allows nontrivial insight into the data generating process. Existing methods penalize gaps in pattern occurrences equally, regardless of where in the pattern these occur. This results in a strong bias against patterns with long inter-event delays, and in addition regularity in terms of delays is neither rewarded nor discovered—even though both aspects provide key insight. In this paper we tackle both these problems by explicitly modeling inter-event delay distributions. That is, we are not only interested in discovering the patterns, but also in describing how many time steps typically occur between their individual events. We formalize the problem in terms of the Minimum Description Length principle, by which we say the best set of patterns is the one that compresses the data best. The resulting optimization problem does not lend itself to exact optimization, and hence we propose HOPPER to heuristically mine high quality patterns. Extensive experiments show that HOPPER efficiently recovers the ground truth, discovers meaningful patterns from real-world data, and outperforms existing methods in discovering long-delay patterns.

Introduction
Summarizing event sequences is one of the key problems in data mining. Most existing methods do so in terms of serial episodes and allow for gaps (Tatti and Vreeken 2012) and interleaving (Bhattacharyya and Vreeken 2017) of pattern occurrences. By penalizing every gap equally regardless of where in a pattern it occurs, these methods have a strong bias against long inter-event delays, whereas methods that do not penalize gaps (Fowkes and Sutton 2016) are prone to discover spurious dependencies. What both of these classes lack is a way for a pattern to specify when the next symbol is to be expected.

To illustrate, let us consider a toy example of a single event sequence of all national holidays of a given country over the span of multiple years. As is usual, some holidays are 'fixed' as they always occur on the same date every year, and others depend on the lunar cycle and hence 'move' around. Existing methods have no trouble finding holidays that occur right after one another, e.g. 1st Christmas Day right before 2nd Christmas Day, struggle with long delays, such as Whit Monday happening 49 days after Easter Monday, and outright fail when the relationship is 'far' and 'loose', such as Easter occurring between 82 and 114 days after New Year's. In this paper, we present a method that can find and describe all these types of dependencies and delays.

To do so, we propose to explicitly model the distributions of inter-event delays in pattern occurrences. That is, as patterns we do not just consider serial episodes, but also discrete distributions that model the number of time steps between subsequent events of a pattern. This allows us to discover patterns like New Year –[82–114]→ Easter Monday –[49]→ Whit Monday, which specifies that there is a uniformly distributed delay of 82 to 114 days between New Year's and Easter Monday, and a fixed delay of 49 days until Whit Monday.
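One plausible way to represent such patterns in code is a serial episode paired with one delay distribution per gap, as in the following sketch; the class and helper below are our own illustration, not HOPPER's actual data structures.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pattern:
    """A serial episode plus one delay distribution per inter-event gap.
    Each distribution maps a delay (in time steps) to a probability."""
    events: List[str]
    delays: List[Callable[[int], float]]   # len(delays) == len(events) - 1

def uniform(lo: int, hi: int) -> Callable[[int], float]:
    return lambda d: 1.0 / (hi - lo + 1) if lo <= d <= hi else 0.0

# The holiday pattern from the introduction.
easter = Pattern(["NewYear", "EasterMonday", "WhitMonday"],
                 [uniform(82, 114), uniform(49, 49)])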
We define the problem of mining a succinct and non-redundant set of sequential patterns in terms of the Minimum Description Length principle (MDL) (Grünwald 2007), by which we are after the model that compresses the data best. Simply put, unlike existing methods we do not plainly prefer patterns with 'compact' occurrences but rather those for which the inter-event delays are reliably predictable, no matter whether these delays are short or long. This way we can automatically determine which discrete-valued distribution best characterizes the inter-event delays. In practice, we consider Uniform, Gaussian, Geometric, or Poisson distributions, but this set can be trivially extended.

The resulting problem does not lend itself to exact search, which is why we propose the effective HOPPER algorithm to efficiently discover good pattern sets in practice. Starting from just the singletons, HOPPER considers combinations of current patterns as candidates, uses an optimistic estimate to prune out unpromising candidates, explores both short and far dependencies, assigns the best-fitting delay distributions, and greedily chooses the candidate that improves the score most.

Through extensive evaluation, we show that HOPPER works well in practice. On synthetic data we demonstrate that, unlike the state of the art, we recover the ground truth well both in terms of patterns and delay distributions, even in challenging settings where patterns include delays of hundreds of time steps. On real-world data, we show that HOPPER discovers easily interpretable patterns with meaningful delay distributions. We make all code, synthetic data, and real-world datasets available in the supplementary material.

Preliminaries
In this section, we discuss preliminaries and introduce the notation we use throughout the paper.

Notation
As data D we consider a set of |D| event sequences S ∈ D, each drawn from a finite alphabet Ω of discrete events e ∈ Ω, i.e., S ∈ Ω^|S|. We write S[i] to refer to the i-th event in S, and ||D|| to denote the total number of events in D. As patterns we consider serial episodes. A serial episode p is also a sequence drawn over Ω, i.e., p ∈ Ω^|p|. We write p[i] for the i-th event in p. We model the inter-event delay between a subsequent pair of events p[i] and p[i+1] using a discrete delay distribution π_{p,i}(· | Θ_{p,i}). Whenever clear from context, we simply write π_{p,i}(·).

Finally, a window w_S is an ordered set of indices into S. Two windows a_S and b_S are in conflict iff they contain the same index, formally iff |a_S ∩ b_S| > 0. A window w_S is said to match a pattern p if they identify the same events in the same order, i.e., when ∀i ∈ [1, |p|]: S[w_S[i]] = p[i] and ∀i ∈ [1, |p|−1]: π_{p,i}(w_S[i+1] − w_S[i]) > 0; if p matches, we write w_S^p. Whenever S is clear from context, we simply write w_p. All logarithms are base 2 and we define 0 log(0) = 0.
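The window-match definition above translates directly into code; a minimal sketch, with delay distributions passed as probability functions:

def matches(window, pattern, S, delay_pdfs):
    """Check that window w matches pattern p: same events in the same
    order, and every inter-event delay has non-zero probability under
    the pattern's delay distribution."""
    if len(window) != len(pattern):
        return False
    if any(S[idx] != ev for idx, ev in zip(window, pattern)):
        return False
    return all(delay_pdfs[i](window[i + 1] - window[i]) > 0
               for i in range(len(pattern) - 1))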
Minimum Description Length
The Minimum Description Length (MDL) principle (Grünwald 2007) is a computable and statistically well-founded model selection principle based on Kolmogorov complexity (Li and Vitányi 1993). For a given model class M, it identifies the best model M ∈ M as the one minimizing L(M) + L(D|M), where L(M) is the length of model M and L(D|M) the length of the data D given M. This is known as two-part, or crude, MDL, in contrast to one-part, or refined, MDL (Grünwald 2007), which is not computable for arbitrary model classes. We use two-part MDL because we are particularly interested in the model. In MDL we are never concerned with materialized codes; we only care about code lengths. To use MDL we have to define a model class M, and code length functions for the model and for the data given the model. We present these next.

MDL for Patterns with Predictable Delays
In this section we formally define the problem.

Decoding the Database
Before we define how to encode a sequence database using patterns with delay distributions, we give the intuition by explaining how to decode a database from a given cover. A cover C is a description of the data in terms of the patterns p in model M. Formally, a cover is defined as a tuple (Cp, Cd), where the pattern stream Cp describes which pattern (windows) are used in what order, and the delay stream Cd consists of the inter-event delays within those windows. Next we explain how to decode a cover C to reconstruct the encoded data. In Figure 1 we show a toy example with a sequence S, a model M, and two covers of S using M.

[Figure 1: Toy example showing two possible encodings of the same data S = a d b f a c e. Cover 1 uses only singletons; Cover 2 additionally uses two patterns, p: a →(Θ1) b →(Θ2) c with Θ1 = G(0.5) and Θ2 = N(3, 0.1), and q: d →(Θ1) e with Θ1 = U(5, 5). A cover consists of the pattern stream Cp encoding the patterns, and the delay stream Cd encoding the inter-event delays. The first gap of pattern p is modeled with a geometric distribution, the second with a normal distribution; the one gap of q is modeled by a uniform distribution.]

We first consider Cover 1. We start by reading the first code from the pattern stream Cp. This is an ⟨a⟩, which we look up in M and find that it encodes event 'a'. We write this to S[0]. We iterate reading and writing until S is decoded.

Next, we consider Cover 2. We again read the first code from Cp, which is now a ⟨p⟩. We look up that this stands for pattern p. We write its first symbol, a, to S[0]. To know where in S we should write 'b', we read a code from the delay stream Cd. We read a 2, which means we write 'b' to S[0+2]. We continue until we have decoded this instance of pattern p, and then read the next symbol from Cp. This is a ⟨q⟩. We start decoding it from the first empty position in S. We iterate until S is fully decoded.

Calculating the Encoding Cost
Now that we know how to decode a sequence, we formally define how to compute the encoded sizes of the data and the model.

Encoding the Data. To describe the data without loss, we need, in addition to the pattern and delay streams, the number and lengths of the sequences in D. We hence have

L(D | CT) = L_N(|D|) + Σ_{S ∈ D} L_N(|S|) + L(Cp) + L(Cd) ,

where we encode the numbers using the MDL-optimal encoding for integers z ≥ 1 (Rissanen 1983). It is defined as L_N(z) = log* z + log c_0, where log* z is the expansion log z + log log z + · · · in which we only include the positive terms. To ensure this is a valid encoding, i.e., one that satisfies the Kraft inequality, we set c_0 = 2.865064 (Rissanen 1983).
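The universal integer code is easy to compute; a small sketch following the definition above:

import math

def L_N(z: int) -> float:
    """Rissanen's MDL-optimal code length for integers z >= 1:
    log*(z) + log(c0) with c0 = 2.865064, all logs base 2."""
    assert z >= 1
    bits = math.log2(2.865064)
    x = math.log2(z)
    while x > 0:          # include only the positive terms of log* z
        bits += x
        x = math.log2(x)
    return bits

print(L_N(1), L_N(1000))  # ~1.52 bits vs. ~15.4 bits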
We encode the delay stream Cd similarly, encoding the inter-event delays dj between events p[i] and p[i + 1] of every instance of a pattern p using the corresponding delay distribution πp,i(dj). We hence have L(Cd) = X p∈M |p|−1 X i=1 usg(p) X j=1 −log πp,i(dj) . Encoding the Model As models we consider sets of patterns M that always include all singletons. We refer to the model that only consists of the singletons as the null model. For the encoded length of a model we have L(M) = LN(|Ω|) + log ||D|| −1 |Ω| −1  + LN(|P| + 1) +LN(usg(P)) + log usg(P) −1 |P| −1  + X p∈P L(p) , where we first encode the size of the alphabet Ωand the supports supp(e|D) of each singleton event. The latter we do using a so-called data-to-model code — an index over an enumeration of all possible ways to distribute ||D|| events over alphabet Ω(Vereshchagin and Vit´anyi 2004). Next, we encode the number |P| of non-singleton patterns p ∈M and their combined usage by LN(), and then their individual usages by a data-to-model code. Finally, we encode the nonsingleton patterns. To do so we need to specify how many, and which events a pattern consists of, as well as identify and parameterize its delay distributions. To reward similarities in delay behavior, we allow a distribution to be used for multiple inter-event gaps. As a default, we equip every pattern with one Geometric delay distribution. Formally, the encoded length of a non-singleton pattern p ∈M is L(p) = LN(|p|) + log(|p| −1) + log |p| −1 k  − X e∈p log supp(e|D) ||D||  + X Θ∈p L(Θ) , where we encode the number of events of p, then its number of delay distributions, k, and finally where in the pattern these are used. We encode the events of the pattern using prefix codes based on the supports of events e in D. To encode a delay distribution π(· | Θ) it suffices to encode Θ. For the non-default delay distributions we first encode its type out of the set Ψ = {Geometric, Poisson, Uniform, Normal} of discrete probability functions under consideration, for which we need −log |Ψ| bits. We then encode the parameter values θ ∈Θ. We use LN(θ) if θ ∈N, and LR(θ) if θ ∈R. We have LR(θ) = LN(d) + LN(⌈θ · 10d⌉) + 1 as the number of bits needed to encode a real number up to user-set precision p (Marx and Vreeken 2019). It does so by shifting θ by d digits, such that θ · 10d ≥10p. The Problem, Formally With the above, we can now formally state the problem. The Predictable Sequential Delay Problem Given a sequence database D over an alphabet Ω, find the smallest pattern set P and cover C such that the total encoded size, L(M, D) = L(M) + L(D|M) is minimal. Considering the complexity of this problem, even when we ignore delay distributions there already exist superexponential many possible patterns, exponentially many patterns sets over those, as well as, given a pattern set there exist exponentially many covers (Bhattacharyya and Vreeken 2017). Worst of all the search space does not exhibit any structure such as (weak-)monotonicity or submodularity that we can exploit. We hence resort to heuristics. The HOPPER Algorithm Now we have formally defined the problem and know how to score a model we need a way to mine good models. We break the problem into two parts, finding a good cover given a model, and finding a good model, and discuss these in turn. Finding Good Covers Given a model, we are after that description of the data that minimizes L(D | M). To compute L(D | M), we need a cover C. 
Encoding the Model. As models we consider sets of patterns M that always include all singletons. We refer to the model that consists of only the singletons as the null model. For the encoded length of a model we have

L(M) = L_N(|Ω|) + log( ||D||−1 choose |Ω|−1 ) + L_N(|P| + 1) + L_N(usg(P)) + log( usg(P)−1 choose |P|−1 ) + Σ_{p ∈ P} L(p) ,

where we first encode the size of the alphabet Ω and the supports supp(e|D) of each singleton event. The latter we do using a so-called data-to-model code — an index over an enumeration of all possible ways to distribute ||D|| events over alphabet Ω (Vereshchagin and Vitányi 2004). Next, we encode the number |P| of non-singleton patterns p ∈ M and their combined usage by L_N(·), and then their individual usages by a data-to-model code. Finally, we encode the non-singleton patterns. To do so we need to specify how many and which events a pattern consists of, as well as identify and parameterize its delay distributions. To reward similarities in delay behavior, we allow a distribution to be used for multiple inter-event gaps. As a default, we equip every pattern with one Geometric delay distribution. Formally, the encoded length of a non-singleton pattern p ∈ M is

L(p) = L_N(|p|) + log(|p| − 1) + log( |p|−1 choose k ) − Σ_{e ∈ p} log( supp(e|D) / ||D|| ) + Σ_{Θ ∈ p} L(Θ) ,

where we encode the number of events of p, then its number of delay distributions, k, and finally where in the pattern these are used. We encode the events of the pattern using prefix codes based on the supports of events e in D. To encode a delay distribution π(· | Θ) it suffices to encode Θ. For the non-default delay distributions we first encode the type out of the set Ψ = {Geometric, Poisson, Uniform, Normal} of discrete probability functions under consideration, for which we need log |Ψ| bits. We then encode the parameter values θ ∈ Θ. We use L_N(θ) if θ ∈ N, and L_R(θ) if θ ∈ R. We have L_R(θ) = L_N(d) + L_N(⌈θ · 10^d⌉) + 1 as the number of bits needed to encode a real number up to a user-set precision p (Marx and Vreeken 2019). It does so by shifting θ by d digits, such that θ · 10^d ≥ 10^p.

The Problem, Formally
With the above, we can now formally state the problem.

The Predictable Sequential Delay Problem. Given a sequence database D over an alphabet Ω, find the smallest pattern set P and cover C such that the total encoded size L(M, D) = L(M) + L(D|M) is minimal.

Considering the complexity of this problem: even if we ignore delay distributions, there already exist super-exponentially many possible patterns, exponentially many pattern sets over those, and, given a pattern set, exponentially many covers (Bhattacharyya and Vreeken 2017). Worst of all, the search space does not exhibit any structure, such as (weak) monotonicity or submodularity, that we could exploit. We hence resort to heuristics.

The HOPPER Algorithm
Now that we have formally defined the problem and know how to score a model, we need a way to mine good models. We break the problem into two parts, finding a good cover given a model, and finding a good model, and discuss these in turn.

Finding Good Covers
Given a model, we are after the description of the data that minimizes L(D | M). To compute L(D | M), we need a cover C. A cover consists of a set of windows, and hence we first need to find a set of good windows.

Finding Good Windows. Mining all possible windows for a pattern p can result in an exponential blow-up. To ensure tractability, we limit ourselves to the 100 windows per starting event with the most likely delays. To avoid wasting time on windows we will never use because they are too costly, we restrict our search to those for which the delays fall within the 99.7% confidence interval of the respective probability distribution. For a normal distribution, that corresponds to three standard deviations from the mean. In practice, it is extremely unlikely that we would want to include any of the windows not considered in cover C; hence these restrictions have a negligible to no effect on the results.

Selecting a Good Cover. Armed with a set of candidate windows, we next explain how to select a set C of these that together form a good cover. Ideally, we would like to select the cover C that minimizes L(D | M). Finding the optimal cover, however, would require testing exponentially many combinations, which would in turn result in infeasible runtime; we hence proceed greedily. For a greedy approach we need a way to select the next window for addition. Generally speaking, we prefer long patterns with likely delays. Based on this intuition, we assign each window w_p a score s(w_p). At each step we select the window w_p with the highest s(w_p). If a window conflicts with a previously selected window, we skip it and proceed. We add windows until all events of D are covered. To ensure there always exists a valid cover, we always include all singleton windows.
Combined the estimated gain is, ∆L(M ⊕p′) = −ˆL(p′) + L( argmin p∈{p1,p2} usg(p)) where ˆL(p′) is the cost of p′ omitting the delay distribution between p1 and p2. We estimate ∆L(D | M ⊕p′) as ∆L(D | M ⊕p′) = s log(s) −s′ log(s′) + z log(z)− x log(x) + x′ log(x′) −y log(y) + y′ log(y′) where s is the sum of all usages, s = P p∈M usg(p), and, for readability, we shorten usg(p′) to z, usg(p1) to x, usg(p2) to y and write x′, y′, s′ for the “updated” usages, that is x′ = x −z, y′ = y −z and s′ = s −z. As we do not have any information about the delays between p1 and p2 we assume these are encoded for free. Putting the above together gives us an optimistic estimate of the total encoded cost when adding pattern p′ to M as ∆L(D, M ⊕p′) = ∆L(M ⊕p′) + ∆L(D | M ⊕p′) . Wherever clear from context, we simply write ∆L(p′). Algorithm 1: OPTIMIZEALIGNMENT Input : pattern candidate p′, alignment A Output: estimated gain∗, optimized alignment A∗ 1 gain∗←−∞ 2 while ∆LA(p′) > gain∗do 3 gain∗←∆LA(p′) 4 A∗←A 5 drop all delays d with minimal frequency from A 6 return gain∗, A∗ Estimating Candidate Occurrences When we want to evaluate a candidate pattern p′, constructed from patterns p1 and p2, we have to determine its occurrence windows. A simple and crude way to determine candidate windows is by mapping every occurrence of p1 to the nearest next occurence of p2. We call this procedure ALIGNNEXT. It is particularly good for finding a mapping with the shortest possible delays, but will not do well when delays are relatively long. For this, the ALIGNFAR algorithm by C¨uppers, Kalofolias, and Vreeken (2022) provides a better solution. In a nutshell, it efficiently discover that mapping A that minimizes the variance in delays. By a much larger search space it is naturally more susceptible to noise. As a result, both strategies can give a good starting points, but neither will likely give an alignment that optimizes our MDL score. We propose to greedily optimize these mappings using an optimistic estimate. We first observe that given a mapping, we can trivially compute the delays, on which we can then fit a distribution. We do so for all distributions π ∈Ψ and choose that π∗ p′(· | Θ∗) that minimizes the cost of encoding the delays. Second, we observe that a mapping also allows us to better estimate the usage of p′ as the number of mapped occurrences of p1 and p2. This gives a gain estimate under alignment A as ∆LA(p′) = −L(p′)+∆L(D | M ⊕p′)+ X d∈A log π(d|Θ∗). We now use this estimate to identify and remove those mappings with the lowest delay probability (i.e. those with minimal frequency) until ∆LA(p′) no longer increases. We give the pseudocode as Algorithm 1. Mining Good Pattern Sets Next, we explain how we use the gain estimation and cover strategy to mine good pattern sets P. We give the pseudo-code for our method, HOPPER, as Algorithm 2. The key idea is to use a bottom-up approach and iteratively combine previously found patterns into longer ones. We iteratively consider the Cartesian product of patterns p1, p2 ∈M as candidates. We evaluate these in order of potential gain. Events and patterns that occur frequently have the largest potential to compress the data, therefore we consider these combinations first. Specifically, we evaluate combinations of p1 and p2 in order of how many events they together currently cover (line 2). 
Estimating Candidate Occurrences. When we want to evaluate a candidate pattern p', constructed from patterns p1 and p2, we have to determine its occurrence windows. A simple and crude way to determine candidate windows is by mapping every occurrence of p1 to the nearest next occurrence of p2. We call this procedure ALIGNNEXT. It is particularly good at finding a mapping with the shortest possible delays, but it does not do well when delays are relatively long. For this, the ALIGNFAR algorithm by Cüppers, Kalofolias, and Vreeken (2022) provides a better solution. In a nutshell, it efficiently discovers the mapping A that minimizes the variance in delays. Due to its much larger search space, it is naturally more susceptible to noise. As a result, both strategies can give good starting points, but neither will likely give an alignment that optimizes our MDL score.

We propose to greedily optimize these mappings using an optimistic estimate. We first observe that given a mapping, we can trivially compute the delays, on which we can then fit a distribution. We do so for all distributions π ∈ Ψ and choose the π*_{p'}(· | Θ*) that minimizes the cost of encoding the delays. Second, we observe that a mapping also allows us to better estimate the usage of p' as the number of mapped occurrences of p1 and p2. This gives a gain estimate under alignment A of

∆L_A(p') = −L(p') + ∆L(D | M ⊕ p') + Σ_{d ∈ A} log π(d | Θ*) .

We now use this estimate to identify and remove the mappings with the lowest delay probability (i.e., those with minimal frequency) until ∆L_A(p') no longer increases. We give the pseudocode as Algorithm 1.

Algorithm 1: OPTIMIZEALIGNMENT
Input: pattern candidate p', alignment A
Output: estimated gain*, optimized alignment A*
1: gain* ← −∞
2: while ∆L_A(p') > gain* do
3:   gain* ← ∆L_A(p')
4:   A* ← A
5:   drop all delays d with minimal frequency from A
6: return gain*, A*

Mining Good Pattern Sets. Next, we explain how we use the gain estimation and cover strategy to mine good pattern sets P. We give the pseudocode of our method, HOPPER, as Algorithm 2. The key idea is to use a bottom-up approach and iteratively combine previously found patterns into longer ones. We iteratively consider the Cartesian product of patterns p1, p2 ∈ M as candidates. We evaluate these in order of potential gain. Events and patterns that occur frequently have the largest potential to compress the data; therefore we consider these combinations first. Specifically, we evaluate combinations of p1 and p2 in order of how many events they together currently cover (line 2).

Algorithm 2: HOPPER
Input: sequence database D, alphabet Ω
Output: model M
1: CT ← Ω; Cand ← CT × CT
2: forall p1, p2 ∈ Cand, ordered descending on |p1| usg(p1) + |p2| usg(p2) do
3:   if ∆L(p1 ⊕ p2) > 0:
4:     gain, p' ← ALIGNCANDIDATE(p1, p2)
5:     if gain > 0 ∧ L(D, M) > L(D, M ⊕ p'):
6:       p' ← FILLGAPS(p', |p1|)
7:       M ← M ⊕ p'
8:       M ← PRUNE(M)
9:       Cand ← Cand ∪ {M × p', (p1, p2)}
10: M ← PRUNEINSIGNIFICANT(M)
11: return M

Given a pattern candidate p' = p1 ⊕ p2, we use our optimistic estimator to determine whether we expect it to provide any gain in compression. If not, we move on to the next candidate. If we do estimate a gain based on the usage of p1 and p2 alone, we proceed and optimize the alignment of occurrences of p1 and p2 to occurrences of p'. We do so using ALIGNCANDIDATE, for which we give the pseudocode in the supplementary. In a nutshell, it returns the best optimized result out of ALIGNNEXT and ALIGNFAR. If the alignment leads to an estimated gain, we compute our score exactly (l. 5), and if the score improves we are safe to add p' to our model. We do so after considering augmentations of p' with events that occur between p1 and p2 (FILLGAPS, line 6) such that we further improve the score. Adding a new pattern to M can make previously added patterns redundant, e.g., when all occurrences of p1 are now covered by p'. We prune all patterns for which the score improves when we remove them from M (PRUNE). Finally, we create new candidates based on the just-added pattern, and add (p1, p2) back to the candidate set, as we might want to build a different pattern from it in a later iteration.

Before returning the final pattern set, we reconsider all patterns in the model and only keep those that give us a significant gain (Bloem and de Rooij 2020; Grünwald 2007) in compression. We provide further details on the pattern mining procedure in the supplementary.

As we consider the most promising candidates first, the more candidates we evaluate without gain, the more unlikely it becomes that we will find a candidate that provides any substantial gain. To avoid evaluating all of those unnecessarily, we propose an early stopping criterion: we consider up to |Ω|²/100, but at least 1 000, unsuccessful candidates in a row. As our score is bounded from below by 0, we know that HOPPER will eventually converge.

Related Work
Mining sequential patterns from event sequences has a rich history. Traditional sequential pattern miners focus on finding all frequent patterns (Agrawal and Srikant 1995; Laxman, Sastry, and Unnikrishnan 2007); these suffer from exponentially many results, making interpretation hard to impossible. Closed episodes (Yan, Han, and Afshar 2003; Wang and Han 2004) partially solve this, but are highly sensitive to noise. More recently, the research focus has shifted to mining patterns whose frequency is significant with respect to some null hypothesis (Low-Kam et al. 2013; Petitjean et al. 2016; Tonon and Vandin 2019; Jenkins, Walzer-Goldfeld, and Riondato 2022). While this alleviates the pattern explosion, it does not solve it.

Pattern set mining solves the pattern explosion by asking for a small and non-redundant set of patterns that generalizes the data well, instead of asking for all patterns that satisfy some individual criterion. There exist different approaches to scoring a pattern set. ISM (Fowkes and Sutton 2016) takes a probabilistic Bayesian approach; unlike us, they do not model gaps.
SQS (Tatti and Vreeken 2012) is an example of a method that employs the Minimum Description Length principle to identify the best set of serial episodes, which are sequential patterns that allow for gaps. SQUISH (Bhattacharyya and Vreeken 2017) builds upon SQS and additionally allows interleaved and nested patterns. However, SQS and SQUISH are not capable of finding patterns with long inter-event delays, and they penalize each individual gap uniformly, regardless of where in the pattern it occurs.

Existing methods that enrich patterns with delays can be categorized into two groups: methods that discover frequent patterns satisfying some user-set delay constraints (Yoshida et al. 2000; Giannotti et al. 2006; Dauxais et al. 2017; Cram, Mathern, and Mille 2012), and methods that discover delay information from the data (Yen and Lee 2013; Nanni and Rigotti 2007). The latter, in contrast to our method, only consider the minimal delay between events, do not work on a single long sequence, and mine all frequent patterns, and hence also suffer from the pattern explosion.

Existing pattern set miners that do model the inter-event delay solve different problems. Galbrun et al. (2018) propose to mine periodic patterns, which are patterns that continuously appear throughout the data with near-exact delays. It is therewith well-suited for the holidays example in the introduction, but less so for discovering patterns that only appear more locally. OMEN (Cüppers, Kalofolias, and Vreeken 2022) does discover local patterns and delay distributions, but does so in a supervised setup between a pattern and a target attribute of interest. As such, each of the above methods considers part of the problem we study here, but none addresses it directly: we aim to discover a small set of sequential patterns where the delays between subsequent events in a pattern are modeled with a probability distribution.

Experiments
In this section we empirically evaluate HOPPER on synthetic and real-world data. We implement HOPPER in Python and provide the source code along with the synthetic and real-world data in the supplementary (eda.rg.cispa.io/prj/hopper). We compare HOPPER to SKOPUS (Petitjean et al. 2016) as a representative statistically significant sequential pattern miner, to SQS (Tatti and Vreeken 2012), SQUISH (Bhattacharyya and Vreeken 2017), and ISM (Fowkes and Sutton 2016) as representatives of the general class of pattern set miners, and to PPM (Galbrun et al. 2018) as a representative of the periodic pattern miners.
For each dataset we sample uniformly at random one sequence of length 10,000 over an alphabet of 500 events, into which we plant 10 unique patterns uniformly at random locations while avoiding collisions. The frequency of planted patterns, their length, and the delay distributions between events we vary per experiment. As evaluation we consider the standard F1 score, where, to reward partial discovery, we weight a reported pattern pr by the relative edit distance to the planted pattern pp, that is, w(pr, pp) = max(1 − lev(pr, pp)/|pp|, 0), where lev is the Levenshtein edit distance. Since we do not want to reward redundant discoveries, we cap the total reward to one per planted pattern. To illustrate, consider the example where we plant one pattern abcd, and discover two patterns, ab and cd. We value both as 0.5. As we can technically reconstruct the generating pattern we hence have a recall of one, but, as we have to do so using two rather than one pattern, we have a precision of 0.5. This way we reward partial discoveries, which is especially relevant for methods that are designed to pick up events that occur close to one another, but might miss the full pattern if it includes a long delay. We provide additional details on the evaluation in the supplementary.

Sanity Check We start with a sanity check, where we run HOPPER on 20 datasets without structure, generated uniformly at random. It correctly does not report any patterns.

Delay Distributions Next, we test how well HOPPER can recover patterns for varying numbers of delay distributions. We consider the case of no delay distributions up to a pattern including a delay distribution between every subsequent pair of events. We plant 10 unique patterns of length 10 and in total 200 pattern occurrences, that is, on expectation 20 instances per pattern. As delay distributions, we plant uniform distributions with a delay of between 10 to 20 time steps. We present the results in the first panel of Fig. 2. We observe that HOPPER performs on par when there are no delay distributions and outperforms the state of the art when we increase their number. We find that SQUISH performs on par with SQS in our experiments and, to avoid clutter, from here onward postpone its results to the supplementary.

Low Frequency Next, we evaluate performance with low-frequency patterns; we decrease the frequency of the total number of planted patterns. We consider the same setting as above, where we set the number of distributions to four and decrease the total number of planted patterns from 200 to 100, that is, on expectation, from 20 to 10 per pattern. We show the results in the 2nd panel of Fig. 2. We observe that HOPPER outperforms all other methods, ultimately reducing to the performance of SQS in the low-frequency domain.

Long Delays Next, we investigate how robust HOPPER is to long delays; to this end we plant 10 patterns at 200 locations. We plant patterns of length 3, with Normally distributed inter-event delays with a standard deviation of one, and increase the mean stepwise from 1 to 180. We present the results in the third panel of Fig. 2. We observe HOPPER is very robust against long delays: even with an expected delay of 180 between the individual events it achieves a very high F1 score. In contrast, its competitors do not fare well; SQS and SKOPUS perform well initially but then quickly deteriorate.
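To make the scoring used throughout these synthetic experiments precise, here is a small, self-contained Python sketch of the edit-distance-weighted F1 described above. The aggregation follows the paper's illustration: for precision each reported pattern counts by its best planted match, and for recall each planted pattern collects at most a total reward of one; the exact matching protocol is in the paper's supplementary, so treat these aggregation choices as assumptions.

def lev(a, b):
    """Levenshtein edit distance via single-row dynamic programming."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1,                        # deletion
                                   d[j - 1] + 1,                    # insertion
                                   prev + (a[i - 1] != b[j - 1]))   # substitution
    return d[n]

def weight(reported, planted):
    """w(pr, pp) = max(1 - lev(pr, pp) / |pp|, 0)."""
    return max(1.0 - lev(reported, planted) / len(planted), 0.0)

def weighted_f1(reported, planted):
    """Edit-distance-weighted F1, with reward capped at 1 per planted pattern."""
    if not reported or not planted:
        return 0.0
    # precision: each reported pattern counts by its best planted match
    precision = sum(max(weight(r, p) for p in planted) for r in reported) / len(reported)
    # recall: each planted pattern collects at most a total reward of 1
    recall = sum(min(1.0, sum(weight(r, p) for r in reported)) for p in planted) / len(planted)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# The paper's example: planting 'abcd' and reporting 'ab' and 'cd'
# gives precision 0.5 and recall 1.0, hence F1 = 2/3.
print(weighted_f1(['ab', 'cd'], ['abcd']))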
High Variance Finally, we evaluate HOPPER under increasing variance of inter-event delays. To this end we plant 400 occurrences of 10 patterns of length 3, with Normally distributed delays with mean 50, varying the standard deviation. We show the results in the last panel of Fig. 2. We observe that HOPPER gets near-perfect results for lower variance and a high F1 score until a standard deviation of 7, at which point 95% of the probability mass is distributed over a range of 28 timestamps. In general, we observe that the higher the frequency, the more robust we are against higher variance. We can see that SKOPUS is consistent under increasing variance. This is probably due to the fact that SKOPUS does not care about the distance between events, only about the order in which they occur.

Figure 2: [Higher is better] F1 scores for recovering patterns from synthetic data. From left to right, we evaluate for varying numbers of inter-event distributions (# Θs per pattern), expected frequency of a pattern, mean inter-event delay, resp. different standard deviations for normally distributed delays. We see that HOPPER performs on par with SQS when inter-event delays are few and simply structured, and outperforms the competition with a large margin whenever their structure is more complicated.

Real World Results

Next, we evaluate HOPPER on real-world data. We use ten datasets that together span a wide range of use-cases. We consider a dataset of all national Holidays in a European country over a century, the playlist a local Radio station recorded over a month, the Lifelog² of all activities of one person recorded over seven years, the MIDI data of a hundred Bach Chorales (Dua and Graff 2017), all commits to the Samba project for over ten years (Galbrun et al. 2018), the Rolling Mill production log of a steel manufacturing plant (Wiegand, Klakow, and Vreeken 2021), the discretized muscle activations of professional ice Skating riders (Moerchen and Fradkin 2010), and finally, three texts from the Gutenberg project: Romeo and Juliet by Shakespeare, A Room with a View by E.M. Forster, and The Great Gatsby by F. Scott Fitzgerald. We give the total number of events per dataset in Table 1 and further statistics in the supplementary.

² https://quantifiedawesome.com/

We run HOPPER, SQS, ISM, PPM, and SKOPUS on all datasets. We report the number of patterns (|P|), the average expected distance between the first and last event (E(w) = E(w_p[|p|] − w_p[0])), and for HOPPER the number of discovered delay distributions (#Θ).

Dataset    ||D||  |  HOPPER             |  SQS          |  PPM
                  |  |P|   #Θ    E(w)   |  |P|   E(w)   |  |P|    E(w)
Holidays   37k    |  1     7     393    |  3     19.2   |  14     51.6
Radio      16k    |  22    43    48     |  15    5.8    |  587    71.9
Lifelog    40k    |  37    68    129    |  58    3.9    |  1.6k   119.1
Samba      29k    |  40    101   110    |  221   2.7    |  1.4k   17.1
Chorales   7k     |  56    57    4.7    |  114   2.6    |  433    2.6
Rolling    54k    |  237   489   7.4    |  470   5.0    |  3.6k   181.9
Skating    26k    |  86    160   9.1    |  160   4.0    |  1.4k   55.5
Romeo      37k    |  254   284   12.9   |  254   2.8    |  2.3k   332.7
Room       87k    |  565   610   3.1    |  701   2.5    |  –      –
Gatsby     64k    |  439   488   7.3    |  519   2.6    |  4.7k   641.9

Table 1: Results on real-world data. For HOPPER, SQS, and PPM we report the number of discovered patterns (|P|) and the average expected distance between the first and last symbol of a pattern (E(w)). For HOPPER we additionally give the total number of discovered inter-event distributions (#Θ).
In the interest of space we postpone the results of ISM and SKOPUS, along with the metrics runtime and average events per pattern, to the supplementary. HOPPER terminates within seconds to hours, depending on the dataset. We find that while HOPPER and SQS discover similar numbers of patterns, those that HOPPER discovers reveal much longer-range dependencies and, in general, include more events. PPM results in an order of magnitude more patterns, most of which are singletons. Next we look at the results for Holidays and Radio in more detail.

On the Holidays dataset, HOPPER finds a single pattern,

May 1st --155--> National Holiday --83--> 1st Christmas Day --1--> 2nd Christmas Day --6--> New Year --80-112--> Good Friday --3--> Easter Monday --49--> Whit Monday,

where all delay distributions are uniform. The pattern precisely describes all fixed and all lunar-calendar-dependent holidays within the year. In contrast, the competing methods only find fractions of this pattern, such as 1st Christmas Day, 2nd Christmas Day. We show the results for all methods in the supplementary.

The Radio dataset includes all the songs played, as well as the ad slots and news segments, for a local radio station over the course of a month. On this data, HOPPER discovers the pattern

Jingle --0--> Ads --0--> News --0--> Jingle --U(3,5)--> Jingle,

where the 0-gaps correspond to geometric distributions with p = 1 and the last inter-event delay is a uniform distribution. Other methods find comparable patterns or parts of this pattern, but none give the immediate insight that the first four events follow directly after one another and that the last Jingle plays between 3 to 5 events after the previous one. More importantly, unlike other methods, HOPPER also picks up patterns such as

Solo Para --G(0.02)--> As It Was --N(48,25)--> I Believe --P(24)--> Anyone for You,

which confirm our suspicion that radio stations often play the same sequence of particularly popular songs interspersed with less-well-known songs. No other method finds any comparable patterns. HOPPER discovers much longer patterns than its competitors: whereas most competitors find patterns of length 2, and SQS patterns of at most 4 events, HOPPER discovers patterns of up to 7 events long. Together, this illustrates that HOPPER finds patterns that are not only more detailed in terms of the delay structure, but also in which events they describe.
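As a concrete illustration of this pattern language, a delay-annotated pattern such as the Radio pattern above could be represented as follows. This is an illustrative data structure of ours, not HOPPER's actual implementation, and the interpretation of a 0-gap as "next position" is an assumption.

from dataclasses import dataclass
from typing import Callable, List
import random

@dataclass
class DelayedPattern:
    """A sequential pattern: events plus one delay sampler per gap."""
    events: List[str]
    delays: List[Callable[[], int]]  # len(delays) == len(events) - 1

    def sample_occurrence(self, start: int) -> List[int]:
        """Sample absolute positions of one occurrence starting at `start`."""
        times = [start]
        for gap in self.delays:
            times.append(times[-1] + gap())
        return times

# The Radio pattern: three immediate transitions (geometric with p = 1,
# i.e. the next event follows directly), then a uniform 3-5 gap.
radio = DelayedPattern(
    events=["Jingle", "Ads", "News", "Jingle", "Jingle"],
    delays=[lambda: 1, lambda: 1, lambda: 1,
            lambda: random.randint(3, 5)],
)
print(radio.sample_occurrence(start=0))  # e.g. [0, 1, 2, 3, 7]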
Conclusion

We considered the problem of summarizing sequential data with a small set of patterns with inter-event delays. We formalized the problem in terms of the Minimum Description Length principle and presented the greedy HOPPER algorithm. On synthetic data we saw that our method recovers the ground truth well and is robust against long delays and high variance. On real-world data we observed that HOPPER finds meaningful patterns that go beyond what state-of-the-art methods can capture. While methods that only consider the order of events can in theory find patterns with long delays, they often do not do so in practice. We introduce a more powerful pattern language that enables us to discover new structure in data. This comes with the trade-off of a much larger search space and, in theory, makes us more susceptible to noise; however, the experiments have shown that this is not a problem in practice: HOPPER achieves a high F1 score on all experiments in Fig. 2, despite these having 80% or more noise. Currently, we model the delay between subsequent events in a pattern. In practice, some events may depend on an event earlier in the pattern. We see it as an interesting direction for future work to extend our pattern language to include rule-like dependencies.

References

Agrawal, R.; and Srikant, R. 1995. Mining sequential patterns. In ICDE, 3–14. Los Alamitos, CA, USA: IEEE Computer Society.
Bhattacharyya, A.; and Vreeken, J. 2017. Squish: Efficiently Summarising Event Sequences with Rich Interleaving Patterns. In SDM, 11.
Bloem, P.; and de Rooij, S. 2020. Large-Scale Network Motif Analysis Using Compression. DAMI, 34: 1421–1453.
Cram, D.; Mathern, B.; and Mille, A. 2012. A Complete Chronicle Discovery Approach: Application to Activity Analysis. Expert Systems, 29(4): 321–346.
Cüppers, J.; Kalofolias, J.; and Vreeken, J. 2022. Omen: Discovering Sequential Patterns with Reliable Prediction Delays. KAIS, 64(4): 1013–1045.
Dauxais, Y.; Guyet, T.; Gross-Amblard, D.; and Happe, A. 2017. Discriminant Chronicles Mining: Application to Care Pathways Analytics. In AIME, 234–244. Springer.
Dua, D.; and Graff, C. 2017. UCI Machine Learning Repository.
Fowkes, J.; and Sutton, C. 2016. A Subsequence Interleaving Model for Sequential Pattern Mining. In KDD.
Galbrun, E.; Cellier, P.; Tatti, N.; Termier, A.; and Crémilleux, B. 2018. Mining Periodic Patterns with a MDL Criterion. In ECML PKDD, 535–551. Springer.
Giannotti, F.; Nanni, M.; Pedreschi, D.; and Pinelli, F. 2006. Mining Sequences with Temporal Annotations. In Proceedings of the 2006 ACM Symposium on Applied Computing, 593–597. Dijon, France: ACM. ISBN 978-1-59593-108-5.
Grünwald, P. 2007. The Minimum Description Length Principle. MIT Press.
Jenkins, S.; Walzer-Goldfeld, S.; and Riondato, M. 2022. SPEck: Mining Statistically-Significant Sequential Patterns Efficiently with Exact Sampling. Data Min Knowl Disc, 36(4): 1575–1599.
Laxman, S.; Sastry, P. S.; and Unnikrishnan, K. P. 2007. A Fast Algorithm for Finding Frequent Episodes in Event Streams. In KDD, 410–419. ACM.
Li, M.; and Vitányi, P. 1993. An Introduction to Kolmogorov Complexity and its Applications. Springer.
Low-Kam, C.; Raissi, C.; Kaytoue, M.; and Pei, J. 2013. Mining Statistically Significant Sequential Patterns. In ICDM, 488–497. Dallas, TX, USA: IEEE. ISBN 978-0-7695-5108-1.
Marx, A.; and Vreeken, J. 2019. Telling Cause from Effect by Local and Global Regression. KAIS, 60: 1277–1305.
Moerchen, F.; and Fradkin, D. 2010. Robust Mining of Time Intervals with Semi-Interval Partial Order Patterns. In SDM, 315–326.
Nanni, M.; and Rigotti, C. 2007. Extracting Trees of Quantitative Serial Episodes. In Džeroski, S.; and Struyf, J., eds., Knowledge Discovery in Inductive Databases, volume 4747, 170–188. Berlin, Heidelberg: Springer. ISBN 978-3-540-75548-7.
Petitjean, F.; Li, T.; Tatti, N.; and Webb, G. I. 2016. Skopus: Mining Top-k Sequential Patterns under Leverage. DAMI, 30(5): 1086–1111.
Rissanen, J. 1983. A Universal Prior for Integers and Estimation by Minimum Description Length. Annals Stat., 11(2): 416–431.
Tatti, N.; and Vreeken, J. 2012. The Long and the Short of It: Summarizing Event Sequences with Serial Episodes. In KDD, 462–470. ACM.
Tonon, A.; and Vandin, F. 2019. Permutation Strategies for Mining Significant Sequential Patterns. In ICDM, 1330–1335. Beijing, China: IEEE. ISBN 978-1-72814-604-1.
Vereshchagin, N. K.; and Vitányi, P. M. B. 2004. Kolmogorov's Structure Functions and Model Selection. IEEE Transactions on Information Theory, 50(12): 3265–3290.
Wang, J.; and Han, J. 2004. BIDE: Efficient Mining of Frequent Closed Sequences. In ICDE, 79–90.
Wiegand, B.; Klakow, D.; and Vreeken, J. 2021. Mining Easily Understandable Models from Complex Event Logs. In SDM, 10.
Yan, X.; Han, J.; and Afshar, R. 2003. CloSpan: Mining Closed Sequential Patterns in Large Datasets. In SDM, 166–177. SIAM.
Yen, S.-J.; and Lee, Y.-S. 2013. Mining Non-Redundant Time-Gap Sequential Patterns. Applied Intelligence, 39(4): 727–738.
Yoshida, M.; Iizuka, T.; Shiohara, H.; and Ishiguro, M. 2000. Mining Sequential Patterns Including Time Intervals. In Dasarathy, B. V., ed., AeroSense 2000, 213–220. Orlando, FL.
Unveiling Implicit Deceptive Patterns in Multi-Modal Fake News via Neuro-Symbolic Reasoning

Yiqi Dong1, Dongxiao He1,2*, Xiaobao Wang2*, Youzhu Jin3, Meng Ge4, Carl Yang5, Di Jin1,2
1School of New Media and Communication, Tianjin University, Tianjin, China, 2Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China, 3Beijing-Dublin International College, Beijing University of Technology, Beijing, China, 4Saw Swee Hock School of Public Health, National University of Singapore, Singapore, 5Department of Computer Science, Emory University, Georgia, USA.
{dongyiqi, hedongxiao, wangxiaobao}@tju.edu.cn, [email protected], [email protected], [email protected], [email protected]
*Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In the current Internet landscape, the rampant spread of fake news, particularly in the form of multi-modal content, poses a great social threat. While automatic multi-modal fake news detection methods have shown promising results, the lack of explainability remains a significant challenge. Existing approaches provide superficial explainability by displaying learned important components or views from well-trained networks, but they often fail to uncover the implicit deceptive patterns that reveal how fake news is fabricated. To address this limitation, we begin by predefining three typical deceptive patterns, namely image manipulation, cross-modal inconsistency, and image repurposing, which shed light on the mechanisms underlying fake news fabrication. Then, we propose a novel Neuro-Symbolic Latent Model called NSLM that not only derives accurate judgments on the veracity of news but also uncovers the implicit deceptive patterns as explanations. Specifically, the existence of each deceptive pattern is expressed as a two-valued learnable latent variable, which is acquired through amortized variational inference and weak supervision based on symbolic logic rules. Additionally, we devise pseudo-siamese networks to capture distinct deceptive patterns effectively. Experimental results on two real-world datasets demonstrate that our NSLM achieves the best performance in fake news detection while providing insightful explanations of deceptive patterns.

1 Introduction

Nowadays, the Internet's rapid expansion has greatly facilitated the dissemination and acquisition of information. However, this also provides an avenue for malicious actors to fabricate and spread fake news with ulterior motives. The ubiquity of fake news makes it challenging for individuals to discern reliable information online and significantly threatens the modern media ecosystem (Allcott and Gentzkow 2017; Wang et al. 2023). This hazard becomes even more evident against the backdrop of Large Language Models (LLMs) such as ChatGPT (OpenAI 2023), which inadvertently generate and propagate fake information due to AI hallucination (Goldstein et al. 2023).

Figure 1: Typical examples of fake news manifesting different deceptive patterns. (a) Image manipulation, with the post text: "A picture authentically shows former U.S. President Donald Trump holding a '24-karat, gold-plated Trump bill.'" (b) Image repurposing, with the post text: "This is Elon musk and his parent. They had a black women helper. Who was not allowed to seat on their sofas."
On the other hand, the Internet is increasingly flooded with multi-modal (e.g., text and image) online posts, renowned for their heightened allure and deceptive attributes (Cao et al. 2020). Consequently, developing automatic detection systems to verify and combat multi-modal fake news has become an urgent necessity. Existing efforts utilizing Deep Neural Networks (DNNs) have been made to tackle the multi-modal fake news detection problem by integrating various features (Dhawan et al. 2022), constructing graphs (Jin et al. 2022a,b), or exploring cross-modal correlations (Qi et al. 2021; Dong et al. 2023). While achieving promising results, such methods often lack explainability and are commonly referred to as "black boxes", as they focus on learning unclear latent features (Mishima and Yamana 2022). Poor explainability not only severely undermines user trust but also impedes system debugging and upgrading. Recently, several approaches have attempted to provide explanations by highlighting the contributive semantic components within the text description and image regions (Wu, Liu, and Zhang 2023), exhibiting coarse prediction scores from each view, including individual modality and cross-modality correlation (Ying et al. 2023), or jointly locating evident contents and their logic interactions (Liu, Wang, and Li 2023). These explanations display the input components or views most relevant to the predictions in some way. However, they overlook a different route to explainability, one that involves uncovering how fake news is fabricated, which we term deceptive patterns implicit in the news.

Our starting point is that, tracing back to the root, the diverse and unique features manifested within fake news articles stem from various deceptive patterns employed during their creation. We posit that unveiling these patterns could enhance the detection of fake news and provide succinct explanations of why the news is fake. Accordingly, inspired by common visual patterns prevalent in fake news (Cao et al. 2020), we explore three primary deceptive patterns frequently utilized to forge fake news: image manipulation, cross-modal inconsistency, and image repurposing. Among them, cross-modal inconsistency refers to the semantic inconsistency between text and image, which is a readily understandable pattern. Hence, we present two imperceptible fake news examples related to the other two patterns, sourced from Snopes¹, in Figure 1. At first glance, neither of them appears to be fake. However, in the original image shown in Figure 1 (a), Trump was holding a pen, not a commemorative bill, clearly indicating image manipulation. As for the image in Figure 1 (b), it actually depicts an unnamed mother, daughter, and maid in Johannesburg, South Africa, during apartheid, which conflicts with the textual description of the news, revealing image repurposing.

In practice, jointly predicting news authenticity and mining deceptive patterns as explanations is challenging due to the lack of deceptive pattern labels for news samples in the dataset. Furthermore, deceptive patterns within fake news, as exemplified in Figure 1, are often not easily recognizable even to human annotators, rendering manual labeling unfeasible and augmenting the intricacy of our task. Thus, this study attempts to answer the question: can we unveil those unlabeled deceptive patterns in multi-modal news as an insightful and concise explanation?
Fortunately, from the perspective of human cognition, there is at least one deceptive pattern if the news is fake, while there is no deceptive pattern if the news is real. Inspired by the powerful expressive capabilities of first-order logic in capturing complex relationships (Enderton 2001), we start by formalizing these rules using first-order logic as a form of weak supervision, inspired by (Chen et al. 2022a). By doing so, we establish a correlation between the available labels for news authenticity and the presence of unsupervised deceptive patterns, enabling the underlying deceptive patterns to be automatically learned. Building upon these insights, we propose a Neuro-Symbolic Latent Model (NSLM) that concurrently predicts the veracity of news and reveals deceptive patterns as explanations. Central to our NSLM is the modeling of each deceptive pattern's existence as a corresponding two-valued learnable latent variable, learned through weak supervision from logic rules. Specifically, the presence prediction of each deceptive pattern is treated as an atomic predicate in the logic rules, and the final prediction is aggregated using the conjunction of these individual predicates. This design effectively captures that the presence of one or more deceptive patterns indicates fake news, whereas the absence of all deceptive patterns confirms the news as real. Overall, we formulate the problem as probabilistic maximum likelihood estimation with latent variables and adopt variational auto-encoding (Kingma and Welling 2014) to address it. To effectively capture different deception patterns, we design a pseudo-siamese network within the encoder. In addition, we employ a distillation-based strategy to constrain the learning of the latent variables to the pre-specified logic rules.

¹ https://www.snopes.com

To sum up, the contributions of our work are threefold:
• We propose a novel fake news detection approach named NSLM, capable of revealing the unlabeled deceptive patterns within multi-modal news data as illuminating explanations.
• Each deceptive pattern is treated as a two-valued learnable latent variable, and we introduce logic rules based on human cognition to provide weak supervision for the existence of the proposed three deceptive patterns.
• Experimental results on two benchmark datasets demonstrate that our NSLM achieves state-of-the-art performance in fake news detection and provides clear explanations for its predictions.

2 Preliminaries

2.1 Task Definition

Given a news article x with text x_t, an attached image x_v, and image contexts x_r retrieved by image inverse search (Zlatkova, Nakov, and Koychev 2019), this work aims at predicting its label y ∈ {Real, Fake} by modeling the probability distribution p(y | x), while at the same time mining its deceptive patterns acting as explanations. Here, we associate the presence of each proposed deceptive pattern with a two-valued learnable latent variable z_k ∈ {Not Exist, Exist}, k ∈ {IM, CI, IR}, where z_IM stands for image manipulation, z_CI for cross-modal inconsistency, and z_IR for image repurposing. Note that we assume the independence of the z_k, and we further define z = (z_IM, z_CI, z_IR). Formally, our objective function based on maximum likelihood estimation is given as follows:

$\max \mathcal{O} = \mathbb{E}_{(x, y^*) \sim p_{\mathrm{train}}} \log p(y^* \mid x)$,   (1)

where y* is the ground-truth label of news article x, and p_train denotes the distribution of the training data.
2.2 Logic Rules

To introduce weak supervision signals for imperceptible deceptive patterns and subsequently unveil these patterns as explanations, our model incorporates logic rules based on human intuition. We empirically observe that fake news typically involves at least one deceptive pattern, whereas true news lacks any deceptive patterns. These logical intuitions are regarded as a crucial link connecting the veracity of news and the presence of deceptive patterns. Moreover, they are well-suited to be represented using a first-order logic language with strong expressive capabilities, which can be formulated as follows:

$z_{\mathrm{IM}} \wedge z_{\mathrm{CI}} \wedge z_{\mathrm{IR}} \Rightarrow y$,   (2)

where z_k serves as a unary body predicate, y serves as the head predicate, and the conjunction operator ∧ shows the relationship between the body predicates. The detailed reasoning rules derived from Eq. (2) can then be defined as:

$y = \mathrm{Fake} \;\text{iff}\; \exists\, z_k = \mathrm{Exist}; \qquad y = \mathrm{Real} \;\text{iff}\; \forall\, z_k = \mathrm{Not\ Exist}$.   (3)

With the above definitions in place, we will subsequently introduce the proposed latent model NSLM and outline how the logic rules are employed to supervise it.

Figure 2: Architecture of the proposed NSLM. The main modules of our model include Pattern Mining, Encoder, Decoder, and Logical Rule Constraints. The learning of deceptive patterns in NSLM is constrained by symbolic logic rules. Here ⊕ denotes the concatenation operation; both y* and z are converted into vectors in continuous space.

3 Methodology

Figure 2 illustrates the framework of the proposed NSLM, which aims to uncover implicit deceptive patterns in fake news acting as explanations when giving authenticity predictions of news. To achieve this, we formulate a neuro-symbolic latent model and represent each deceptive pattern as a two-valued learnable latent variable z_k that requires inference. As shown in Figure 2, our NSLM consists of a pattern mining module, an encoder, and a decoder, while also integrating a logical constraint component for guided learning. Given a multi-modal news article as input, the pattern mining module initially extracts coarse-grained features linked to the three deception modes using pre-trained models. Subsequently, the encoder employs pseudo-siamese networks to process features from the pattern mining module, producing distinct latent variables, which are then fed into the decoder for the final news credibility predictions. Besides, taking inspiration from (Hu et al. 2016), we apply knowledge distillation to incorporate information from the logic rules into the variables y and z. In practice, we optimize the NSLM through a variational inference-based algorithm, where both the encoder and decoder are jointly optimized to train the model.
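Before the probabilistic treatment, the hard reasoning rules of Eq. (3) are simple enough to state directly in code. This is a minimal illustration of ours, not part of NSLM itself:

from typing import Dict

def aggregate_label(z: Dict[str, bool]) -> str:
    """Hard logic of Eq. (3): fake iff at least one deceptive pattern exists.

    z maps each pattern in {'IM', 'CI', 'IR'} to True (Exist) / False (Not Exist).
    """
    return "Fake" if any(z.values()) else "Real"

print(aggregate_label({"IM": False, "CI": True, "IR": False}))   # Fake
print(aggregate_label({"IM": False, "CI": False, "IR": False}))  # Real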
3.1 Probabilistic Formalization

We begin by formulating fake news detection from a probabilistic standpoint, where the underlying deceptive patterns are treated as latent variables. Assuming that news articles are independent of each other, the objective function in Eq. (1) can be equivalently decomposed into maximizing the logarithmic likelihood for each news article. Hence, we next delve into the details of our NSLM from the perspective of an individual news article. For a piece of news x, our objective is to compute the target distribution, considering the incorporation of latent variables, as follows:

$p_\theta(y \mid x) = \sum_{z} p_\theta(y \mid z, x)\, p(z \mid x)$,   (4)

where p_θ(y | z, x) defines the conditional probability of y given the input x and latent variables z, parameterized by θ, and p(z | x) denotes the prior distribution of the latent variables z conditioned on the input x. Nevertheless, because the latent variables introduce additional dimensions to the parameter space, direct optimization using the EM algorithm becomes computationally intractable. To address this, we adopt recent advancements in variational inference, i.e., the amortization of the variational posterior distribution using neural networks (Kingma and Welling 2014). Specifically, a variational posterior distribution q_ω(z | x, y) is introduced to approximate the true posterior distribution p_θ(z | x, y), which turns the objective function for news x into maximizing the well-known Evidence Lower BOund (ELBO), defined as:

$\mathrm{ELBO} = \mathbb{E}_{q_\omega(z \mid x, y)}\big[\log p_\theta(y \mid z, x)\big] - D_{\mathrm{KL}}\big[q_\omega(z \mid x, y) \,\|\, p(z \mid x)\big]$,   (5)

where D_KL[·] denotes the Kullback-Leibler divergence. Here we treat Eq. (5) with a negative sign as one term of the overall loss function to minimize:

$\mathcal{L}_{\mathrm{elbo}}(\theta, \omega) = -\mathrm{ELBO}$.   (6)

3.2 Parameterization

Pattern Mining In pursuit of capturing the three underlying deceptive patterns within fake news, we devise three branches to extract pertinent features u_k, k ∈ {IM, CI, IR}, corresponding to the three patterns, i.e., Image Manipulation, Cross-modal Inconsistency, and Image Repurposing.

To capture image manipulation, we leverage InceptionV3 (Szegedy et al. 2016) coupled with a fully connected layer to extract coarse features u_IM ∈ R^d (d is the fixed feature dimension) from the image's frequency domain. This choice stems from the fact that recompressed or tampered images often exhibit periodicity in the frequency domain (Qi et al. 2019), which can be effectively discerned by InceptionV3.

To find the cross-modal inconsistency, the pre-trained ResNet34 (He et al. 2016) and RoBERTa (Liu et al. 2019) with fully connected layers are employed to extract semantic features e_v, e_t ∈ R^d from the image and text, respectively. Leveraging the advantages of BiLinear Similarity (Kim et al. 2017) in capturing intricate relationships between two features, we apply it to uncover inconsistencies between e_v and e_t, computed as:

$u_{\mathrm{CI}}[i] = e_v^{\top} W_{\mathrm{CI}}^{i}\, e_t + b_{\mathrm{CI}}^{i}$,   (7)

where u_CI ∈ R^d denotes the pattern features for the cross-modal inconsistency, u_CI[i], i ∈ {1, 2, ..., d}, is the component value of the i-th dimension of u_CI, W^i_CI ∈ R^{d×d} is a learnable parameter matrix, and b^i_CI is the bias for u_CI[i].

As for image repurposing, it is difficult to detect solely from the news content, since the content does not contain any contexts where the original image appeared. Therefore, we employ image reverse search to retrieve the contextual information of images from the Web. This process can be efficiently automated and scaled to a large number of images using Google's Vision API², which returns a list of pages and entities related to the image. We gather the concatenated entities as image contexts. They are fed into RoBERTa to obtain their representation e_r ∈ R^d. Similarly, another BiLinear Similarity is applied between the text embedding e_t and the image contexts embedding e_r to learn their differences:

$u_{\mathrm{IR}}[i] = e_t^{\top} W_{\mathrm{IR}}^{i}\, e_r + b_{\mathrm{IR}}^{i}$,   (8)

where u_IR ∈ R^d represents the pattern features indicating image repurposing. The parameter dimensions are consistent with those in Eq. (7).

² http://cloud.google.com/vision/
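Eqs. (7) and (8) have exactly the form computed by a standard bilinear layer, so one such pattern-feature branch could be sketched in PyTorch as follows (d = 256 as in the paper's implementation details; the module name is ours, not from any released code):

import torch
import torch.nn as nn

class BilinearSimilarity(nn.Module):
    """Computes u[i] = a^T W_i b + b_i for i = 1..d, as in Eqs. (7)-(8)."""
    def __init__(self, d: int = 256):
        super().__init__()
        # nn.Bilinear holds d weight matrices of shape (d, d) plus a bias vector
        self.bilinear = nn.Bilinear(d, d, d)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.bilinear(a, b)

d = 256
sim = BilinearSimilarity(d)
e_v, e_t = torch.randn(8, d), torch.randn(8, d)  # a batch of image/text features
u_ci = sim(e_v, e_t)                             # cross-modal inconsistency features
print(u_ci.shape)                                # torch.Size([8, 256])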
Encoder & Decoder After calculating the above representations, we parameterize the variational distribution q_ω(z | x, y) and the target distribution p_θ(y | z, x) with neural networks, which correspond to the encoder and decoder in the variational autoencoder, respectively. The encoder is designed as a pseudo-siamese structure, whose goal is to generate a set of latent variables z that represent the diverse deception patterns. More precisely, since $q_\omega(z \mid x, y) = \prod_k q_{\omega,k}(z_k \mid x, y)$, we employ three structurally consistent but weight-disjoint sub-networks to model the three distinct distributions q_{ω,k}(z_k | x, y); each sub-network consists of two fully connected layers with a softmax function. Each sub-network k utilizes the concatenation of u_k and the embedding of y as input to generate the probability distribution of z_k. The decoder mirrors the encoder's sub-network structure. It takes the concatenation of the probability distributions of z_IM, z_CI, z_IR, along with e_v and e_t, as input to predict the distribution of the news credibility label y.

Logical Rule Constraints We adopt a knowledge distillation strategy with a teacher model and a student model to integrate the logic rules into the latent variables, providing weak supervision inspired by (Chen et al. 2022a). The teacher model projects the variational distribution q_ω(z | x, y) into a subspace q*_ω(y_z | x, y) adhering to the logic rules, with y_z ∈ {Real, Fake} representing the logical aggregation of z. This allows us to transfer logical knowledge to the student model p_θ(y | z, x) that we aim to optimize. The whole process can be understood analogously to human education, where a knowledgeable teacher possesses systematic general rules and guides students by offering her solutions to specific questions (Hu et al. 2016). The following distillation loss is defined to guide this process:

$\mathcal{L}_{\mathrm{logic}}(\theta, \omega) = D_{\mathrm{KL}}\big(p_\theta(y \mid z, x) \,\|\, q^{\star}_{\omega}(y_z \mid x, y)\big)$.   (9)

A pivotal aspect here is how to obtain the logical aggregation label y_z: we transfer the hard logic defined in Section 2.2 into soft logic with product t-norms (Li et al. 2019) to ensure differentiability. The projected distribution q*_ω(y_z | x, y) is then given by:

$q^{\star}_{\omega}(y_z = \mathrm{Real} \mid x, y) = \prod_k q_{\omega,k}(z_k = \mathrm{Not\ Exist} \mid x, y), \quad q^{\star}_{\omega}(y_z = \mathrm{Fake} \mid x, y) = 1 - q^{\star}_{\omega}(y_z = \mathrm{Real} \mid x, y)$.   (10)

3.3 Model Learning

Next, we introduce the optimization strategy to achieve the objective in Eq. (1). Combining the ELBO loss L_elbo and the logic loss L_logic, our final loss function L_all for the news x is defined as:

$\mathcal{L}_{\mathrm{all}}(\theta, \omega) = (1 - \mu)\,\mathcal{L}_{\mathrm{elbo}}(\theta, \omega) + \mu\,\mathcal{L}_{\mathrm{logic}}(\theta, \omega)$,   (11)

where µ ∈ (0, 1) balances the two terms. During the training process, all training news samples are sequentially processed through the pattern mining module, encoder, and decoder, which are jointly optimized using Eq. (11). It is crucial to highlight that in the variational distribution q_ω(z | x, y), y is actually the ground-truth label y* for each x during training. In our encoder, y* is converted into a one-hot encoding and then used to derive embeddings.
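For concreteness, the soft-logic aggregation of Eq. (10) and the combined loss of Eqs. (9) and (11) can be sketched in PyTorch as follows; the variable names are illustrative, the [Real, Fake] column ordering is our convention, and the ELBO term is assumed to be computed elsewhere:

import torch
import torch.nn.functional as F

def soft_logic_label(q_not_exist: torch.Tensor) -> torch.Tensor:
    """Eq. (10): product t-norm over the per-pattern posteriors.

    q_not_exist: (batch, 3) probabilities q_k(z_k = Not Exist | x, y).
    Returns (batch, 2) with columns [q(y_z = Real), q(y_z = Fake)].
    """
    q_real = q_not_exist.prod(dim=-1)
    return torch.stack([q_real, 1.0 - q_real], dim=-1)

def total_loss(decoder_logits: torch.Tensor, q_not_exist: torch.Tensor,
               elbo_loss: torch.Tensor, mu: float = 0.5) -> torch.Tensor:
    """Eq. (11): (1 - mu) * L_elbo + mu * L_logic."""
    p_y = F.softmax(decoder_logits, dim=-1)              # p_theta(y | z, x)
    q_yz = soft_logic_label(q_not_exist).clamp_min(1e-8)
    # Eq. (9) is D_KL(p || q*); F.kl_div(input=log q, target=p) computes
    # sum p * (log p - log q), i.e. KL(p || q)
    logic_loss = F.kl_div(q_yz.log(), p_y, reduction="batchmean")
    return (1.0 - mu) * elbo_loss + mu * logic_loss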
3.4 Model Inference

During the testing phase, the input news samples are first processed through the pattern mining module. Then we randomly initialize the probability of z from a standard Gaussian distribution and use it as input for the decoder. The output distribution of news authenticity y generated by the decoder is then passed through the encoder. This process, involving passing through the decoder and encoder, continues to update the distributions of y and z iteratively until convergence. As a result, we obtain both the final news veracity and the latent variables with respect to the deceptive patterns, providing valuable insights into how a piece of news is forged. This end-to-end training and decoding approach contributes to a more reliable and transparent explanation mechanism.

4 Experiments

4.1 Datasets and Experimental Setup

Datasets We evaluate the proposed NSLM on two real-world datasets, Fakeddit (Nakamura, Levy, and Wang 2020) and Weibo (Jin et al. 2017). The Fakeddit dataset is derived from diverse subreddits on the Reddit platform, comprising comments and metadata. Notably, due to the abundance of short-text samples in Fakeddit, extracting their internal semantic information poses challenges. To this end, we create a subset of the dataset by selecting samples with a token count greater than 15 for further evaluation. In the Weibo dataset, the real news samples are gathered from Xinhua News Agency, a reputable news source in China, while the fake news samples are verified using Weibo's official rumor debunking system. In our study, we exclude samples for which the corresponding image or Google inverse search results are unavailable. Statistically, Fakeddit comprises 31,011 news samples for training and 6,181 for testing, whereas Weibo consists of 5,455 news samples for training and 1,493 for testing.

Comparison Models To validate the performance of the proposed NSLM, we compare it against 11 baselines, covering two categories of models: 1) uni-modal methods, consisting of the pre-trained ResNet34 (He et al. 2016), InceptionV3 (Szegedy et al. 2016), and RoBERTa (Liu et al. 2019) models combined with a fully connected layer; 2) multi-modal methods, containing EANN (Wang et al. 2018), SpotFake (Singhal et al. 2019), BTIC (Zhang, Gui, and He 2021), HMCAN (Qian et al. 2021), CAFE (Chen et al. 2022b), CMC (Wei et al. 2022), BMR (Ying et al. 2023), and LogicDM (Liu, Wang, and Li 2023). These methods commonly utilize deep neural networks and well-designed strategies, such as cross-modal knowledge exploitation and contrastive learning. Although BMR and LogicDM offer a certain degree of explainability, they do not effectively identify the deceptive patterns that reveal how fake news is fabricated. In the experiments, we employ the same preprocessed data to re-run the official code provided by the aforementioned papers for comparison.

Implementation Details In our NSLM, we adopt a randomly sampled Gaussian distribution as the prior distribution p(z | x). We set the dimension d to 256 and the trade-off weight µ to 0.5. During training, we use a batch size of 8, while for testing, the batch size is set to 16. We employ a learning rate of 1e-5 for both datasets. The Fakeddit dataset allows a maximum text length of 45 and an image contexts length of 12, while for the Weibo dataset, the respective maximum lengths are 110 for text and 10 for image contexts. The whole model is trained with the Adaptive Moment Estimation (Adam) optimizer (Kingma and Ba 2014).
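The alternating decode-encode inference described in Sec. 3.4 can be sketched as a simple fixed-point loop. Here `decoder` and `encoder` are assumed to be the trained networks, `pattern_feats` a dict of the per-pattern features u_k, and the convergence test and the softmax-of-Gaussian initialization are illustrative choices of ours:

import torch

@torch.no_grad()
def iterative_inference(decoder, encoder, pattern_feats, max_iters=20, tol=1e-4):
    """Alternate p(y | z, x) and q(z | x, y) until the estimates stop changing."""
    batch = pattern_feats["CI"].shape[0]
    # random initialization of the z probabilities (3 patterns, 2 states each)
    q_z = torch.softmax(torch.randn(batch, 3, 2), dim=-1)
    p_y = None
    for _ in range(max_iters):
        p_y_new = decoder(q_z, pattern_feats)    # distribution over {Real, Fake}
        q_z_new = encoder(p_y_new, pattern_feats)  # per-pattern posteriors
        converged = p_y is not None and (p_y_new - p_y).abs().max() < tol
        p_y, q_z = p_y_new, q_z_new
        if converged:
            break
    return p_y, q_z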
4.2 Experiment Results

Table 1 presents a comprehensive comparison of our NSLM against popular baseline methods in terms of Accuracy, Precision, Recall, and Macro F1 score. The results consistently indicate NSLM's superior performance over the other models across all four metrics on both datasets; in particular, NSLM brings 1.6% and 1.0% improvements in Accuracy over the best-performing CMC on the Fakeddit and Weibo datasets, respectively. Such performance proves the efficacy of unraveling the mechanisms underpinning fake news fabrication.

Table 1 also reveals that the image uni-modal approaches yield rather poor performance, particularly evident on the Weibo dataset, which is characterized by semantically intricate images. In stark contrast, the text uni-modal method exhibits much better performance, emphasizing the pivotal role of textual information in effective fake news detection. Moreover, the multi-modal methods generally achieve even more promising results, which demonstrates the potential for complementary effects of the two modalities to improve detection accuracy. Among the multi-modal models, we can observe that the results of CAFE are suboptimal. This could be attributed to CAFE's consideration of cross-modal ambiguity, which can be regarded as a specific aspect of deceptive patterns and might not universally apply in real scenarios. On the other hand, we notice the exceptional performance of CMC, which may relate to its adeptness in leveraging feature correlations through a well-designed mutual learning strategy. It is important to note that CMC's two-stage nature introduces additional training time and complexity compared to the others. Regarding the best results of our NSLM, we believe this benefits from our model's ability to reveal how fake news is fabricated, enabling the identification of common deceptive patterns shared among fake news.

4.3 Ablation Study

To thoroughly comprehend the impact of each suggested deceptive pattern and their collective significance, we systematically exclude each pattern (w/o z_k) individually and combinations of two patterns (w/o z_k, z_j, where k ≠ j). The empirical results for the model variants in Accuracy and Macro F1 scores are reported in Table 2.

From Table 2, it is evident that the removal of two latent variables has a more pronounced adverse impact on the model's performance than the elimination of only one. This suggests that discarding more pattern features leads to inferior results. Specifically, on the Fakeddit dataset, we find that the contribution of cross-modal inconsistency (z_CI) holds slightly higher significance among the three deceptive patterns. Conversely, on the Weibo dataset, image manipulation (z_IM) is the most influential. This divergence may arise from the variations in deceptive pattern distributions across datasets with different languages and platforms. The ablation results confirm the significance of capturing the three proposed deceptive patterns in enhancing performance, as removing any of these patterns results in decreased accuracy.
Categories                    Models       |           Fakeddit                   |            Weibo
                                           | Accuracy Precision Recall  Macro F1  | Accuracy Precision Recall  Macro F1
Uni-modal (image)             ResNet34     | 0.721    0.722     0.630   0.632     | 0.561    0.556     0.556   0.555
                              InceptionV3  | 0.737    0.726     0.665   0.674     | 0.584    0.583     0.583   0.583
Uni-modal (text)              RoBERTa      | 0.832    0.819     0.806   0.812     | 0.829    0.828     0.829   0.829
Multi-modal (text+image)      EANN         | 0.826    0.821     0.790   0.801     | 0.727    0.749     0.738   0.726
                              SpotFake     | 0.891    0.901     0.859   0.875     | 0.839    0.840     0.842   0.838
                              BTIC         | 0.897    0.888     0.885   0.886     | 0.835    0.838     0.838   0.835
                              HMCAN        | 0.892    0.885     0.876   0.880     | 0.832    0.833     0.835   0.832
                              CAFE         | 0.848    0.844     0.816   0.826     | 0.812    0.818     0.817   0.812
                              CMC          | 0.909    0.906     0.892   0.898     | 0.875    0.875     0.877   0.875
Multi-modal (text+image)      BMR          | 0.901    0.890     0.890   0.891     | 0.843    0.843     0.843   0.843
+ Explainability              LogicDM      | 0.873    0.867     0.850   0.858     | 0.852    0.852     0.852   0.852
                              NSLM (Ours)  | 0.925    0.919     0.915   0.917     | 0.885    0.884     0.885   0.884

Table 1: Comparison with the considered uni-modal and multi-modal baselines on the Fakeddit and Weibo datasets in terms of Accuracy, Precision, Recall, and Macro F1 score. The best results are in bold.

Models           |  Fakeddit            |  Weibo
                 |  Accuracy  Macro F1  |  Accuracy  Macro F1
NSLM             |  0.925     0.917     |  0.885     0.884
w/o z_IM         |  0.923     0.914     |  0.864     0.863
w/o z_CI         |  0.922     0.913     |  0.874     0.873
w/o z_IR         |  0.923     0.914     |  0.874     0.874
w/o z_IM, z_CI   |  0.918     0.910     |  0.865     0.865
w/o z_IM, z_IR   |  0.921     0.912     |  0.865     0.865
w/o z_CI, z_IR   |  0.919     0.910     |  0.861     0.861

Table 2: Comparison with different variants of NSLM. The best results are in bold. "w/o" abbreviates "without"; "F1" denotes Macro F1.

4.4 Overall Evaluation of Deceptive Patterns

The revelation of underlying deceptive patterns in fake news is a fundamental aspect of our model. To achieve this, we employ logical constraints to weakly supervise the learning of the deceptive patterns z. By adjusting the trade-off weight µ in the overall loss function Eq. (11), we aim to investigate the impact of varying levels of logical supervision on the quality of the learned latent variables z, and how it subsequently affects the model performance. The results depicted in Figure 3 show the influence of varying the weight µ from 0.1 to 0.9 on three key metrics: Acc evaluates the overall accuracy of the predicted label y; Acc_h and Acc_s indicate the accuracy of y_z obtained by logical aggregation of z through hard logic (Eq. (3)) and soft logic (Eq. (10)), respectively, which evaluates the overall quality of the learned z.

Starting with µ = 0.1, Figure 3 illustrates that, with limited logical supervision, both Acc_h and Acc_s exhibit rather low values. This means that the latent variables z inadequately capture precise deceptive patterns in the absence of adequate logical guidance. As we increase this weight gradually, the quality of z improves significantly. Notably, when µ reaches 0.5, all the metrics achieve their best values. However, as the value continues to increase, the model's performance tends to stabilize or even exhibit a slight decline. This observation highlights the significance of logic rules in shaping the quality of the learned z and emphasizes that a moderate value of µ is crucial to achieving optimal model performance.

Figure 3: Evaluation of the overall accuracy and deceptive pattern quality by varying the trade-off weight µ in the loss function Eq. (11); the two panels (Weibo, Fakeddit) plot Acc, Acc_h, and Acc_s over µ from 0.1 to 0.9.
In addition, the overall Acc exhibits remarkable robustness to variations in the weight, showing relatively minor fluctuations throughout the range. This finding verifies our NSLM's ability to discover deceptive patterns without compromising its overall predictive accuracy. In conclusion, the experimental analysis clarifies the effectiveness of deceptive patterns and the essential role of logical constraints.

4.5 Case Study

To give an intuitive comprehension of our NSLM's explainability, we display the outputs of several fake news cases from the Weibo dataset in Figure 4. This illustration includes the retrieved image contexts, the learned deceptive patterns, and the predicted authenticity labels for each news example.

Figure 4: Several fake examples of learned deceptive patterns from the Weibo dataset. The texts are translated from Chinese to English. Good cases and bad cases showcase the successes and limitations of our NSLM, respectively.
(a) Good case. Text: "Pigmen were found in Jishan Pagoda, Shuangyang Town, Zhangping City, Fujian Province." Image contexts: Pigman, Human-Beast, Hybrid. Learned deceptive patterns: image manipulation. Predicted label: Fake.
(b) Good case. Text: "The ¥20 insurance purchased when buying a plane ticket, many of which include delay insurance." Image contexts: Chemotherapy, Treatment, Liver cancer. Learned deceptive patterns: cross-modal inconsistency, image repurposing. Predicted label: Fake.
(c) Bad case. Text: "#Boston Explosion# The 8-year-old child died in today's explosion." Image contexts: Fitness, Power t-shirts, French marathon. Learned deceptive patterns: None. Predicted label: Fake.
(d) Bad case. Text: "The 10th grade Math in the U.S. Please summarize your feelings in one sentence!" Image contexts: Mathematics Textbook, High School, USA. Learned deceptive patterns: None. Predicted label: Real.

For the first two cases, NSLM performs effectively. In case (a), it captures the presence of image manipulation, since the manipulated ears in its image provide clear evidence of fabrication through image tampering. In case (b), a stark incongruity between text and image is evident. This inconsistency also exists between the text and the retrieved image contexts, such as chemotherapy, treatment, and liver cancer, which deviate from the textual description. The above observations substantiate the presence of both cross-modal inconsistency and image repurposing in this case.

We also present two bad cases, (c) and (d), in Figure 4 to further analyze the limitations of our NSLM. In case (c), our model incorrectly identifies the absence of deceptive patterns, likely due to poor image contexts retrieved through reverse search, failing to recognize the actual content of the image depicting a girl wearing a race bib for the Chosun City Jogging 5K. However, as we can see, the final predicted label is correct, suggesting that the imposed logical constraints may not be effectively incorporated. In the last example, the learned results also indicate the absence of all three deceptive patterns, resulting in an erroneous judgment of the prediction y, which tries to be consistent with the logically aggregated label of z. While the image indeed represents a genuine math book, determining whether it belongs to the mentioned American 10th-grade mathematics requires leveraging external knowledge.
It is worth mentioning that though several approaches achieve explainability by emphasizing specific content components or views in image and text of news, real-world scenarios may not always allow humans the time or expertise to carefully analyze every sample. Instead, they require clear and concise explanations. NSLM excels in providing such explanations directly, unveiling the deceptive patterns in fake news. For example, if people know that the case in Figure 4 (a) contains a deceptive pattern of image manipulation, they can quickly judge it as fake. This superiority becomes particularly valuable when dealing with large-scale datasets and time-sensitive situations, where quick and accurate decisions are paramount. 5 Related Works Explainable fake news detection has become a prominent area of research. For instance, (Chen et al. 2022a) made notable contributions in the field of fact-checking by utilizing evidential information and combining phrase-level veracity reasoning to determine the veracity of entire claims. This approach provides a more clear explanation. (Ying et al. 2023) disentangles multi-modal features through single-view prediction and explains which view is critical to the final decision. (Liu, Wang, and Li 2023) integrated logical clauses to express the reasoning process of the target task, identifying the contributing factors and selecting appropriate perspectives for explanations. While the above models achieved certain explainability, none could reveal the deceptive patterns within multi-modal fake news as concise explanations. Our work uniquely bridges the gap by unveiling those patterns through the constraints of symbolic logic rules. 6 Conclusion In this work, we blaze a novel path to explainability by elucidating unlabeled deceptive patterns within multi-modal news. In detail, we propose NSLM that converts the veracity of a news article into the presence of a set of deceptive patterns, thereby providing insightful explanations. Deceptive practices are constantly evolving, potentially giving rise to new patterns. So in the future, we plan to extend our model into a dynamically adaptable framework to adapt to these evolving patterns through the incorporation of a versatile combined pattern mining module, which is an extension of the Pattern Mining module in Figure 2. This extended module operates by amalgamating various input sources, thereby enabling the selection of specific inputs and the extraction of implicit deceptive pattern characteristics. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8360 Acknowledgments This work was supported by the National Key Research and Development Program of China (2023YFC3304503), the National Natural Science Foundation of China (No. 62302333, 92370111, 62272340, 62276187), and the China Postdoctoral Science Foundation (No. 2023M732593). Carl Yang was not supported by any funds from China. References Allcott, H.; and Gentzkow, M. 2017. Social media and fake news in the 2016 election. Journal of economic perspectives, 31(2): 211–36. Cao, J.; Qi, P.; Sheng, Q.; Yang, T.; Guo, J.; and Li, J. 2020. Exploring the role of visual content in fake news detection. Disinformation, Misinformation, and Fake News in Social Media: Emerging Research Challenges and Opportunities, 141–161. Chen, J.; Bao, Q.; Sun, C.; Zhang, X.; Chen, J.; Zhou, H.; Xiao, Y.; and Li, L. 2022a. Loren: Logic-regularized reasoning for interpretable fact verification. 
In Proceedings of the 36th AAAI Conference on Artificial Intelligence, volume 36, 10482–10491. Chen, Y.; Li, D.; Zhang, P.; Sui, J.; Lv, Q.; Tun, L.; and Shang, L. 2022b. Cross-modal ambiguity learning for multimodal fake news detection. In Proceedings of the 14th ACM Web Conference, 2897–2905. Dhawan, M.; Sharma, S.; Kadam, A.; Sharma, R.; and Kumaraguru, P. 2022. Game-on: Graph attention network based multimodal fusion for fake news detection. arXiv preprint arXiv:2202.12478. Dong, Y.; He, D.; Wang, X.; Li, Y.; Su, X.; and Jin, D. 2023. A generalized deep markov random fields framework for fake news detection. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence, 4758– 4765. Enderton, H. B. 2001. A mathematical introduction to logic. Elsevier. Goldstein, J. A.; Sastry, G.; Musser, M.; DiResta, R.; Gentzel, M.; and Sedova, K. 2023. Generative language models and automated influence operations: Emerging threats and potential mitigations. arXiv preprint arXiv:2301.04246. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, 770–778. Hu, Z.; Ma, X.; Liu, Z.; Hovy, E.; and Xing, E. 2016. Harnessing deep neural networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2410–2420. Jin, D.; Wang, L.; Zheng, Y.; Li, X.; Jiang, F.; Lin, W.; and Pan, S. 2022a. CGMN: A contrastive graph matching network for self-supervised graph similarity learning. In Proceedings of the 31st International Joint Conference on Artificial Intelligence, 2101–2107. Jin, D.; Wang, R.; Ge, M.; He, D.; Li, X.; Lin, W.; and Zhang, W. 2022b. RAW-GNN: RAndom walk aggregation based graph neural network. In Proceedings of the 31st International Joint Conference on Artificial Intelligence, 2108–2114. Jin, Z.; Cao, J.; Guo, H.; Zhang, Y.; and Luo, J. 2017. Multimodal fusion with recurrent neural networks for rumor detection on microblogs. In Proceedings of the 25th ACM International Conference on Multimedia, 795–816. Kim, J.-H.; On, K.-W.; Lim, W.; Kim, J.; Ha, J.-W.; and Zhang, B.-T. 2017. Hadamard product for low-rank bilinear pooling. In Proceedings of the 5th International Conference on Learning Representations. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kingma, D. P.; and Welling, M. 2014. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations. Li, T.; Gupta, V.; Mehta, M.; and Srikumar, V. 2019. A logicdriven framework for consistency of neural models. In Proceedings of the 24th Conference on Empirical Methods in Natural Language Processing, 3924–3935. Liu, H.; Wang, W.; and Li, H. 2023. Interpretable multimodal misinformation detection with logic reasoning. arXiv preprint arXiv:2305.05964. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Mishima, K.; and Yamana, H. 2022. A survey on explainable fake news detection. IEICE TRANSACTIONS on Information and Systems, 105(7): 1249–1257. Nakamura, K.; Levy, S.; and Wang, W. Y. 2020. Fakeddit: A new multimodal benchmark dataset for fine-grained fake news detection. In Proceedings of the 12th Language Resources and Evaluation Conference, 6149–6157. OpenAI. 2023. ChatGPT. 
https://openai.com/blog/chatgpt. Qi, P.; Cao, J.; Li, X.; Liu, H.; Sheng, Q.; Mi, X.; He, Q.; Lv, Y.; Guo, C.; and Yu, Y. 2021. Improving fake news detection by using an entity-enhanced framework to fuse diverse multimodal clues. In Proceedings of the 29th ACM International Conference on Multimedia, 1212–1220. Qi, P.; Cao, J.; Yang, T.; Guo, J.; and Li, J. 2019. Exploiting multi-domain visual information for fake news detection. In Proceedings of the 19th IEEE International Conference on Data Mining, 518–527. Qian, S.; Wang, J.; Hu, J.; Fang, Q.; and Xu, C. 2021. Hierarchical multi-modal contextual attention network for fake news detection. In Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval, 153–162. Singhal, S.; Shah, R. R.; Chakraborty, T.; Kumaraguru, P.; and Satoh, S. 2019. Spotfake: A multi-modal framework for fake news detection. In Proceedings of the 5th IEEE International Conference on Multimedia Big Data, 39–47. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8361 Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, 2818–2826. Wang, X.; Dong, Y.; Jin, D.; Li, Y.; Wang, L.; and Dang, J. 2023. Augmenting affective dependency graph via iterative incongruity graph learning for sarcasm detection. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, volume 37, 4702–4710. Wang, Y.; Ma, F.; Jin, Z.; Yuan, Y.; Xun, G.; Jha, K.; Su, L.; and Gao, J. 2018. Eann: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th ACM Sigkdd International Conference on Knowledge Discovery & Data Mining, 849–857. Wei, Z.; Pan, H.; Qiao, L.; Niu, X.; Dong, P.; and Li, D. 2022. Cross-modal knowledge distillation in multi-modal fake news detection. In Proceedings of the 48th IEEE International Conference on Acoustics, Speech and Signal Processing, 4733–4737. Wu, L.; Liu, P.; and Zhang, Y. 2023. See how you read? Multi-reading habits fusion reasoning for multi-modal fake news detection. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, volume 37, 13736–13744. Ying, Q.; Hu, X.; Zhou, Y.; Qian, Z.; Zeng, D.; and Ge, S. 2023. Bootstrapping multi-view representations for fake news detection. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, volume 37, 5384–5392. Zhang, W.; Gui, L.; and He, Y. 2021. Supervised contrastive learning for multimodal unreliable news detection in COVID-19 pandemic. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 3637–3641. Zlatkova, D.; Nakov, P.; and Koychev, I. 2019. Factchecking meets fauxtography: verifying claims about images. In Proceedings of the 24th Conference on Empirical Methods in Natural Language Processing, 2099–2108. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8362
ShapeBoost: Boosting Human Shape Estimation with Part-Based Parameterization and Clothing-Preserving Augmentation
Siyuan Bian1, Jiefeng Li1, Jiasheng Tang3,4, Cewu Lu1,2
1Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
2MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
3DAMO Academy, Alibaba Group, Hangzhou, China
4Hupan Lab, Hangzhou, China
{biansiyuan, ljf_likit, lucewu}@sjtu.edu.cn, [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Accurate human shape recovery from a monocular RGB image is a challenging task because humans come in different shapes and sizes and wear different clothes. In this paper, we propose ShapeBoost, a new human shape recovery framework that achieves pixel-level alignment even for rare body shapes and high accuracy for people wearing different types of clothes. Unlike previous approaches that rely on the use of PCA-based shape coefficients, we adopt a new human shape parameterization that decomposes the human shape into bone lengths and the mean width of each part slice. This part-based parameterization technique achieves a balance between flexibility and validity using a semi-analytical shape reconstruction algorithm. Based on this new parameterization, a clothing-preserving data augmentation module is proposed to generate realistic images with diverse body shapes and accurate annotations. Experimental results show that our method outperforms other state-of-the-art methods in diverse body shape situations as well as in varied clothing situations.

1 Introduction
Human pose and shape (HPS) recovery from monocular RGB images is an essential task of computer vision. It serves as a basis for human behavior understanding and has applications in various fields such as Virtual Reality, Augmented Reality, and Autopilot. Recent methods (Zhang et al. 2022; Li et al. 2022b,a, 2021) achieve high accuracy in human pose estimation, but their results of human shape estimation are often suboptimal. Due to the scarcity of image datasets featuring diverse body shapes, many existing methods for recovering human pose and shape suffer from overfitting on body shape estimation. Their results are particularly unsatisfactory for very thin or plump people.
Previous approaches have attempted to solve the overfitting issue through two main strategies. The first kind of methods (Varol et al. 2017; Sengupta, Budvytis, and Cipolla 2020, 2021b,a) train on synthetic data and exploit proxy representations to reduce the domain gap, while the second kind of methods (Dwivedi et al. 2021; Omran et al. 2018; Agarwal and Triggs 2005) exploit shape cues which are easy to annotate as weak supervision. However, for the first kind of methods, the synthetic images are unnatural with unrealistic texture and clothing, and the extracted proxy representations may be ambiguous and inaccurate. The situation is especially severe when the individual is wearing thick garments or is occluded in the image. For the second kind of methods, since 2D clues such as segmentations and silhouettes are highly correlated with the human pose and clothing, supervising with 2D clues may give wrong guidance of human shape in the case of inaccurate pose estimation or thick clothing. Moreover, the real-world images of extreme shapes are still insufficient.
SHAPY (Choutas et al. 2022) improves the second kind of methods by using linguistic attributes and body measurements as supervision, which allows it to make better estimates for clothed people. However, similar to other models trained on real-world datasets, it still performs poorly on images of people with extreme body shapes because of the lack of extreme body shapes in the training datasets. To sum up, just as shown in Fig. 1, the first kind of methods often fail on images with people in occlusion or thick clothing, while the second kind of methods often fail on images containing people with extreme body shapes.

Figure 1: Previous SOTA methods for human shape estimation (Sengupta, Budvytis, and Cipolla 2021a; Choutas et al. 2022) (b, c) either fail on images of people wearing thick clothes or fail on images of people with extreme body shapes, while our method (d) achieves pixel-aligned results with high accuracy in both situations. Warmer colors on the human mesh represent higher per-vertex error.

To overcome the above limitations, we propose ShapeBoost, a new shape recovery framework based on a novel part-based shape parameterization. The new shape parameters are composed of bone lengths and mean widths of body part slices. Using a novel semi-analytical algorithm, the body shape can be accurately and robustly recovered from these parameters. During training, the bone lengths can be calculated from human keypoints, and the part widths are regressed by the neural network. Compared to the original shape parameters derived from PCA coefficients, our new part-based parameterization has a clear local semantic meaning, making it easier to regress and more flexible in application.
During training, ShapeBoost augments new image-shape pairs by randomly transforming the raw image and calculating the corresponding part-based parameters. For image transformation, a clothing-preserving augmentation method is proposed: we first segment the human body out of the image and randomly transform it into a different shape. Then, the human segmentation is pasted back onto the inpainted background image with the guidance of the appearance consistency heatmap (Fang et al. 2019). The corresponding shape parameters can be analytically retrieved by applying the equivalent transformation since each component in the part-based representation is clearly defined.
Compared to previous approaches, ShapeBoost generates realistic images of diverse human shapes in natural clothing together with the corresponding faithful annotations. Moreover, our new parameterization accurately describes the extreme body shapes and encourages pixel-level alignment. As a result, our method overcomes the disadvantages of existing methods and achieves high accuracy on images of people in thick clothes as well as on images of people with extreme body shapes. We benchmark our method on the SSP-3D (Sengupta, Budvytis, and Cipolla 2020) and HBW (Choutas et al. 2022) datasets. The results show that our method achieves state-of-the-art performance in both thick clothes situations and extreme body shape situations.
The main contributions of this paper are summarized as follows:
• We present an accurate and robust human shape parameterization together with a semi-analytical shape recovery algorithm, which is flexible and interpretable.
• We propose ShapeBoost, a human shape recovery framework consisting of a clothing-preserving data augmentation module and a shape reconstruction module.
• Our approach outperforms previous approaches and can handle diverse clothing as well as extreme body shapes.

2 Related Work
2.1 3D Human Pose and Shape (HPS)
Many algorithms have been proposed for reconstructing human pose and shape from RGB images, which are broadly categorized into two types. Firstly, model-based methods estimate parameters of a parameterized human model. Some methods (Bogo et al. 2016; Pavlakos et al. 2019; Guan et al. 2009) estimate human pose and shape parameters by optimization. Regression-based methods (Kanazawa et al. 2018; Kocabas, Athanasiou, and Black 2020; Kocabas et al. 2021; Li et al. 2022b, 2021), on the contrary, employ neural networks to estimate the parameters. To reduce the difficulty of regression, many regression-based methods employ intermediate representations, including keypoints (Kanazawa et al. 2018; Li et al. 2021, 2023b,a), silhouettes (Pavlakos et al. 2018), segmentation (Omran et al. 2018), and 2D/3D heatmaps (Tung et al. 2017). Some approaches (Kolotouros et al. 2019; Muller et al. 2021; Joo, Neverova, and Vedaldi 2021) combine optimization and regression. Secondly, model-free methods directly predict free-form representations of the human body, with the position of body model vertices predicted based on image features (Corona et al. 2022; Kolotouros, Pavlakos, and Daniilidis 2019; Varol et al. 2018; Lin, Wang, and Liu 2021a,b; Moon and Lee 2020), keypoints (Choi, Moon, and Lee 2020), or segmentations (Varol et al. 2018). These methods mostly focus on human pose estimation and their results of human shape estimation are often unsatisfactory.
Our work belongs to the model-based category, and we adopt inverse kinematics to estimate the human pose similar to HybrIK (Li et al. 2021) for simplicity. However, instead of directly regressing the shape parameters, we employ a flexible and interpretable parameterization and a new shape reconstruction pipeline to achieve more accurate and robust shape estimation. Our method can also be easily applied to different pose estimation backbones.

2.2 Estimating 3D Body Shape
Most recent HPS estimation methods excel in precise pose estimation but exhibit limitations in accurately estimating the real human body shape under clothing. Some methods have attempted to address this issue, and they mainly focus on novel training datasets and the estimation framework.

Figure 2: The overall pipeline. First, the input image is randomly transformed with the clothing-preserving image transformation, and a convolutional neural network (CNN) is employed to extract skeleton, part widths and twist rotations. Then, the pose is obtained using inverse kinematics and the shape is obtained with our semi-analytical algorithm. The final mesh is retrieved based on the pose and shape parameter. The ShapeBoost framework consists of the image augmentation module and the shape reconstruction module.

Training datasets for human shape estimation. Accurately annotating body shapes from 2D human datasets (Lin et al. 2014) is hard, and commonly-used 3D human datasets (von Marcard et al. 2018; Ionescu et al. 2013) contain a limited number of people.
To overcome this limitation, some researchers have created synthetic image datasets by rendering the mesh generated by parameterized human models (Hoffmann et al. 2019; Sengupta, Budvytis, and Cipolla 2020; Varol et al. 2017; Weitz et al. 2021). However, it is difficult to obtain images with natural clothing and realistic scenes using naive rendering. Recently, more realistic synthetic datasets (Bertiche, Madadi, and Escalera 2020; Pumarola et al. 2019; Liang and Lin 2019; Patel et al. 2021; Black et al. 2023) have been proposed, which contain people in different clothing with the help of human scans, simulation or deep generative networks. Choutas et al. (Choutas et al. 2022) have proposed the Model-Agency dataset, which uses images from model agency websites labeled with linguistic attributes and measurements. Although these new datasets contain more diverse body shapes, most datasets still lack people with extreme body shapes, and the authenticity of synthetic images remains insufficient.
Estimation Framework. Several methods (Sengupta, Budvytis, and Cipolla 2020, 2021b,a) train the network directly on synthetic data. To reduce the domain gap, they use proxy representations (PRs) as input, such as part segmentation masks (Varol et al. 2017), silhouettes (Sengupta, Budvytis, and Cipolla 2020; Ruiz et al. 2022), Canny edge detection results (Sengupta, Budvytis, and Cipolla 2021b,a) or 2D keypoint heatmaps (Sengupta, Budvytis, and Cipolla 2020, 2021b,a). Other work (Dwivedi et al. 2021; Omran et al. 2018; Agarwal and Triggs 2005) uses real-world data for training and exploits 2D shape cues as supervision. Body-part segmentation masks (Dwivedi et al. 2021; Omran et al. 2018) and silhouettes (Agarwal and Triggs 2005) are widely used among them. LVD (Corona et al. 2022) learns the vertex descent direction based on image-aligned features, and SHAPY (Choutas et al. 2022) uses linguistic attributes and body measurements as supervision.
Unlike previous work, our method generates images with diverse human body shapes without altering clothing, lighting, and background details. Therefore, the diversity is rich and the domain gap is small. Since our framework utilizes our new parameterization, there is no ambiguity even when the human is in thick clothing, and our method will not enlarge the error even when the pose estimation is inaccurate.

3 Method
In this section, we present our solution for human shape recovery (Fig. 2). First, we give background knowledge of the parameterization of the SMPL model in Sec. 3.1. Considering its drawbacks, a flexible and interpretable part-based human shape parameterization is proposed in Sec. 3.2. Based on this new parameterization, in Sec. 3.3, we design a new human shape recovery framework called ShapeBoost. The training pipeline and loss functions are described in Sec. 3.4.

3.1 Preliminary
SMPL Model. In this work, the SMPL model (Loper et al. 2015) is employed to represent human body pose and shape. SMPL provides a differentiable function V(θ, β) that maps pose θ ∈ R^{3J} and shape parameters β ∈ R^{10} to a human mesh V, where J is the number of joints. The pose parameters θ represent the relative rotation of body joints, and the shape parameters β are coefficients of a PCA body shape basis. The SMPL mesh is derived in two steps:

$$T = S(\beta), \quad (1)$$
$$V = V(\theta, \beta) = P(\theta, S(\beta)). \quad (2)$$

First, a rest-pose mesh T is constructed using function S. Second, the rest-pose mesh is driven to the target pose by function P. The shape of the mesh is determined only by β, and the posing procedure does not change the body shape. Most current methods regress shape parameters β directly. However, since most available training datasets lack people with diverse body shapes, these methods often overfit and fail to generalize to unseen body shapes.
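To make Eqs. 1-2 concrete, the following is a minimal, schematic sketch of the two-step construction using generic PCA blend shapes plus linear blend skinning (LBS). The tensor names (v_template, shapedirs, weights, joint_transforms) are illustrative assumptions, not the exact API of the SMPL release.

```python
# Schematic SMPL-style two-step mesh construction (Eqs. 1-2).
import numpy as np

def S(beta, v_template, shapedirs):
    # Eq. 1: rest-pose mesh T from shape coefficients beta.
    # v_template: (V, 3); shapedirs: (V, 3, 10); beta: (10,)
    return v_template + np.einsum("vdk,k->vd", shapedirs, beta)

def P(joint_transforms, T, weights):
    # Eq. 2 (schematic): pose the rest mesh with LBS.
    # joint_transforms: (J, 4, 4) rigid transforms derived from theta;
    # weights: (V, J) linear blending weights.
    A = np.einsum("vj,jab->vab", weights, joint_transforms)   # per-vertex transform
    T_h = np.concatenate([T, np.ones((T.shape[0], 1))], axis=1)  # homogeneous coords
    return np.einsum("vab,vb->va", A, T_h)[:, :3]

def V(joint_transforms, beta, v_template, shapedirs, weights):
    # V(theta, beta) = P(theta, S(beta)): shape first, then pose.
    return P(joint_transforms, S(beta, v_template, shapedirs), weights)
```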
3.2 Part-based Parameterization
In this work, we propose a novel parameterization of human shape using bone lengths and widths of part slices. Compared to the β representation which uses a global descriptor of the body shape, this new representation allocates shape descriptors to local body parts. This allows the network to learn from local image features and thus alleviates the overfitting problem. Furthermore, our parameterization is more flexible and interpretable, allowing compatibility with our data augmentation procedure discussed in Sec. 3.3.
In our parameterization, the SMPL mesh is divided into J = 24 segments according to the linear blending weight, and each segment has a corresponding central bone ended with two joints. The distance of one vertex from its corresponding bone is called the "width" of this vertex for short. Each body part is further sliced into n components along the bone, and the mean widths of the vertices in these n slices are used to represent the thickness of that part. The segmenting and slicing technique is visually illustrated in Fig. 3. In this way, the formula of the SMPL model is converted to:

$$T = M(l, w), \quad (3)$$
$$V = P(\theta, M(l, w)), \quad (4)$$

where l ∈ R^{J−1} represents the bone lengths of the body skeleton and w ∈ R^{nJ} represents the mean widths of all part slices. Under our new representation, the SMPL model first derives a rest-pose mesh using M(l, w), and then uses function P to drive the mesh to the target pose just like the original SMPL model.

Figure 3: Illustration of the shape decomposition procedure. From left to right, the figure shows the part segmentation, the definition of bone length and vertex width, and the slicing of one body part.

Deriving the function M directly by a neural network is nontrivial and can lead to overfitting. Therefore, a semi-analytical algorithm is proposed that first solves a roughly correct mesh using analytical methods and then uses a multilayer perceptron (MLP) to correct the result using error feedback techniques. We can analytically retrieve a body shape that roughly conforms to the target bone lengths and part slice widths by (1) stretching the bones and broadening each part slice of the template mesh according to the target values, (2) using linear blend weights (LBS weights) to assemble these adjusted parts, and (3) using the PCA coefficients of SMPL to retrieve the shape parameters from the deformed template mesh. This mapping is referred to as M0.
Since the input bone lengths and part widths often contain noise, the analytical algorithm sometimes produces suboptimal body shapes. Therefore, we use a 4-layer MLP to modify the analytically-retrieved shape parameters. The final formula of M can be written as

$$T = M(l, w) = \mathrm{MLP}(M_0(l, w), l, w, \Delta l, \Delta w), \quad (5)$$

where Δl and Δw are the differences between the target bone lengths and part slice widths and the corresponding values obtained by M0.
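As an illustration of Eq. 5, the sketch below assumes an analytical solver analytic_M0 that returns a rough shape estimate together with the bone lengths and part widths it actually achieves, so that the error-feedback terms Δl and Δw can be formed. Whether the MLP predicts the corrected shape parameters directly or a residual on top of M0, and the hidden sizes used, are our assumptions for illustration.

```python
# A minimal sketch of the semi-analytical mapping M in Eq. 5.
import torch
import torch.nn as nn

class SemiAnalyticalShape(nn.Module):
    def __init__(self, J=24, n=1, beta_dim=10, hidden=256):
        super().__init__()
        lw_dim = (J - 1) + n * J                    # dims of l and w together
        self.mlp = nn.Sequential(                   # a 4-layer MLP, as in the text
            nn.Linear(beta_dim + 2 * lw_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, beta_dim),
        )

    def forward(self, l, w, analytic_M0):
        # analytic_M0: stretch/broaden the template, reassemble with LBS
        # weights, project onto the PCA basis; returns (beta0, l0, w0).
        beta0, l0, w0 = analytic_M0(l, w)
        dl, dw = l - l0, w - w0                     # error feedback Δl, Δw
        x = torch.cat([beta0, l, w, dl, dw], dim=-1)
        return beta0 + self.mlp(x)                  # refined shape parameters
```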
In practice, instead of regressing the bone lengths directly, we extract the bone lengths from human keypoints. This setting further encourages the network to only focus on local, per-part image features and thus alleviate overfitting.

3.3 ShapeBoost
Armed with the part-based parameterization discussed in Sec. 3.2, we can manipulate the body shape in an intuitive way by stretching the bone lengths and broadening the part slice widths. These manipulations enable us to augment the raw human images and retrieve the new ground truth body shape which accurately explains the figure in the image after the transformation. This framework, named ShapeBoost, generates diverse body shapes while preserving clothing, lighting, and background details, and then makes use of our new parameterization to reconstruct the body shape.
Clothing-preserving Image Transformation. An intuitive way to change the human shape in an image is to apply an affine transformation to the input image. For example, scaling an image with an aspect ratio unequal to 1 results in a visually thinner or ampler human figure. However, applying the affine transform to the entire image results in a stretched background, which may leak the scaling information and thus incur overfitting. To alleviate this problem, we propose a silhouette-based augmentation method inspired by Instaboost (Fang et al. 2019). Instead of affine transforming the whole image, we first segment the human body out using the ground truth segmentation. Then we inpaint the background image, affine transform the segmented human body, and paste the transformed human body back onto the inpainted background image with the guidance of the appearance consistency heatmap (Fang et al. 2019). This method effectively avoids background stretching and produces more natural-looking images. The process is visually illustrated in Fig. 4.

Figure 4: Illustration of clothing-preserving transformation.

To simplify the discussion, we assume that the affine transformation consists of a rotation matrix and a scaling matrix, which is written as

$$T = SR = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}. \quad (6)$$

Shape-parameter Derivation. People in different poses are affected by the image transformation in different ways, which poses a great challenge for the derivation of the PCA-based shape parameters after the image transformation. However, with the part-based parameterization, we can still accurately explain the new body shape by estimating the widths and bone lengths of each body part. We use orthographic projection in our derivation. Given the camera and pose parameters, the bone lengths after transformation can be easily obtained by stretching the bones to ensure a consistent 2D joint projection.
Compared to the derivation of bone lengths, the derivation of the part slice widths after transformation is more complex. Suppose a vertex indexed by k belongs to the j-th part. The distance of the vertex from the part bone on the 2D image plane, denoted by w^{2D}_k, is affected by the transformation according to the following equation:

$$\bar{w}^{2D}_k = ab \cdot \frac{l^{2D}_j}{\bar{l}^{2D}_j} \, w^{2D}_k, \quad (7)$$

where $l^{2D}_j$ and $\bar{l}^{2D}_j$ represent the bone lengths of part j on the 2D image plane before and after the transformation, respectively; a and b are the scaling factors mentioned in Eq. 6. A detailed derivation is available in the supplementary materials. It is noteworthy that Eq. 7 implies the 2D widths of vertices on the same part are scaled by the same factor. Therefore, the underlying 3D part width of part j is changed by

$$\bar{w}_j = \frac{\bar{s}}{s} \times ab \cdot \frac{l^{2D}_j}{\bar{l}^{2D}_j} \times w_j. \quad (8)$$

In the equation, s and $w_j$ are the scale factor of the orthographic projection and the 3D part width of part j before the image transformation, whereas $\bar{s}$ and $\bar{w}_j$ are the corresponding values after the transformation. Due to scale ambiguity, $\bar{s}$ is an ambiguous scaling factor that is difficult to directly derive. Therefore, in our training, we only supervise the projected results of the predicted part slice widths on the 2D image plane, without directly supervising their actual values. We hypothesize that the network can learn the best scaling factor $\bar{s}$ using the prior knowledge of human body shape.
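The width updates in Eqs. 7-8 reduce to simple per-part rescaling. A small numeric sketch follows; variable names are assumptions for illustration, and s_new stands for the ambiguous post-transform camera scale discussed above.

```python
# Per-part width updates under an affine augmentation (Eqs. 7-8).
def width_2d_after_transform(w2d_k, a, b, l2d_j, l2d_j_new):
    # Eq. 7: every vertex of part j has its 2D width scaled by the same factor.
    return a * b * (l2d_j / l2d_j_new) * w2d_k

def part_width_3d_after_transform(w_j, a, b, l2d_j, l2d_j_new, s, s_new):
    # Eq. 8: the 3D part width picks up the additional ratio s_new / s.
    return (s_new / s) * a * b * (l2d_j / l2d_j_new) * w_j
```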
3.4 Training Pipeline and Loss Function
The overall training pipeline is illustrated in Fig. 2. First, the input image is transformed using the clothing-preserving image transformation, and the convolutional neural network (CNN) backbone is utilized to process the augmented image and estimate the skeleton (3D keypoints extracted from heatmaps), twist angles and part slice widths. Second, we use these estimated values to reconstruct the pose and shape of the individual. The pose parameters are obtained with inverse kinematics similar to HybrIK (Li et al. 2021), while the shape parameters are retrieved using the semi-analytical algorithm discussed in Sec. 3.2. The final mesh is obtained based on the pose and refined shape parameters.
We employ end-to-end training for the pipeline, and the loss function consists of three components: shape loss, pose loss, and shape-decompose loss. The CNN backbone is supervised by shape loss and pose loss, while the MLP used in the shape reconstruction module is supervised by shape-decompose loss.
Shape Loss. In shape loss, we supervise the part widths predicted by the CNN backbone. Specifically, we require the projection results of the part slice widths and the vertex widths to be close to the target values after data augmentation. K represents the number of vertices in the human mesh model and J represents the number of joints.

$$\mathcal{L}_{shape} = \sum_{j}^{J} \lVert \hat{w}^{2D}_j - \bar{w}^{2D}_j \rVert_2^2 + \mu_0 \sum_{k}^{K} \lVert \hat{w}^{2D}_k - \bar{w}^{2D}_k \rVert_2^2. \quad (9)$$

Pose Loss. Pose loss is designed to supervise the predicted skeleton and twist angle. We adopt the same loss function as HybrIK (Li et al. 2021) and denote it as $\mathcal{L}_{pose}$.
Shape-decompose Loss. Shape-decompose loss ensures that the shape reconstruction module predicts a valid human mesh while best preserving the part slice widths and bone lengths predicted by the CNN backbone. It consists of three loss functions:

$$\mathcal{L}_{decomp} = \mathcal{L}_{bone} + \mathcal{L}_{width} + \mu_1 \mathcal{L}_{reg}, \quad (10)$$

where

$$\mathcal{L}_{bone} = \sum_{j}^{J} \left( \lVert \tilde{x}_j - \hat{x}_j \rVert_1 + \lVert \tilde{l}_j - \hat{l}_j \rVert_1 \right), \quad (11)$$

$$\mathcal{L}_{width} = \sum_{j}^{J} \left( \lVert \tilde{w}_j - \hat{w}_j \rVert_2^2 + \lVert \tilde{w}_j \tilde{l}_j - \hat{w}_j \hat{l}_j \rVert_2^2 \right), \quad (12)$$

$$\mathcal{L}_{reg} = \lVert \tilde{\beta} \rVert_2^2. \quad (13)$$

In the equations, $\tilde{x}_j$, $\tilde{l}_j$, $\tilde{w}_j$ are the keypoint coordinates, the bone length and the part slice widths of part j refined by the MLP in the shape reconstruction module. $\mathcal{L}_{bone}$ and $\mathcal{L}_{width}$ supervise the preservation of the bone lengths and part slice widths respectively, and $\mathcal{L}_{reg}$ regularizes the $\tilde{\beta}$ parameter.
Overall Loss. The overall loss of our pipeline is formulated as

$$\mathcal{L} = \mathcal{L}_{pose} + \mu_2 \mathcal{L}_{decomp} + \mu_3 \mathcal{L}_{shape}. \quad (14)$$
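Below is a compact sketch of how Eqs. 9-14 could be assembled. Batch shapes, the mean reductions, and treating the pose loss as an opaque HybrIK-style term are our assumptions; the paper does not spell these details out.

```python
# Sketch of the ShapeBoost training objectives (Eqs. 9-14).
import torch

def shape_loss(w2d_part_pred, w2d_part_tgt, w2d_vert_pred, w2d_vert_tgt, mu0):
    # Eq. 9: supervise projected part-slice widths (B, J) and vertex widths (B, K).
    return ((w2d_part_pred - w2d_part_tgt) ** 2).sum(-1).mean() \
        + mu0 * ((w2d_vert_pred - w2d_vert_tgt) ** 2).sum(-1).mean()

def decompose_loss(x_ref, x_cnn, l_ref, l_cnn, w_ref, w_cnn, beta_ref, mu1):
    # Eqs. 10-13, written for n = 1 so l and w are taken per part.
    l_bone = (x_ref - x_cnn).abs().sum(-1).mean() \
        + (l_ref - l_cnn).abs().sum(-1).mean()              # Eq. 11
    l_width = ((w_ref - w_cnn) ** 2).sum(-1).mean() \
        + ((w_ref * l_ref - w_cnn * l_cnn) ** 2).sum(-1).mean()  # Eq. 12
    l_reg = (beta_ref ** 2).sum(-1).mean()                  # Eq. 13
    return l_bone + l_width + mu1 * l_reg                   # Eq. 10

def overall_loss(l_pose, l_decomp, l_shape, mu2, mu3):
    return l_pose + mu2 * l_decomp + mu3 * l_shape          # Eq. 14
```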
4 Experiments
4.1 Datasets
We use 3DPW (von Marcard et al. 2018), Human3.6M (Ionescu et al. 2013), COCO (Lin et al. 2014), AGORA (Patel et al. 2021) and the Model Agency Dataset (Choutas et al. 2022) for training. The original Model Agency Dataset contains 94,620 images of 4,419 models, but we only use about one-third of these images in our training due to the unavailability of many images on the Internet. To avoid data bias, the images are sampled following previous work (Choutas et al. 2022). We also follow previous work and use synthetic data to assist network training. The rendering settings are identical to (Sengupta, Budvytis, and Cipolla 2021a).
We evaluate our model on the SSP-3D (Sengupta, Budvytis, and Cipolla 2020) and HBW (Choutas et al. 2022) datasets. The results on the SSP-3D dataset show the model's performance on diverse human body shapes, while the results on the HBW dataset indicate the model's performance on images of people wearing different clothing.

4.2 Comparison with the State-of-the-art
We evaluate the performance of different methods on the SSP-3D and HBW test and validation datasets. Following previous work, on the SSP-3D dataset, we use PVE-T-SC, a scale-normalized per-vertex error metric, to evaluate the model performance. On the HBW dataset, we report the predicted height (H), chest (C), waist (W), and hip circumference (HC) errors, and the P2P20K errors of different models. All the experiments of our method use part slicing number n = 1 by default unless otherwise stated. For a fair comparison, we also retrain the two best-performing networks (Sengupta, Budvytis, and Cipolla 2021a; Choutas et al. 2022) with the same datasets and settings as our method.

Method | Model | PVE-T-SC ↓
HMR (Kanazawa et al. 2018) | SMPL | 22.9
SPIN (Kolotouros et al. 2019) | SMPL | 22.2
(Sengupta et al. 2020) | SMPL | 15.9
(Sengupta et al. 2021b)† | SMPL | 13.3
(Sengupta et al. 2021a) | SMPL | 13.6
HybrIK (Li et al. 2021) | SMPL | 22.8
LVD (Corona et al. 2022) | SMPL | 26.1
CLIFF (Li et al. 2022b) | SMPL | 18.4
SHAPY (Choutas et al. 2022) | SMPL-X | 19.2
SoY (Sarkar et al. 2023) | SMPL | 15.8
(Ma et al. 2023) | SMPL | 18.8
(Sengupta et al. 2021a)∗ | SMPL | 15.4
SHAPY (Choutas et al. 2022)∗ | SMPL | 12.2
ShapeBoost (Ours) | SMPL | 11.4
ShapeBoost (Ours) | SMPL-X | 12.0

Table 1: Quantitative comparisons with state-of-the-art methods on the SSP-3D test set in mm. Symbol † means using multiple images as input, and symbol ∗ means retraining using the same training setting as our method.

Tab. 1 shows that our method surpasses previous works on the SSP-3D dataset, which shows that our method can deal with diverse human body shapes much better than previous methods. Tab. 2 and Tab. 3 show the performance on the HBW test and validation datasets. On the HBW test dataset, our method achieves comparable results with previous SOTA methods and predicts more accurate waist and hip circumferences. On the HBW validation set, our method outperforms previous SOTA methods. These results prove that our method can deal with diverse human clothing better than previous methods. Qualitative results are provided in Fig. 5.

Method | H | C | W | HC | P2P20K
SPIN | 59 | 92 | 78 | 101 | 29
Sengupta et al. 2020 | 135 | 167 | 145 | 102 | 47
TUCH | 58 | 89 | 75 | 57 | 26
Sengupta et al. 2021a | 82 | 133 | 107 | 63 | 32
CLIFF | – | – | – | – | 27
SHAPY | 51 | 65 | 69 | 57 | 21
ShapeBoost (SMPL) | 66 | 63 | 58 | 47 | 25
ShapeBoost (SMPL-X) | 68 | 69 | 56 | 49 | 22

Table 2: Quantitative comparisons with state-of-the-art methods on the HBW test set in mm.

Method | H | C | W | HC | P2P20K
Sengupta et al. 2021a | 68 | 89 | 111 | 71 | 30
HybrIK | 88 | 82 | 74 | 51 | 33
LVD# | – | 89 | 131 | 87 | 31
SHAPY | 63 | 59 | 85 | 54 | 25
Ma et al. 2023 | 112 | 87 | 133 | 59 | 41
Sengupta et al. 2021a∗ | 72 | 66 | 74 | 49 | 29
SHAPY∗ | 62 | 52 | 72 | 50 | 26
ShapeBoost (SMPL) | 58 | 54 | 72 | 42 | 25
ShapeBoost (SMPL-X) | 61 | 49 | 71 | 49 | 23

Table 3: Quantitative comparisons with state-of-the-art methods on the HBW validation set in mm. Symbol # means using ground truth scale and symbol ∗ means retraining using the same training setting as our method.
4.3 Ablation Study
To demonstrate the effectiveness of different components in our method, we conduct ablation studies on the SSP-3D dataset and the HBW validation set.

Figure 5: Qualitative results on SSP-3D and HBW datasets. From left to right: Input image, (a) Sengupta et al. (Sengupta, Budvytis, and Cipolla 2021a) results, (b) SHAPY (Choutas et al. 2022) results, and (c) Our results. Warmer colors mean higher per-vertex error. Experiments on the SSP-3D dataset use the PVE-T-SC metric, and experiments on the HBW dataset use the P2P20K metric.

Shape reconstruction. To analyze the effectiveness and robustness of our new human shape parameterization, we reconstruct body shapes using bone lengths and part slice widths with different reconstruction algorithms under different noise ratios. The results are shown in Tab. 4. All the models are trained on shape parameters sampled from Gaussian distributions and tested on 500 different body shapes obtained from the AMASS dataset (Mahmood et al. 2019). The "Hybrid" algorithm means using the semi-analytical algorithm, the "Analytical" algorithm means solely employing the analytical algorithm, and the "NN" algorithm means directly using the neural network without analytical steps. From the first three lines in Tab. 4, we observe that our proposed semi-analytical algorithm achieves the lowest error, especially when the noise ratio is small. Additionally, when the noise is subtle, the parameterizations using different part slicing numbers (n = 1, 2, 3) all achieve an acceptably low error. When the noise ratio is large, the error decreases with larger n. Thus, we can conclude that our semi-analytical method accurately reconstructs human shape, and a larger n makes it more robust to noise.

V2V Error (mm) ↓
n | Algo. | 0% noise | 1% noise | 2% noise | 5% noise
1 | Hybrid | 0.69 | 2.30 | 5.95 | 8.83
1 | Analy. | 6.14 | 6.59 | 8.99 | 12.34
1 | NN | 1.82 | 2.99 | 6.20 | 8.98
2 | Hybrid | 0.58 | 2.01 | 5.40 | 8.21
3 | Hybrid | 0.65 | 1.93 | 5.00 | 7.63

Table 4: Ablation experiments of reconstructing shape using our new shape parameterization in mm.

Shape estimation from images. We also experiment using different parameterizations for estimating human body shapes from RGB images. Tab. 5 provides a comparison of the results obtained using the direct shape parameterization (β) (Li et al. 2021) with our novel parameterization utilizing n = 1 and n = 2. We use image augmentation in the training. Since it is hard to find a ground truth β for augmented images, we use the 2D coordinates of vertices as supervision. We find that using our new parameterization yields better results, but a larger n does not improve performance. The reasons are that (1) the parameterization with n = 1 already achieves a small shape reconstruction error, and (2) using a larger n complicates the regression task for the CNN backbone, resulting in a reduction in the accuracy of predicting part slicing widths.
The effectiveness of data augmentation. We also make ablation studies with different training data quantitatively. The results are shown in Tab. 6. When the data augmentation module is not used, the performance of our model drops on both the HBW and SSP-3D datasets.
This shows the effectiveness of our data augmentation module.

Method | PVE-T-SC | P2P20K
β | 12.3 | 26.0
n = 1 | 11.4 | 25.1
n = 2 | 11.6 | 26.2

Table 5: Ablation experiments of shape estimation from RGB images using different shape parameterizations on the SSP-3D and HBW validation sets in mm.

Method | PVE-T-SC | P2P20K
ShapeBoost (Ours) | 11.4 | 25.1
w/o Augment | 12.1 | 26.5
w/o Augment, w/o Decompose | 12.4 | 27.0

Table 6: Ablation experiments of the data augmentation module on the SSP-3D and HBW validation sets in mm.

5 Conclusion
In this paper, we present ShapeBoost, a new framework for accurate human shape recovery that outperforms the current state-of-the-art methods. This framework exploits a new human shape parameterization that decomposes human shape into bone lengths and the mean width of each part slice. Compared to the existing representation with PCA coefficients, our new method is more flexible and interpretable. Based on the new shape parameterization, a new clothing-preserving data augmentation module is proposed to generate realistic images of various human shapes and the corresponding accurate annotations. Our method randomly augments the body shape without destroying the clothing details. Experiments show that our method achieves SOTA performance for extreme body shapes as well as high accuracy for people in different types of clothing.

Acknowledgments
Cewu Lu is the corresponding author. He is a member of the Qing Yuan Research Institute, Qi Zhi Institute and MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China. This work was supported by the National Key R&D Program of China (No. 2021ZD0110704), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Qi Zhi Institute, and Shanghai Science and Technology Commission (21511101200).

References
Agarwal, A.; and Triggs, B. 2005. Recovering 3D human pose from monocular images. TPAMI, 28(1): 44–58.
Bertiche, H.; Madadi, M.; and Escalera, S. 2020. CLOTH3D: Clothed 3D humans. In ECCV, 344–359. Springer.
Black, M. J.; Patel, P.; Tesch, J.; and Yang, J. 2023. BEDLAM: A synthetic dataset of bodies exhibiting detailed lifelike animated motion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8726–8737.
Bogo, F.; Kanazawa, A.; Lassner, C.; Gehler, P.; Romero, J.; and Black, M. J. 2016. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In ECCV.
Choi, H.; Moon, G.; and Lee, K. M. 2020. Pose2Mesh: Graph convolutional network for 3D human pose and mesh recovery from a 2D human pose. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VII 16, 769–787. Springer.
Choutas, V.; Müller, L.; Huang, C.-H. P.; Tang, S.; Tzionas, D.; and Black, M. J. 2022. Accurate 3D body shape regression using metric and semantic attributes. In CVPR, 2718–2728.
Corona, E.; Pons-Moll, G.; Alenyà, G.; and Moreno-Noguer, F. 2022. Learned Vertex Descent: A new direction for 3D human model fitting. In ECCV.
Dwivedi, S. K.; Athanasiou, N.; Kocabas, M.; and Black, M. J. 2021. Learning to regress bodies from images using differentiable semantic rendering. In ICCV, 11250–11259.
Fang, H.-S.; Sun, J.; Wang, R.; Gou, M.; Li, Y.-L.; and Lu, C. 2019. InstaBoost: Boosting instance segmentation via probability map guided copy-pasting. In ICCV, 682–691.
Guan, P.; Weiss, A.; Balan, A. O.; and Black, M. J. 2009.
Estimating human shape and pose from a single image. In ICCV, 1381–1388. IEEE.
Hoffmann, D. T.; Tzionas, D.; Black, M. J.; and Tang, S. 2019. Learning to train with synthetic humans. In Pattern Recognition, 609–623. Springer.
Ionescu, C.; Papava, D.; Olaru, V.; and Sminchisescu, C. 2013. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. TPAMI.
Joo, H.; Neverova, N.; and Vedaldi, A. 2021. Exemplar fine-tuning for 3D human model fitting towards in-the-wild 3D human pose estimation. In 3DV.
Kanazawa, A.; Black, M. J.; Jacobs, D. W.; and Malik, J. 2018. End-to-end recovery of human shape and pose. In CVPR.
Kocabas, M.; Athanasiou, N.; and Black, M. J. 2020. VIBE: Video inference for human body pose and shape estimation. In CVPR.
Kocabas, M.; Huang, C.-H. P.; Hilliges, O.; and Black, M. J. 2021. PARE: Part attention regressor for 3D human body estimation. In ICCV, 11127–11137.
Kolotouros, N.; Pavlakos, G.; Black, M. J.; and Daniilidis, K. 2019. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In ICCV.
Kolotouros, N.; Pavlakos, G.; and Daniilidis, K. 2019. Convolutional mesh regression for single-image human shape reconstruction. In CVPR, 4501–4510.
Li, J.; Bian, S.; Liu, Q.; Tang, J.; Wang, F.; and Lu, C. 2023a. NIKI: Neural inverse kinematics with invertible neural networks for 3D human pose and shape estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12933–12942.
Li, J.; Bian, S.; Xu, C.; Chen, Z.; Yang, L.; and Lu, C. 2023b. HybrIK-X: Hybrid analytical-neural inverse kinematics for whole-body mesh recovery. arXiv preprint arXiv:2304.05690.
Li, J.; Bian, S.; Xu, C.; Liu, G.; Yu, G.; and Lu, C. 2022a. D&D: Learning human dynamics from dynamic camera. In ECCV.
Li, J.; Xu, C.; Chen, Z.; Bian, S.; Yang, L.; and Lu, C. 2021. HybrIK: A hybrid analytical-neural inverse kinematics solution for 3D human pose and shape estimation. In CVPR, 3383–3393.
Li, Z.; Liu, J.; Zhang, Z.; Xu, S.; and Yan, Y. 2022b. CLIFF: Carrying location information in full frames into human pose and shape estimation. In ECCV, 590–606. Springer.
Liang, J.; and Lin, M. C. 2019. Shape-aware human pose and shape reconstruction using multi-view images. In ICCV, 4352–4362.
Lin, K.; Wang, L.; and Liu, Z. 2021a. End-to-end human pose and mesh reconstruction with transformers. In CVPR, 1954–1963.
Lin, K.; Wang, L.; and Liu, Z. 2021b. Mesh Graphormer. In ICCV, 12939–12948.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In ECCV.
Loper, M.; Mahmood, N.; Romero, J.; Pons-Moll, G.; and Black, M. J. 2015. SMPL: A skinned multi-person linear model. TOG.
Ma, X.; Su, J.; Wang, C.; Zhu, W.; and Wang, Y. 2023. 3D human mesh estimation from virtual markers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 534–543.
Mahmood, N.; Ghorbani, N.; Troje, N. F.; Pons-Moll, G.; and Black, M. J. 2019. AMASS: Archive of motion capture as surface shapes. In ICCV, 5442–5451.
Moon, G.; and Lee, K. M. 2020. I2L-MeshNet: Image-to-lixel prediction network for accurate 3D human pose and mesh estimation from a single RGB image. In ECCV, 752–768. Springer.
Müller, L.; Osman, A. A.; Tang, S.; Huang, C.-H. P.; and Black, M. J. 2021. On self-contact and human pose. In CVPR, 9990–9999.
Omran, M.; Lassner, C.; Pons-Moll, G.; Gehler, P.; and Schiele, B. 2018. Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In 3DV, 484–494. IEEE.
Patel, P.; Huang, C.-H. P.; Tesch, J.; Hoffmann, D. T.; Tripathi, S.; and Black, M. J. 2021. AGORA: Avatars in geography optimized for regression analysis. In CVPR, 13468–13478.
Pavlakos, G.; Choutas, V.; Ghorbani, N.; Bolkart, T.; Osman, A. A.; Tzionas, D.; and Black, M. J. 2019. Expressive body capture: 3D hands, face, and body from a single image. In CVPR.
Pavlakos, G.; Zhu, L.; Zhou, X.; and Daniilidis, K. 2018. Learning to estimate 3D human pose and shape from a single color image. In CVPR, 459–468.
Pumarola, A.; Sanchez-Riera, J.; Choi, G.; Sanfeliu, A.; and Moreno-Noguer, F. 2019. 3DPeople: Modeling the geometry of dressed humans. In ICCV, 2242–2251.
Ruiz, N.; Bellver, M.; Bolkart, T.; Arora, A.; Lin, M. C.; Romero, J.; and Bala, R. 2022. Human body measurement estimation with adversarial augmentation. In 2022 International Conference on 3D Vision (3DV), 219–230. IEEE.
Sarkar, R.; Dave, A.; Medioni, G.; and Biggs, B. 2023. Shape of You: Precise 3D shape estimations for diverse body types. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3519–3523.
Sengupta, A.; Budvytis, I.; and Cipolla, R. 2020. Synthetic training for accurate 3D human pose and shape estimation in the wild. In British Machine Vision Conference (BMVC).
Sengupta, A.; Budvytis, I.; and Cipolla, R. 2021a. Hierarchical kinematic probability distributions for 3D human shape and pose estimation from images in the wild. In ICCV, 11219–11229.
Sengupta, A.; Budvytis, I.; and Cipolla, R. 2021b. Probabilistic 3D human shape and pose estimation from multiple unconstrained images in the wild. In CVPR, 16094–16104.
Tung, H.-Y.; Tung, H.-W.; Yumer, E.; and Fragkiadaki, K. 2017. Self-supervised learning of motion capture. NeurIPS, 30.
Varol, G.; Ceylan, D.; Russell, B.; Yang, J.; Yumer, E.; Laptev, I.; and Schmid, C. 2018. BodyNet: Volumetric inference of 3D human body shapes. In ECCV, 20–36.
Varol, G.; Romero, J.; Martin, X.; Mahmood, N.; Black, M. J.; Laptev, I.; and Schmid, C. 2017. Learning from synthetic humans. In CVPR, 109–117.
von Marcard, T.; Henschel, R.; Black, M. J.; Rosenhahn, B.; and Pons-Moll, G. 2018. Recovering accurate 3D human pose in the wild using IMUs and a moving camera. In ECCV.
Weitz, A.; Colucci, L.; Primas, S.; and Bent, B. 2021. InfiniteForm: A synthetic, minimal bias dataset for fitness applications. arXiv preprint arXiv:2110.01330.
Zhang, H.; Tian, Y.; Zhang, Y.; Li, M.; An, L.; Sun, Z.; and Liu, Y. 2022. PyMAF-X: Towards well-aligned full-body model regression from monocular images. arXiv preprint arXiv:2207.06400.
Enhancing Job Recommendation through LLM-Based Generative Adversarial Networks Yingpeng Du1,4*†, Di Luo2*, Rui Yan2, Xiaopei Wang3†, Hongzhi Liu4†, Hengshu Zhu5, Yang Song6†, Jie Zhang1 1School of Computer Science and Engineering, Nanyang Technological University, Singapore 2Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 3School of Languages and Communication Studies, Beijing Jiaotong University, Beijing, China 4School of Software and Microelectronics, Peking University, Beijing, China 5Career Science Lab, BOSS Zhipin, Beijing, China 6NLP Center, BOSS Zhipin, Beijing, China [email protected],di [email protected],[email protected],[email protected],[email protected], [email protected],[email protected],[email protected] Abstract Recommending suitable jobs to users is a critical task in online recruitment platforms. Existing job recommendation methods often encounter challenges such as the low quality of users’ resumes, which hampers their accuracy and practical effectiveness. With the rapid development of large language models (LLMs), utilizing the rich knowledge encapsulated within them, as well as their powerful reasoning capabilities, offers a promising avenue for enhancing resume completeness to achieve more accurate recommendations. However, directly leveraging LLMs is not a one-size-fits-all solution, as it may suffer from issues like fabricated generation and few-shot problem, both of which can degrade the quality of resume completion. In this paper, we propose a novel LLMbased GANs Interactive Recommendation (LGIR) approach for job recommendation. To alleviate the limitation of fabricated generation, we not only extract users’ explicit properties (e.g., skills, interests) from their self-description but also infer users’ implicit characteristics from their behaviors for more accurate and meaningful resume completion. Nevertheless, some users still suffer from the few-shot problem, which arises due to scarce interaction records, leading to limited guidance for high-quality resume generation. To address this issue, we propose aligning unpaired low-quality resumes with high-quality generated counterparts using Generative Adversarial Networks (GANs), which can refine resume representations for better recommendation results. Extensive experiments on three large real-world recruitment datasets demonstrate the effectiveness of our proposed method. Introduction Job recommendation is an essential task in today’s online recruitment platforms, significantly improving recruitment efficiency by accurately matching job seekers (aka users) with suitable positions. Although existing job recommendation methods (Le et al. 2019; Jiang et al. 2020; Hou et al. *These authors contributed equally. †Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 2022) have achieved considerable success in recent years, they still face significant challenges, such as the low quality of user resumes and interference from the few-shot problem (Gope and Jain 2017), hindering their practical accuracy and efficiency. For example, some users may not invest sufficient effort in crafting their resumes or lack comprehensive self-awareness, resulting in incomplete and low-quality descriptions of their skills and job preferences. 
Inspired by the recent remarkable capabilities and rapid development of large language models (LLMs), it is intuitive to utilize their extensive knowledge, powerful text comprehension, and reasoning abilities to improve and rectify low-quality resumes.
However, simply leveraging LLMs (Touvron et al. 2023; Brown et al. 2020) to enhance user resumes is not a one-size-fits-all solution for job recommendation. Due to the widespread fabrications and hallucinations within LLMs (Zhang et al. 2023), it is difficult to generate high-quality resumes without users' reliable interactive information. Fig. 1 (A) illustrates the resume generation process for a user using simple completion with a well-known LLM, ChatGPT. It underscores that the generated results often contain excessive unrelated and fabricated information, rendering them unsuitable for recommendation. To alleviate this fabricated generation, we propose exploring users' interactive behaviors with recommender systems to mine their relevance to users' abilities and preferences, thereby assisting the LLMs in better profiling users for resume completion. Specifically, users generally possess particular job skills, residential addresses, and educational backgrounds, which make them interact with jobs that contain corresponding responsibilities, locations, and levels. As a result, we propose inferring users' implicit characteristics (e.g., skills, preferences) from their interaction behaviors to help LLMs profile users and generate high-quality resumes.

Figure 1: The difficulty and motivation behind leveraging LLMs for job recommendation. (A) Extract valuable information beyond users' original resumes to alleviate fabrication of LLMs. (B) Align the low-quality generation of few-shot users with high-quality generation.

Although exploring users' interactive behaviors can help LLMs better profile users, they may still suffer from the few-shot problem, limiting the quality of resume completion for certain users. Specifically, users with few interaction records (aka the long-tail effect) still face challenges with fabrications and hallucinations within LLMs, as they lack sufficient interactive guidance for high-quality resume completion. To alleviate this problem, we propose aligning the generated resumes of few-shot users with the high-quality resumes of users who have extensive interaction records, as shown in Fig. 1 (B). Due to the lack of paired high-quality and low-quality resumes for a specific user in real-world scenarios, we introduce a Generative Adversarial Networks (GANs) (Goodfellow et al.
2020) based method to align the unpaired resumes across different users, which can refine the generated resumes of few-shot users. Specifically, the generator aims to improve the representations of low-quality resumes by fooling the discriminator, while the discriminator strives to distinguish between the refined representations and the high-quality representations as effectively as possible. Through iterative training of GANs, the generator plays a crucial role in refining the representations of low-quality resumes, which can bridge the gap between few-shot users and many-shot users to enhance the quality of resume completion for all users. To sum up, we propose an LLM-based GANs Interactive Recommendation (LGIR) method for job recommendation in this paper, which aims to address the limitations of fabricated generation in LLMs and the few-shot problem that degrades the quality of resume completion. To tackle the fabricated generation limitation, we extract valuable information beyond users’ resumes. Specifically, we not only extract users’ explicit properties from their self-descriptions but also infer their implicit characteristics from their behaviors, leading to accurate and meaningful resume completion. To mitigate the few-shot problem that restricts the quality of generated resumes, we propose a transfer representation learning strategy using GANs, which align low-quality resumes with unpaired high-quality resumes, enhancing the overall quality. We evaluate our model on three real-world datasets, demonstrating consistent superiority over state-ofthe-art methods for job recommendation. Ablation experiments and a case study further substantiate the motivations and effectiveness behind our proposed method. Related Work Job Recommendation. Job recommendation has gained significant popularity in online recruitment platforms and can be primarily categorized into three groups: behaviorbased methods, content-based methods, and hybrid methods. Behavior-based methods have been developed to leverage user-item interaction for job recommendation. Collaborative filtering based methods (Koren, Bell, and Volinsky 2009) have gained popularity among these approaches, which can be modified with deep neural networks (He and Chua 2017) and graph models (He et al. 2020) for more accurate recommendation results. Content-based methods utilize the rich semantic information present in resumes and job requirements using text-matching strategies or text enhancement techniques, such as CNN (Zhu et al. 2018), RNN (Qin et al. 2018), and memory networks (Yan et al. 2019). Hybrid methods combine the strengths of both behavior-based and content-based approaches. Specifically, they construct the embeddings of users and jobs based on their text content and leverage user-item interaction for job recommendation (Le et al. 2019; Jiang et al. 2020; Hou et al. 2022). However, these methods often suffer from the low quality of users’ resumes. To address this challenge, we propose utilizing the rich knowledge and reasoning abilities encapsulated within LLMs to improve the resume quality for recommendation. Large Language Models for Recommendation. Large Language Models (LLMs) (Touvron et al. 2023; Brown et al. 2020) are revolutionizing recommendation systems (Wu et al. 2023). Due to their extensive assimilation of knowledge (Liu, Zhang, and Gulla 2023), LLMs have the distinct advantage of comprehending contextual information (Geng et al. 2022), leading to improved recommendation accuracy and user satisfaction. 
They offer potential solutions to the cold-start problem with zero-shot recommendation capabilities (Sileo, Vossen, and Raymaekers 2022). Their capacity to generate language-based explanations also enhances recommendation interpretability (Gao et al. 2023). However, challenges arise in their direct application, including knowledge gaps and a tendency for unrealistic results (Liu et al. 2023). Recent studies utilize constructive prompts and in-context learning to control and direct LLM outputs, with methods such as (Hou et al. 2023)'s sequential recommendation prompts, (Gao et al. 2023)'s interactive recommendation framework, and (Wang et al. 2023)'s generative recommendation framework. Some also harness user behavior history for guidance (Chen 2023). Nonetheless, pervasive long-tail issues remain challenges, which can further exacerbate the hallucination problem of LLMs. To address these, our work uniquely employs Generative Adversarial Networks (GANs) to enhance representations of few-shot users, aiming to improve recommendation quality.

Figure 2: The architecture of the LLM-based GANs Interactive Recommendation (LGIR), which mainly contains the interactive resume completion method for resume generation by LLMs and the GANs-based method for resume quality alignment.

Problem Definition
Let C = {c_1, · · · , c_N} and J = {j_1, · · · , j_M} represent the sets of N users and M jobs, respectively. Each user or job is associated with a text document describing the resume or job requirement. Specifically, we denote the resume of user c as T_c = [w_1, · · · , w_{l_c}], where w_i is the i-th word in the resume and l_c denotes the length of resume T_c. Similarly, the requirement description of job j with length l_j is denoted as T_j = [w_1, · · · , w_{l_j}]. We suppose to know the interaction records between users and jobs, which can be represented as an interaction matrix R ∈ R^{N×M}, where R_{ik} = 1 if user c_i has interacted with the job j_k, and R_{ik} = 0 otherwise.
In this paper, our goal is to recommend appropriate jobs to users. Formally, we propose learning a matching function g(c_i, j_k) based on the interaction records R and the documents T. We then make the top-K recommendation based on this matching function.
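Given any learned matching function g, this final step is a per-user ranking. A tiny sketch follows, assuming the score matrix has already been computed by the model described next.

```python
# Top-K recommendation from a precomputed score matrix.
import numpy as np

def top_k_jobs(score_matrix: np.ndarray, k: int) -> np.ndarray:
    # score_matrix: (N, M) with entry [i, k] = g(c_i, j_k).
    # Returns the indices of the K highest-scoring jobs per user.
    return np.argsort(-score_matrix, axis=1)[:, :k]
```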
The Proposed Method
The overall architecture of the proposed method is shown in Fig. 2. Firstly, we propose an interactive resume completion method to alleviate the limitation of the fabricated generation in LLMs. Secondly, we propose a GANs-based aligning method to refine LLMs' representations of low-quality resumes. Finally, we propose a multi-objective learning framework for job recommendation.

An LLM-based Method for Resume Completion
To enhance the quality of users' resumes and thereby improve job recommendations, we propose leveraging the extensive knowledge and superior reasoning abilities of Large Language Models (LLMs). Specifically, we introduce two methods, named Simple Resume Completion (SRC) and Interactive Resume Completion (IRC), aimed at improving the quality of users' resumes for more accurate recommendations.
Simple Resume Completion with LLMs. To improve the quality of users' resumes, we propose completing users' resumes using a prompting approach that directly leverages LLMs' knowledge and generation abilities. Specifically, we construct the prompt for LLMs based on the user's self-description as follows:

$$G_c = \mathrm{LLMs}(\mathrm{prompt}_{SRC}, T_c) \quad (1)$$

where prompt_{SRC} denotes the command that triggers the LLMs to complete the user c's resume based on his/her self-description T_c, the details of which are shown in the upper part of Fig. 3. However, the SRC strategy may suffer from the fabricated and hallucinated generation of LLMs.
Interactive Resume Completion with LLMs. To mitigate the limitation of fabricated generation in LLMs, we propose exploring users' interactive behaviors with recommender systems, thus assisting LLMs to better profile users for resume completion. For instance, users typically have specific job skills, residential addresses, and educational backgrounds, which influence their interactions with job positions containing corresponding responsibilities. Consequently, users' implicit characteristics (e.g., skills, preferences) can be inferred from their interaction behaviors for more accurate and meaningful resume completion. Specifically, we adopt a particular prompting approach for resume completion by LLMs, with consideration of both the user's self-description and his/her interactive behaviors:

$$G_c = \mathrm{LLMs}(\mathrm{prompt}_{IRC}, T_c, \mathcal{R}_c) \quad (2)$$

where $\mathcal{R}_c = \{T_{j_k} \mid R_{c,j_k} = 1\}$ denotes the requirements of jobs that the user c has interacted with. The details of prompt_{IRC} are shown in the lower part of Fig. 3.

Figure 3: The difference between Simple Resume Completion and Interactive Resume Completion. The SRC prompt template is: "Please make appropriate revisions and improvements based on the user's original resume to generate a concise and clear new resume, and highlight more skills and experience information. The user's resume is: [Resume content]." The IRC prompt template is: "Please make appropriate revisions and improvements based on the user's original resume and job description of interest to generate a concise and clear new resume, highlighting more skills and experience information. The user's resume is: [Resume content]. Job descriptions that the user is interested in are: [Interest Description 1, …, Interest Description K]."

To utilize the user resumes and job requirements, we adopt BERT to encode them into fixed text embeddings W_t ∈ R^d (Yang et al. 2022). Specifically, we first maintain the text order and place a unique token [CLS] before it, then we feed the combined sequence into the SIM-BERT model and use the output of the token [CLS] as the semantic embedding of the descriptive text (e.g., $W_{G_{c_i}} = \mathrm{SIM\text{-}BERT}(G_{c_i})$). Finally, we employ multi-layer perceptrons to encode these semantic embeddings:

$$x_{c_i} = \mathrm{MLP}_{user}([P_i; W_{G_{c_i}}]), \quad (3)$$
$$x_{j_k} = \mathrm{MLP}_{job}([Q_k; W_{T_{j_k}}]), \quad (4)$$

where $G_{c_i}$ and $T_{j_k}$ denote the user c_i's LLM-generated resume and the job j_k's requirement description. $P_i, Q_k \in \mathbb{R}^d$ represent the ID embeddings for user c_i and job j_k, respectively. $\mathrm{MLP}_{user}$ and $\mathrm{MLP}_{job}$ denote multi-layer perceptrons with hidden layers $[2d \to d_{e'} \to d_e]$ and the activation function $\mathrm{Relu}(\cdot) = \max(\cdot, 0)$. $d$, $d_e$ and $d_{e'}$ indicate the dimensions of the hidden layers in the multi-layer perceptrons.
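A minimal sketch of the encoders in Eqs. 3-4 is given below. The frozen [CLS] vectors are assumed to be precomputed with a SIM-BERT checkpoint, and the hidden sizes are placeholders rather than the authors' settings.

```python
# Sketch of the ID-plus-text encoders in Eqs. 3-4.
import torch
import torch.nn as nn

class TextSideEncoder(nn.Module):
    def __init__(self, num_ids, d=768, d_hid=256, d_out=64):
        super().__init__()
        self.id_emb = nn.Embedding(num_ids, d)     # P_i (users) or Q_k (jobs)
        self.mlp = nn.Sequential(                  # [2d -> d_e' -> d_e] with ReLU
            nn.Linear(2 * d, d_hid), nn.ReLU(),
            nn.Linear(d_hid, d_out), nn.ReLU(),
        )

    def forward(self, ids, cls_embedding):
        # cls_embedding: frozen (B, d) [CLS] vectors of G_{c_i} or T_{j_k}.
        return self.mlp(torch.cat([self.id_emb(ids), cls_embedding], dim=-1))
```

One such encoder would be instantiated for users and another for jobs, yielding the representations x_{c_i} and x_{j_k} used in the rest of the pipeline.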
The approach comprises two main components: a classifier designed to detect low-quality resumes, and Generative Adversarial Networks (GANs) employed to align resumes.

Classifier. To detect the low-quality resumes for alignment, we propose a classifier C to distinguish between high-quality and low-quality resumes:

C(x) = σ(W^c_2 · ReLU(W^c_1 · x))    (5)

where W^c_1 ∈ R^{d_c×d_e} and W^c_2 ∈ R^{1×d_c} represent the parameters of the classifier C, defined as Θ_C = {W^c_1, W^c_2}.

Figure 3: The difference between Simple Resume Completion and Interactive Resume Completion. (The SRC prompt template reads: "Please make appropriate revisions and improvements based on the user's original resume to generate a concise and clear new resume, and highlight more skills and experience information. The user's resume is: [Resume content]." The IRC prompt template additionally injects interaction history: "Please make appropriate revisions and improvements based on the user's original resume and job descriptions of interest to generate a concise and clear new resume, highlighting more skills and experience information. The user's resume is: [Resume content]. Job descriptions that the user is interested in are: [Interest Description 1, ..., Interest Description K].")

We posit that users with either extremely few or extremely rich interaction records respectively lead LLMs to generate low-quality and high-quality resumes. To this end, we introduce the cross-entropy loss to train the classifier C on these partial users:

L_C = −E_{(c_i, y_{c_i}) ∼ T_C} [ y_{c_i} · log(ŷ_{c_i}) + (1 − y_{c_i}) · log(1 − ŷ_{c_i}) ]    (6)

where ŷ_{c_i} = C(x_{c_i}) denotes the quality prediction for user c_i's generated resume, and T_C = T_C^↑ ∪ T_C^↓ assembles the users for classifier learning, with T_C^↑ = {(c_i, 1) | Σ_k R_{ik} ≥ κ_1} and T_C^↓ = {(c_i, 0) | Σ_k R_{ik} ≤ κ_2} representing the many-shot and few-shot users. y_{c_i} serves as the ground truth: y_{c_i} = 1 if c_i ∈ T_C^↑ and y_{c_i} = 0 if c_i ∈ T_C^↓. The thresholds κ_1 and κ_2 are used to select the many-shot and few-shot users.

Generator. To improve the resume quality, we introduce a generator G to refine the representations of the low-quality resumes identified by the classifier C. Specifically, the generator G aims to map the low-quality resume representations to their high-quality counterparts:

G(x) = W^g_2 · ReLU(W^g_1 · x)    (7)

where W^g_1 ∈ R^{d_g×d_e} and W^g_2 ∈ R^{d_e×d_g} represent the parameters of the generator G, defined as Θ_G = {W^g_1, W^g_2}.

Discriminator. The principal function of the discriminator is to differentiate between samples originating from two distinct distributions. Specifically, we introduce a discriminator D to discern whether a given resume representation is a product of the generator's refinement or a direct encoding of a high-quality resume:

D(x) = σ(W^d_2 · ReLU(W^d_1 · x))    (8)

where W^d_1 ∈ R^{d_s×d_e} and W^d_2 ∈ R^{1×d_s} represent the parameters of D, defined as Θ_D = {W^d_1, W^d_2}.

Adversarial Learning. To align the representations of the low-quality and high-quality resumes, we engage a generator and a discriminator in a mini-max game (Goodfellow et al. 2020). The discriminator D is responsible for distinguishing samples from the two distributions.
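As a reference point, the three components in Eqs. (5), (7) and (8) can be rendered as small PyTorch modules. This is our illustrative reconstruction under the stated sizes (d_e = 64 and d_c = d_s = d_g = 256), not the authors' released code.

# The classifier C, generator G, and discriminator D as simple MLPs.
import torch.nn as nn

d_e, d_hidden = 64, 256

classifier = nn.Sequential(            # C(x) = sigma(W2^c . ReLU(W1^c . x))
    nn.Linear(d_e, d_hidden), nn.ReLU(),
    nn.Linear(d_hidden, 1), nn.Sigmoid(),
)
generator = nn.Sequential(             # G(x) = W2^g . ReLU(W1^g . x)
    nn.Linear(d_e, d_hidden), nn.ReLU(),
    nn.Linear(d_hidden, d_e),          # maps back into the resume embedding space
)
discriminator = nn.Sequential(         # D(x) = sigma(W2^d . ReLU(W1^d . x))
    nn.Linear(d_e, d_hidden), nn.ReLU(),
    nn.Linear(d_hidden, 1), nn.Sigmoid(),
)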
For the training of D, we aim to maximize the following probability, which determines whether a representation stems from the generator's refinement or from a high-quality generated resume:

max_{Θ_D} L_D = E_{c_{i_1} ∼ T̂_C^↑} [ log D(x_{c_{i_1}}) ] + E_{c_{i_2} ∼ T̂_C^↓} [ log(1 − D(G(x_{c_{i_2}}))) ]    (9)

where T̂_C^↑ and T̂_C^↓ denote the high-quality and low-quality generated resumes detected by the classifier C, respectively. The generator G focuses on refining low-quality resume representations to resemble high-quality ones. For the training of G, we minimize the generator loss by deceiving the discriminator D:

min_{Θ_G} L_G = E_{c_i ∼ T̂_C^↓} [ log(1 − D(G(x_{c_i}))) ]    (10)

Through iterative training of the generator and discriminator in a competitive manner, this adversarial process drives both components to improve, ultimately producing refined low-quality samples that increasingly resemble high-quality ones.

Multi-objective Learning for Recommendation
To exploit the high-quality resume representations for improved recommendation, we utilize the classifier C and generator G to obtain aligned resume representations z_{c_i} for all users, regardless of whether they are few-shot or many-shot users:

z_{c_i} = x_{c_i}, if C(x_{c_i}) ≥ 0.5;  z_{c_i} = G(x_{c_i}), if C(x_{c_i}) < 0.5    (11)

To predict users' behaviors on jobs, we propose a deep model to capture the non-linear and complex relationship between user c_i and job j_k:

R̂_{i,k} = g(c_i, j_k) = W^p · [ z_{c_i} + x_{j_k}; z_{c_i} − x_{j_k}; z_{c_i} ⊙ x_{j_k} ]    (12)

where ⊙ denotes the element-wise product and W^p ∈ R^{1×3d_e} maps the combined representation to a score, i.e., the probability that user c_i will engage with job j_k. For the recommendation target, we adopt the pairwise loss to define the recommendation objective:

L_rec = max_Θ Σ_{(i,j_1,j_2)∈D} log σ(R̂_{i,j_1} − R̂_{i,j_2}) − λ ||Θ||_2    (13)

where the training set D = {(c_i, j_1, j_2)} contains triples in which user c_i gave positive feedback to job j_1 (i.e., R_{i,j_1} = 1) but not to job j_2 (i.e., R_{i,j_2} = 0). Θ denotes all parameters to be learned in the proposed model and λ is the regularization coefficient of the L_2 norm ||·||_2.
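For concreteness, the objectives in Eqs. (9), (10) and (13) can be sketched as follows. This is a hedged reconstruction with our own variable names; x_hi and x_lo denote encoded resumes judged high-/low-quality by the classifier C.

import torch

def d_loss(discriminator, generator, x_hi, x_lo):
    # Eq. (9), negated for a minimizer: D should score real high-quality
    # resumes high and refined low-quality resumes low.
    return -(torch.log(discriminator(x_hi)).mean()
             + torch.log(1.0 - discriminator(generator(x_lo))).mean())

def g_loss(discriminator, generator, x_lo):
    # Eq. (10): G is trained to fool D on refined low-quality resumes.
    return torch.log(1.0 - discriminator(generator(x_lo))).mean()

def bpr_loss(pos_scores, neg_scores):
    # Eq. (13) as a minimization (regularization handled by the optimizer):
    # pairwise ranking over (user, positive job, negative job) triples.
    return -torch.log(torch.sigmoid(pos_scores - neg_scores)).mean()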
Experiment
In this section, we aim to evaluate the performance and effectiveness of LGIR. Specifically, we conduct several experiments to study the following research questions:
• RQ1: Does the proposed method LGIR outperform state-of-the-art methods for job recommendation?
• RQ2: Does LGIR benefit from inferring users' implicit characteristics from their behaviors for more accurate and meaningful resume generation?
• RQ3: Does LGIR benefit from aligning the few-shot resumes with high-quality representations?
• RQ4: How does LGIR achieve state-of-the-art results at the case level?

Experimental Setup
Datasets. We evaluated the proposed method on three real-world data sets, provided by a popular online recruiting platform. These data sets were collected from 106 days of real online logs for job recommendation in the designer, sales, and technology industries, respectively. They contain rich interactions between users and employers, as well as text documents, namely the resumes of the users and the descriptions of the job positions. The characteristics of these data sets are summarized in Table 1.

Dataset   # Users   # Items   # Interactions
Designs   12,290    9,143     166,270
Sales     15,854    12,772    145,066
Tech      56,634    48,090    925,193
Table 1: Statistics of the experimental datasets.

Evaluation Methodology and Metrics. We split the interaction records into training, validation, and test sets equally. To evaluate the performance, we adopted three widely used evaluation metrics for top-n recommendation (Zhao et al. 2022): mean average precision (MAP@n), normalized discounted cumulative gain (NDCG@n) and mean reciprocal rank (MRR), where n was set to 5 empirically. We sampled 20 negative instances for each positive instance from users' interacted and non-interacted records. Experimental results were recorded as the average of five runs with different random initializations of the model parameters.

Baselines. We took the following state-of-the-art methods as the baselines, including content-based methods (i.e., BPJFNN (Qin et al. 2018)), collaborative filtering based methods (i.e., MF (Koren, Bell, and Volinsky 2009) and NCF (He et al. 2017)), hybrid methods (i.e., PJFFF (Jiang et al. 2020), SHPJF (Hou et al. 2022), SGL-text (Wu et al. 2021), LightGCN-text (He et al. 2020), and LightGCN+SRC), and LLM-based methods (i.e., SGPT-BE (Muennighoff 2022), SGPT-ST (Reimers and Gurevych 2019), and SGPT-ST+SRC).

Implementation Details. We adopted ChatGLM-6B (Du et al. 2022) as the LLM in this paper. For a fair comparison, all methods were optimized by the AdamW optimizer with the same latent space dimension (i.e., 64), batch size (i.e., 1024), learning rate (i.e., 5×10^-5), and regularization coefficient (i.e., 1×10^-4). We set d = 768, d_e' = 128, d_e = 64, and d_c = d_s = d_g = 256 for the proposed method. We carefully searched the other special hyper-parameters for the best performance, and early stopping was used with a patience of 50 epochs.

Model Comparison (RQ1)
Table 2 outlines the performance of various job recommendation methods, highlighting the top-2 results for each dataset.

Models          Designs                  Sales                    Tech
                MAP@5   NDCG@5  MRR      MAP@5   NDCG@5  MRR      MAP@5   NDCG@5  MRR
SGPT-BE         0.0712  0.1140  0.2128   0.0526  0.0932  0.1726   0.1464  0.2092  0.3344
SGPT-ST         0.0694  0.1107  0.2077   0.0519  0.0926  0.1714   0.1422  0.2025  0.3289
SGPT-ST+SRC     0.0727  0.1177  0.2185   0.0511  0.0925  0.1719   0.1541  0.2194  0.3442
BPJFNN          0.1415  0.2156  0.3436   0.1138  0.2038  0.3030   0.2018  0.2948  0.4704
MF              0.1914  0.2913  0.4557   0.0887  0.1628  0.2789   0.4359  0.6054  0.7555
NCF             0.2071  0.3230  0.4944   0.1463  0.2670  0.3941   0.4105  0.5706  0.7414
PJFFF           0.1182  0.1855  0.3299   0.0690  0.1255  0.2199   0.2802  0.4040  0.6127
SHPJF           0.1862  0.2875  0.4531   0.1334  0.2436  0.3705   0.3710  0.5189  0.7016
SGL-text        0.2716  0.4309  0.5941   0.1508  0.2712  0.3945   0.4416  0.6230  0.7836
LightGCN-text   0.2664  0.4218  0.5955   0.1629  0.2980  0.4271   0.4676  0.6591  0.8093
LightGCN+SRC    0.2649  0.4189  0.5926   0.1611  0.2939  0.4204   0.4719  0.6661  0.8146
LGIR (ours)     0.2887* 0.4622* 0.6319*  0.1751* 0.3225* 0.4548*  0.5086* 0.7191* 0.8434*
Improvement     6.28%   7.26%   6.11%    7.50%   8.22%   6.49%    7.78%   7.96%   3.54%
Table 2: Performance of the proposed and baseline methods for job recommendation. * indicates that the improvements are significant at the level of 0.01 with a paired t-test.

The conclusions drawn are as follows:
1. Effectiveness of LGIR: The proposed method LGIR consistently surpasses all baseline methods, improving over the best baseline by 6.65%, 7.40%, and 6.42% on the designs, sales, and tech datasets, respectively.
2. Limitations of LLM-only Methods: LLM-based methods (SGPT) perform poorly, indicating that relying solely on textual descriptions is ineffective due to inherent limitations such as meaningless information.
3. Challenges with Hybrid Methods: Hybrid methods such as PJFFF and SHPJF perform inadequately, likely due to the unstructured and varying organization habits of users.
4. Success of GCN-based Methods: GCN-based methods such as LightGCN, which utilize preference encoding, achieve the best performance among the baselines, signifying the importance of combining interactions and text.
5. Simple Resume Completion's Limitations: The simple resume completion (SRC) strategy shows minimal improvement (e.g., LightGCN vs. LightGCN+SRC), revealing that merely leveraging LLMs is not universally effective due to their tendency to generate fabricated content.

Figure 4: Performance comparison of LGIR and the variant IRC for few-shot analysis. (The figure plots MRR against user groups ranked by interaction counts, from the bottom 20% to the full 100%, on the designs and sales datasets.)

Ablation Study (RQ2&3)
To assess the effectiveness of LGIR's module design, we compare it with several special cases:
- BASE: a two-tower text matching model that uses the original self-descriptions of users for recommendation.
- SRC: utilizes the resumes generated with the simple resume completion (SRC) strategy, without GANs-based learning, for job recommendation.
- IRC: leverages the resumes generated with the interactive resume completion (IRC) strategy, but without GANs-based learning for aligning unpaired resumes.
- LGIR: the proposed method, including both the IRC strategy and GANs-based learning for recommendation.

Table 3 shows the performance of these methods, i.e., LGIR, BASE, SRC, and IRC.

Dataset   Method  MAP@5   NDCG@5  MRR
Designs   BASE    0.2627  0.4128  0.5829
          SRC     0.2601  0.4076  0.5781
          IRC     0.2859  0.4560  0.6220
          LGIR    0.2887  0.4622  0.6319
Sales     BASE    0.1617  0.2945  0.4250
          SRC     0.1652  0.3031  0.4331
          IRC     0.1671  0.3065  0.4359
          LGIR    0.1751  0.3225  0.4548
Tech      BASE    0.4994  0.7088  0.8374
          SRC     0.5048  0.7148  0.8435
          IRC     0.5056  0.7153  0.8400
          LGIR    0.5086  0.7191  0.8434
Table 3: Performance of the variants for ablation studies.

From the experimental results, we can draw the following conclusions:
• RQ2: The SRC variant shows limited improvement over BASE, demonstrating that simply leveraging LLMs for job recommendation is not a one-size-fits-all solution. The issues of fabricated and hallucinated generation are addressed by the Interactive Resume Completion (IRC) strategy, which shows substantial improvement over both BASE and SRC. This highlights the necessity of inferring users' implicit characteristics from their behaviors for more accurate resume generation.
• RQ3: The proposed method LGIR significantly outperforms the variants across all data sets, benefiting from the GANs-based learning that aligns the generated resumes of few-shot users with high-quality representations. A further in-depth analysis of the role of GANs follows in the few-shot analysis below.

Figure 5: A real recruitment scenario where a user has two historical interactions, explaining how the model integrates pertinent information from the user's resume and the interacted job descriptions to better reflect the user's abilities. (The figure contrasts the original resume, the Simple Resume Completion output, and the Interactive Resume Completion output against a target job description, highlighting the content relevant to the target job in the resume and in the interaction history. Similarity between two texts s1 and s2 is computed as 2T/[len(s1) + len(s2)], where T is the sum of the lengths of all matching fragments; the scores against the target job are 0.452026 for the original resume, 0.395652 for SRC, and 0.610526 for IRC.)

Few-shot Analysis (RQ3)
The ablation study reveals the strength of LGIR in aligning the generated resumes of few-shot users with high-quality representations. To investigate how LGIR handles the challenges associated with few-shot scenarios, a few-shot analysis was conducted, comparing LGIR with the IRC variant across different shot levels. Users were equally divided into five groups based on their interaction numbers (for example, the group 40% denotes the user set that falls within the 20%-40% ranking range by the number of interactions), and the recommendation performance of LGIR and IRC was compared across these groups. The results in Fig. 4 show that LGIR consistently outperformed IRC in most cases, validating the effectiveness of the GANs-based learning scheme.
In particular, LGIR showed a more pronounced improvement in the groups with fewer interactions, confirming that GANs-based learning can align the resumes of few-shot users with those of users who have rich interaction records. This indicates that LGIR can effectively mitigate the problems associated with few-shot scenarios, which often limit the quality of resume generation.

Case Study (RQ4)
In the real recruitment scenario depicted in Fig. 5, we delve deeper into the outputs of LLMs and explore how they help LGIR achieve state-of-the-art results. The figure presents the user's resume, previous job interactions, the target job description, and two resume completion approaches: Simple Resume Completion (LLMs alone) and Interactive Resume Completion (LLMs guided by the interaction history). We also highlight the content relevant to the target job in the user's resume (in yellow) and interaction history (in blue). The illustration reveals that the user's interaction history contains clues relevant to the target job that are absent from the user's own resume. Using only the user's resume with LLMs results in nonsensical content, reducing the proportion of valuable information in the resume. Conversely, the interactive approach successfully integrates pertinent information and generates resumes that better express the user's abilities, including those they may not have articulated or even recognized. Furthermore, we quantify this by calculating the pairwise similarity between texts, showing that interactive completion improved the similarity to the target job from 0.45 to 0.61, a remarkable 35% enhancement. Therefore, exploiting the interactive behaviors of users helps LLMs accurately capture skills and preferences, contributing to better job recommendation results.
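The similarity score described in Figure 5 (2T/[len(s1) + len(s2)], with T the total length of matching fragments) matches the classic Ratcliff/Obershelp ratio. A minimal sketch using Python's standard difflib, assuming character-level matching over the raw texts (the example strings are ours):

from difflib import SequenceMatcher

def text_similarity(s1: str, s2: str) -> float:
    # SequenceMatcher.ratio() returns 2*T / (len(s1) + len(s2))
    return SequenceMatcher(None, s1, s2).ratio()

resume = "Two years of UI design experience; proficient in Photoshop and Sketch."
target = "UI designer with interface design experience; proficient in Sketch."
print(round(text_similarity(resume, target), 4))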
Conclusion
In this paper, we propose an LLM-based GANs Interactive Recommendation (LGIR) method for job recommendation. To alleviate the fabricated generation of LLMs, we infer users' implicit characteristics from their behaviors for more accurate and meaningful resume completion. To address the few-shot problem encountered during resume generation, we propose a GANs-based method to refine the low-quality resumes of users. The proposed method outperforms state-of-the-art baselines, which demonstrates the superiority of utilizing LLMs with interactive resume completion and alignment for job recommendation. The ablation study highlights the significance of each component within the LGIR framework, and the case study further illustrates its superiority in capturing users' skills and preferences.

Acknowledgements
This research is partially supported by the Agency for Science, Technology and Research (A*STAR) under its RIE 2025 – Industry Alignment Fund – Pre Positioning (IAF-PP) funding scheme (Project No: M23L4a0001). This work is also partially supported by the MOE AcRF Tier 1 funding (RG90/20) awarded to Dr. Zhang Jie. This research is partially supported by the National Natural Science Foundation of China (NSFC Grant No. 62122089).

References
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33: 1877–1901.
Chen, Z. 2023. PALR: Personalization Aware LLMs for Recommendation. arXiv preprint arXiv:2305.07622.
Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; and Tang, J. 2022. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 320–335.
Gao, Y.; Sheng, T.; Xiang, Y.; Xiong, Y.; Wang, H.; and Zhang, J. 2023. Chat-REC: Towards interactive and explainable LLMs-augmented recommender system. arXiv preprint arXiv:2303.14524.
Geng, S.; Liu, S.; Fu, Z.; Ge, Y.; and Zhang, Y. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems, 299–315.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144.
Gope, J.; and Jain, S. K. 2017. A survey on solving cold start problem in recommender systems. In 2017 International Conference on Computing, Communication and Automation (ICCCA), 133–138. IEEE.
He, X.; and Chua, T.-S. 2017. Neural factorization machines for sparse predictive analytics. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 355–364.
He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 639–648.
He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T.-S. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, 173–182.
Hou, Y.; Pan, X.; Zhao, W. X.; Bian, S.; Song, Y.; Zhang, T.; and Wen, J.-R. 2022. Leveraging Search History for Improving Person-Job Fit. In Database Systems for Advanced Applications: 27th International Conference, DASFAA 2022, Virtual Event, April 11–14, 2022, Proceedings, Part I, 38–54. Springer.
Hou, Y.; Zhang, J.; Lin, Z.; Lu, H.; Xie, R.; McAuley, J.; and Zhao, W. X. 2023. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845.
Jiang, J.; Ye, S.; Wang, W.; Xu, J.; and Luo, X. 2020. Learning effective representations for person-job fit by feature fusion. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2549–2556.
Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37.
Le, R.; Hu, W.; Song, Y.; Zhang, T.; Zhao, D.; and Yan, R. 2019. Towards effective and interpretable person-job fitting. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1883–1892.
Liu, J.; Liu, C.; Lv, R.; Zhou, K.; and Zhang, Y. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv preprint arXiv:2304.10149.
Liu, P.; Zhang, L.; and Gulla, J. A. 2023. Pre-train, prompt and recommendation: A comprehensive survey of language modelling paradigm adaptations in recommender systems. arXiv preprint arXiv:2302.03735.
Muennighoff, N. 2022. SGPT: GPT Sentence Embeddings for Semantic Search. arXiv preprint arXiv:2202.08904.
Qin, C.; Zhu, H.; Xu, T.; Zhu, C.; Jiang, L.; Chen, E.; and Xiong, H. 2018. Enhancing person-job fit for talent recruitment: An ability-aware neural network approach. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 25–34.
Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Sileo, D.; Vossen, W.; and Raymaekers, R. 2022. Zero-shot recommendation as language modeling. In European Conference on Information Retrieval, 223–230. Springer.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Wang, W.; Lin, X.; Feng, F.; He, X.; and Chua, T.-S. 2023. Generative recommendation: Towards next-generation recommender paradigm. arXiv preprint arXiv:2304.03516.
Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; and Xie, X. 2021. Self-supervised graph learning for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 726–735.
Wu, L.; Zheng, Z.; Qiu, Z.; Wang, H.; Gu, H.; Shen, T.; Qin, C.; Zhu, C.; Zhu, H.; Liu, Q.; et al. 2023. A Survey on Large Language Models for Recommendation. arXiv preprint arXiv:2305.19860.
Yan, R.; Le, R.; Song, Y.; Zhang, T.; Zhang, X.; and Zhao, D. 2019. Interview choice reveals your preference on the market: To improve job-resume matching through profiling memories. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 914–922.
Yang, C.; Hou, Y.; Song, Y.; Zhang, T.; Wen, J.-R.; and Zhao, W. X. 2022. Modeling Two-Way Selection Preference for Person-Job Fit. In Proceedings of the 16th ACM Conference on Recommender Systems, 102–112.
Zhang, Y.; Li, Y.; Cui, L.; Cai, D.; Liu, L.; Fu, T.; Huang, X.; Zhao, E.; Zhang, Y.; Chen, Y.; et al. 2023. Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219.
Zhao, W. X.; Lin, Z.; Feng, Z.; Wang, P.; and Wen, J.-R. 2022. A revisiting study of appropriate offline evaluation for top-N recommendation algorithms. ACM Transactions on Information Systems, 41(2): 1–41.
Zhu, C.; Zhu, H.; Xiong, H.; Ma, C.; Xie, F.; Ding, P.; and Li, P. 2018. Person-job fit: Adapting the right talent for the right job with joint representation learning. ACM Transactions on Management Information Systems (TMIS), 9(3): 1–17.
Structural Entropy Based Graph Structure Learning for Node Classification
Liang Duan 1,2, Xiang Chen 1,2, Wenjie Liu 1,2, Daliang Liu 1,2, Kun Yue 1,2*, Angsheng Li 2,3
1 Yunnan Key Laboratory of Intelligent Systems and Computing, Yunnan University, Kunming, China
2 School of Information Science and Engineering, Yunnan University, Kunming, China
3 School of Computer Science and Engineering, Beihang University, Beijing, China
{duanl, kyue}@ynu.edu.cn, {chenx, liudl}@mail.ynu.edu.cn, [email protected], [email protected]

Abstract
As one of the most common tasks in graph data analysis, node classification is frequently solved by using graph structure learning (GSL) techniques to optimize graph structures and learn suitable graph neural networks. Most of the existing GSL methods focus on fusing different structural features (basic views) extracted from the graph, but incorporate very little graph semantics, such as hierarchical communities. Thus, they might be insufficient when dealing with graphs containing the noise of real-world complex systems. To address this issue, we propose a novel and effective GSL framework for node classification based on structural information theory. Specifically, we first prove that an encoding tree with minimal structural entropy could contain sufficient information for node classification and eliminate redundant noise via the graph's hierarchical abstraction. Then, we provide an efficient algorithm for constructing the encoding tree to enhance the basic views. Combining the community influence deduced from the encoding tree and the prediction confidence of each view, we further fuse the enhanced views to generate the optimal structure. Finally, we conduct extensive experiments on a variety of datasets. The results demonstrate that our method outperforms the state-of-the-art competitors on effectiveness and robustness.

Introduction
Node classification aims to classify the nodes in a graph with limited labels, which is a fundamental problem in graph analysis and is widely used in many applications (Song, Zhang, and King 2022). The mainstream solution is to train graph neural networks (GNNs) to generate node embeddings for classification (Kipf and Welling 2017). Since the performance of GNNs is highly dependent on the quality of the input graph structure, various techniques of graph structure learning (GSL) have been proposed to enhance the graph structure and fine-tune the GNN parameters for better classification (Sun et al. 2022).

Existing GSL methods mainly extract multiple graph structures (basic views) from the given graph to generate an optimal structure (final view), which should contain the information needed for classification while reducing redundant noise as much as possible (Sun et al. 2023). Despite the success of GSL, most of these methods learn the optimal structure based on the mutual information over different views. However, traditional mutual information is unsuitable for quantifying structural information, and a theoretical analysis of what constitutes the optimal graph structure for node classification is still lacking (Li and Pan 2016; Li et al. 2018). Moreover, many GSL methods focus on combining different but simple structural features of the original graph to improve the performance of GNNs, and rarely consider graph semantics such as hierarchical communities (Zhu et al. 2021).
As a result, these methods might be insufficient when tackling the complex and noisy graphs of real-world systems (Zou et al. 2023). To address these issues, it is essential to develop a novel theoretical principle for measuring the optimal graph structure and to make full use of the structural information to improve GSL for node classification.

Recently, the graph information bottleneck (GIB) has been proposed to optimize node embeddings by extracting the information from both the graph structure and the node features (Wu et al. 2020). GIB provides an interesting theoretical principle for GSL: an optimal graph structure should contain the minimal sufficient information for downstream tasks (Liu et al. 2022). Furthermore, GIB relies on the local-dependence assumption for graph data and enhances the embedding of each node by its neighborhood information. However, real-world graphs usually contain hierarchical communities. This structural information is useful for node classification, since the nodes within the same community are more likely to have the same class label. How to incorporate hierarchical community information into GSL to generate the optimal graph structure for node classification is still an underexplored problem.

In this paper, we propose a structural information theory based GSL framework for node classification. Based on structural entropy (Li and Pan 2016) and GIB, we provide a theoretical principle for GSL to find the optimal graph structure for node classification. We then prove that an encoding tree, as a hierarchical abstraction of a graph, could contain the information for classifying nodes and remove redundant noise by minimizing its structural entropy. We next design an efficient algorithm to construct the encoding tree from each basic view, such that GSL could be guided to optimize the graph structure. To fully use the information in the basic views, we also enhance each basic view by a similarity graph with minimal structural entropy. With the enhanced views, we propose a fusion mechanism to generate the final view based on the community influence from the encoding trees and the prediction confidence of the enhanced views (Liu et al. 2022). We finally prove that an optimal structure could be obtained by maximizing the mutual information between every two encoding trees, and provide a two-fold objective to train our model effectively. To summarize, our main contributions are:
• We propose a novel framework of graph structure learning based on structural information theory for node classification tasks.
• We design an efficient algorithm for constructing the encoding tree, which extracts hierarchical community information and enhances the basic views.
• We provide a community influence based fusion mechanism to generate the optimal graph structure.
• We conduct extensive experiments on a variety of datasets, and the results show the superiority of our proposed method.

Preliminary
In this section, we present the basic concepts of node classification, graph structure learning and structural entropy.

Node Classification
Let G = (V, E) represent a graph, where V = {v_1, ..., v_n} is the set of nodes and E ⊆ V × V is the set of edges. The original graph structure is represented by an adjacency matrix A ∈ R^{n×n}, where a_{ij} ∈ A denotes the weight of the edge between nodes v_i and v_j. All nodes are assigned a node feature matrix X ∈ R^{n×d}, and each x_i ∈ X is a d-dimensional feature vector of node v_i.
Given a small portion of nodes V_L = {v_1, ..., v_q} with labels Y_L = {y_1, ..., y_q | y_i ∈ {1, ..., C}}, the node classification task is to predict the labels Ŷ_U = {ŷ_{q+1}, ..., ŷ_n} for the unlabeled nodes V_U = V \ V_L. At present, the mainstream solution is to build a GNN encoder f(X, A) on the graph structure and node features, and produce low-dimensional node embeddings Z ∈ R^{n×d_z} (d_z ≪ d) for classification (Song, Zhang, and King 2022).

Graph Structure Learning
Given an input graph G, the traditional goal of GSL is to simultaneously learn an optimal graph structure A* and the corresponding node embeddings Z* = f(X, A*) (Zhu et al. 2021). In this work, we focus on graph structure learning for node classification, whose objective can be formulated as

L_gsl = L_cls(Z*, Y_L) + μ L_reg(A*, Z*, A)    (1)

where the first term L_cls is the node classification objective with respect to the given labels Y_L, the second term L_reg imposes constraints on the learned graph structure and node embeddings, and μ ∈ R is a hyperparameter that balances the two terms.

Structural Entropy
Structural entropy is an extension of Shannon entropy that measures the uncertainty of a graph under a strategy of hierarchical partitioning (Li and Pan 2016). The optimal hierarchical structure of a graph, also called an encoding tree, can be generated by minimizing the K-dimensional structural entropy (Zeng, Peng, and Li 2023).

Encoding Tree. The encoding tree T of a graph G(V, E) is defined with the following properties: (1) Each node α ∈ T is associated with a nonempty subset of nodes T_α ⊆ V. Intuitively, for the root node λ, T_λ contains all nodes in G, i.e., T_λ = V; for each leaf node α, T_α contains a single node v ∈ V. (2) For each non-leaf node α ∈ T, its i-th child node is denoted as α⟨i⟩, and the subsets of its child nodes are pairwise disjoint, i.e., T_α = ∪_{i=1}^{m} T_{α⟨i⟩}, where m is the number of α's children. An encoding tree is a hierarchical abstraction of G and is widely used to extract hierarchical community information from G.

One-dimensional Structural Entropy. The one-dimensional structural entropy of G reflects the dynamical complexity of G under random walks, defined as:

H^1(G) = − Σ_{v∈V} (d_v / vol(G)) log_2 (d_v / vol(G))    (2)

where d_v is the sum of the weights of the edges connected to v, and vol(G) = Σ_{v∈V} d_v is the volume of G.

K-dimensional Structural Entropy. Given an encoding tree T with height no more than K, the K-dimensional structural entropy of G is defined as:

H^K(G) = min_T Σ_{α∈T, α≠λ} H^T(G; α)    (3)
H^T(G; α) = −(g_α / vol(G)) log_2 (V_α / V_{α−})    (4)

where g_α is the sum of the weights of the edges from the nodes in T_α to those outside T_α, V_α = Σ_{v∈T_α} d_v is the volume of T_α, and α− is the parent node of α.
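The one-dimensional structural entropy of Eq. (2) is used repeatedly in what follows; a minimal sketch for a weighted undirected graph given as an adjacency matrix (our illustration, not the authors' code):

import numpy as np

def structural_entropy_1d(adj: np.ndarray) -> float:
    deg = adj.sum(axis=1)            # d_v: weighted degree of each node
    vol = deg.sum()                  # vol(G) = sum of all degrees
    p = deg[deg > 0] / vol           # stationary distribution of a random walk
    return float(-(p * np.log2(p)).sum())

# Example: a 4-node path graph; degrees (1, 2, 2, 1), vol(G) = 6
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(structural_entropy_1d(A))      # ~1.92 bits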
Methodology
In this section, we introduce the framework of structural entropy based graph structure learning for node classification and the technical details of each component.

Overview
The framework of our GSL method is shown in Figure 1. We first extract two basic views from the given graph as the input of our model. Then, we build an encoding tree for each basic view. One advantage of the encoding tree is that it could retain the information for classifying nodes while eliminating as much noise as possible by minimizing the structural entropy. We use the encoding tree to train a graph convolutional network (GCN) (Kipf and Welling 2017) on each basic view and generate node embeddings, from which we construct a kNN similarity graph to enhance the basic view. Another advantage of the encoding tree is that the hierarchical community information extracted from the graph could be used for fusing the basic views. Thus, we combine the community influence and the prediction confidence of each enhanced view to obtain the final view. Moreover, we also build an encoding tree for the final view to guide the training of our proposed model, which guarantees that the learned graph structure is minimal and sufficient for node classification.

Figure 1: (a) The framework of our method. (b) Basic view enhancement. (c) Maximization of the mutual information between the graph and the encoding tree. (The figure shows the two basic views with their encoding trees; the GCN and TCN encoders with shared weights and projection heads; the structural-entropy-guided k-selector that builds the kNN similarity graph G_en for each view; the objectives min max I(Z; Y_L) + I(Z; T) applied to each view and to the fused view; and the contrastive loss that maximizes I(T^1; T^2).)

Basic View Selection
Leveraging structural information theory (Li and Pan 2016) to measure the evolution of the graph in GSL, we find that GSL for node classification essentially reduces the uncertainty of the original graph structure. This means that adopting more useful basic views could remove more uncertainty from the graph. Thus, we carefully choose two basic views G^1 and G^2 as the input of our method, following the basic view selection of CoGSL.

Information Flow Constraint
A key challenge of GSL is how to constrain the information flow from the basic views to the final view so as to learn an optimal graph structure for downstream tasks (Zhu et al. 2021). According to GIB, the optimal structure should contain sufficient information for classification while eliminating the noise; it is also called the minimal sufficient structure. We adopt GIB to constrain the information flow by maximizing the mutual information between the node embeddings and the labels, while minimizing the mutual information between the node embeddings and the original graph:

GIB(G, Y; Z*) = max_Z [ I(Z; Y) − β I(Z; G) ]    (5)

where Y is the label set and β > 0 is a hyperparameter. The first term I(Z; Y) can be optimized by a classification loss, but the second term I(Z; G) is intractable to minimize. The traditional solution for Eq. 5 is to sample a subgraph G_s from the input graph to minimize I(Z; G), since G_s has less information than G. Suppose that G_s retains the information of the node labels; then we have

min_Z I(Z; G) ⇔ min_{G_s} H^1(G_s)    (6)

where H^1(·) is the one-dimensional structural entropy. Therefore, the goal of min I(Z; G) is to generate an enhanced graph that contains sufficient information for node classification while reducing its uncertainty (i.e., redundant information or noise) as much as possible. For this purpose, we give the following proposition.
Proposition 1. Given a basic view G and a label set Y_L, the enhanced graph could retain the information for node classification and minimize its uncertainty if the information flow from G to the final view satisfies:

min_{G_s} max_Z I(Z; Y_L) + β I(Z; G_s)    (7)
s.t. H^1(G) > H^1(G_s),  I(G; Y_L) = I(G_s; Y_L)    (8)

The above min-max principle aims to train an encoder such that the mutual information among the node embeddings Z, the labels Y_L and G_s is maximized, while Eq. 8 guarantees that G_s captures the minimal and sufficient information for node classification.

Encoding Tree Construction
In structural information theory, structural entropy is used to measure the uncertainty embedded in a graph (Li et al. 2018). Moreover, an encoding tree is a hierarchical abstraction of a graph, which can represent its hierarchical community partition. Minimizing the uncertainty of a graph can thus be implemented by an encoding tree with minimal structural entropy (Zou et al. 2023), stated as follows.

Proposition 2. The encoding tree T* of G with the minimal structural entropy could retain the information for node classification and eliminate redundant noise in G.

According to Proposition 2, we incorporate the encoding tree into the min-max principle to train GNNs for node classification. Different from previous GSL methods that generate the enhanced graph by graph sampling, we adopt the encoding tree to enhance the graph with a theoretical guarantee. According to Eq. 3, the encoding tree with minimal K-dimensional structural entropy can be found by

T* = argmin_{∀T: height(T) ≤ K} H^T(G)    (9)

However, building an optimal encoding tree is intractable (Zou et al. 2023). For this reason, we design an efficient algorithm for encoding tree construction. Specifically, given a graph G(V, E), let P = {P_1, ..., P_c} be a partition of V, where each P_i ⊂ V is called a community. We define three basic operators as follows:

Definition 1. (Merging operator) Given any two communities P_i and P_j (1 ≤ i < j ≤ c), the merging operator op_m(P_i, P_j) merges P_i and P_j into a new community P_x, i.e., P_x = P_i ∪ P_j, and then removes P_i and P_j from P. After merging, P = {P_1, ..., P_{i−1}, P_{i+1}, ..., P_{j−1}, P_{j+1}, ..., P_c, P_x}.

According to Eq. 4, the change of K-dimensional structural entropy ΔSE_{i,j}^P(G) before and after merging is

ΔSE_{i,j}^P(G) = (1 / vol(G)) [ (V_i − g_i) log_2 V_i + (V_j − g_j) log_2 V_j − (V_x − g_x) log_2 V_x + (g_i + g_j − g_x) log_2 vol(G) ]    (10)

Definition 2. (Compressing operator) Given a graph G and a corresponding partition P, the compressing operator op_c(P) compresses G into a smaller graph by transferring each community P_i ∈ P into a node v'_i, and assigning the weight of the edge between v'_i and v'_j as the sum of the weights of the edges from P_i to P_j.

Definition 3. (Updating operator) Given an encoding tree T and a graph G with partition P, the updating operator op_u(T, P) updates the encoding tree by taking all communities in P as the leaf nodes of T, i.e., inserting P into T and increasing the height of T.

Initially, we adopt each node in the graph as a single community, and then iteratively execute the merging and compressing operators until the updating operator can construct a K-dimensional encoding tree. In the merging operation, we greedily merge the communities with the maximal ΔSE_{i,j}^P(G) until no pair of communities satisfies ΔSE_{i,j}^P(G) > 0, which achieves the minimal structural entropy. The complete procedure is shown in Algorithm 1.

Algorithm 1: Encoding Tree Construction
Input: a graph G, an integer K > 1
Output: an encoding tree T
1: G_1 ← G, T ← an encoding tree with height 1
2: for h = 1 to K do
3:   P_h ← initialize each node in G_h as a community
4:   while True do
5:     P'_i, P'_j ← argmax ΔSE_{i,j}^{P_h}(G_h) by Eq. 10
6:     if ΔSE_{i,j}^{P_h}(G_h) > 0 then
7:       P_h ← op_m(P'_i, P'_j), continue    // Definition 1
8:     else
9:       G_h ← op_c(P_h), break    // Definition 2
10:    end if
11:  end while
12: end for
13: for h = K − 1 down to 0 do
14:   T ← op_u(T, P_h)    // Definition 3
15: end for
16: return T
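The merging criterion of Eq. (10) in Algorithm 1 can be computed directly from the community volumes and cuts. A minimal sketch, assuming g and V denote the cut and volume of a community and vol_G the graph volume (a full greedy pass would repeatedly merge the argmax pair while the change stays positive):

import math

def delta_se(V_i, g_i, V_j, g_j, V_x, g_x, vol_G):
    """Structural entropy reduction of merging communities i and j into x (Eq. 10)."""
    return ((V_i - g_i) * math.log2(V_i)
            + (V_j - g_j) * math.log2(V_j)
            - (V_x - g_x) * math.log2(V_x)
            + (g_i + g_j - g_x) * math.log2(vol_G)) / vol_G

# Example: two triangles joined by a single bridge edge (vol(G) = 14).
# Merging the two triangle communities would increase the entropy, so the
# merge is rejected (Delta SE < 0) and the triangles stay separate.
print(round(delta_se(V_i=7, g_i=1, V_j=7, g_j=1, V_x=14, g_x=0, vol_G=14), 3))
# -> -0.857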
Graph Structure Enhancement
To fully use the information contained in the basic views, we enhance the basic views before generating the final view. For the basic view G^1, we train a GCN encoder f(X, A^1) to generate the node embeddings Z^1. Once the embeddings are available, we use the cosine similarity s^1_{ij} = (z^1_i · z^1_j) / (||z^1_i|| × ||z^1_j||) to measure the relation between nodes v^1_i and v^1_j. Intuitively, a larger s^1_{ij} means a higher probability that v^1_i and v^1_j belong to the same class. It is reasonable to construct a cosine similarity graph to represent the relations among all nodes, but such a similarity graph is unsuitable for node classification since it might be fully connected (i.e., all nodes would belong to one class). Consequently, we build a k-nearest neighbor (kNN) graph G^1_k to enhance the basic view. Based on structural information theory, we can find an optimal value of k by minimizing the one-dimensional structural entropy of G^1_k, i.e., finding the value of k satisfying H^1(G^1_{k−1}) ≥ H^1(G^1_k) ≤ H^1(G^1_{k+1}). Note that the optimal k guarantees that the kNN graph retains the most useful information in the corresponding node embeddings; the value of k is thus selected by the structural entropy and does not need to be provided by users. Combining G^1_k with the basic view G^1, we obtain the following enhanced view:

G^1_en = G^1 + ξ G^1_k    (11)

where ξ ∈ [0, 1] is a combination coefficient. Similarly, we generate the enhanced view G^2_en from G^2.

Final View Generation
An important step in GSL is fusing all basic views to generate the final view (Zhu et al. 2021). Different from previous methods that use an average or attention mechanism (Zhao et al. 2021), we combine the community influence and the prediction confidence to fuse the basic views. First, we define the community influence.

Definition 4. (Community influence) Given an optimal encoding tree T of a graph G, the community influence of a leaf node α in T is

ε_α = H^T(G; α) / Σ_{δ∈T} H^T(G; δ)    (12)

where δ ranges over the nodes on the path from λ to α. Intuitively, H^T(G; α) reflects the activity of the node α in its community T_{α−}, and a larger ε_α indicates a larger influence of α in T_{α−}.

Then, for each node v_i, we combine the community influence and the prediction confidence to measure the importance a^1_i of v_i in the basic view G^1, defined as

a^1_i = (σ(π^1_i) · π^1_i + σ(ε^1_i) · ε^1_i) / (σ(π^1_i) + σ(ε^1_i))    (13)

where σ(·) is an activation function and π^1_i is v_i's prediction confidence in G^1. We use the same prediction confidence as CoGSL, and obtain a^2_i of v_i in G^2 analogously. Next, we normalize the importance and obtain the weights of v_i:

w^1_i = a^1_i / (a^1_i + a^2_i),  w^2_i = a^2_i / (a^1_i + a^2_i)    (14)

Finally, we generate v_i's final view:

G*_i = w^1_i · G^1_{en,i} + w^2_i · G^2_{en,i}    (15)

The above operations are repeatedly executed to fuse all nodes and generate the final view G*.
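A hedged sketch of the node-wise fusion of Eqs. (13)-(15) follows: per-node importance combines the prediction confidence π and the community influence ε, the weights are normalized across the two enhanced views, and the final structure is their per-node convex combination. We take σ to be a sigmoid here, which is an assumption; the names are ours.

import torch

def importance(pi: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    # Eq. (13): pi and eps are per-node vectors of shape (n,)
    s_pi, s_eps = torch.sigmoid(pi), torch.sigmoid(eps)
    return (s_pi * pi + s_eps * eps) / (s_pi + s_eps)

def fuse_views(A1, A2, pi1, eps1, pi2, eps2):
    # A1, A2: (n, n) adjacency matrices of the two enhanced views
    a1, a2 = importance(pi1, eps1), importance(pi2, eps2)
    w1 = (a1 / (a1 + a2)).unsqueeze(1)     # Eq. (14), broadcast over each row
    w2 = 1.0 - w1
    return w1 * A1 + w2 * A2               # Eq. (15), node-wise combination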
Model Training
We now discuss how to instantiate the min-max principle and use the community information to guide the training of our GSL model, as well as how to generate the minimal sufficient structure for node classification.

Minimal Sufficient Structure. We aim to make full use of the hierarchical community information to guide the training of our GSL model, since this information can be decoded from the encoding tree. Based on the definitions of minimal sufficient structure (Liu et al. 2022) and structural entropy (Li and Pan 2016), we have the following proposition.

Proposition 3. Given the enhanced views G^1_en and G^2_en, the final view G*, their encoding trees T^1, T^2 and T*, and the label set Y_L, G* is a minimal sufficient structure guided by community information if the following two principles are satisfied:

I(G^1_en; Y_L) = I(G^2_en; Y_L) = I(G*; Y_L) = H(Y_L)    (16)
max { I(T^1; T^2) + I(T^1; T*) + I(T^2; T*) }    (17)

where H(·) denotes the Shannon entropy. Eq. 16 guarantees that the information of Y_L for node classification is fully contained in G^1_en, G^2_en and G*. Meanwhile, Eq. 17 connects the encoding trees of the basic and final views; maximizing the mutual information among these encoding trees lets them share their community information for generating the minimal sufficient structure.

Training Objective. We design a two-fold objective to optimize the parameters of our model.

(1) Optimizing the parameters of the GCNs over the enhanced and final views to improve the classification accuracy, by the following cross-entropy loss:

L_cls = Σ_{i=1}^{2} L_cross(Π^i, Y_L) + L_cross(Π*, Y_L)    (18)

where Π^1, Π^2 and Π* are the prediction confidences of G^1_en, G^2_en and G*, respectively.

(2) Optimizing the parameters of the basic view enhancers to constrain the information flow, via the min-max principle and the maximization of the mutual information between encoding trees. For the min-max principle in Eq. 7, we adopt the cross-entropy loss to maximize the first term I(Z; Y_L), and design a hierarchical contrastive loss for the second term I(Z; G_s). According to Proposition 2, G_s is replaced by an encoding tree T in our method. Thus, we first provide a tree convolutional network (TCN) to generate the community embeddings from T. The community embedding of a node α in T is the weighted sum of the embeddings of α's children, formally defined as

h_α = Σ_{i=1}^{m} [ (H^T(G; α⟨i⟩) / Σ_{j=1}^{m} H^T(G; α⟨j⟩)) · h_{α⟨i⟩} ]    (19)

where m is the number of α's children; the community embeddings of the leaf nodes in T are the corresponding node embeddings Z. Inspired by InfoNCE (Oord, Li, and Vinyals 2018), we compare the node embeddings with the community embeddings at different levels to make the nodes in the same community have similar embeddings. The corresponding hierarchical contrastive loss is

L_hc(Z; T) = − Σ_{l=2}^{K} θ_l log_2 Σ_{i=1}^{n} [ sim(z_i, h_{(i,l)}) / Σ_{j=1, j≠i}^{n} sim(z_j, h_{(j,l)}) ]    (20)

where θ_l = γ(1 − γ)^l is a coefficient related to the level number l, sim(·) is the cosine similarity, and h_{(i,l)} is the embedding of the level-l community of node v_i in T. For the enhanced view G^1_en, the min-max principle loss is

L^1_mmp = L^1_cross(Π^1, Y_L) + L_hc(Z^1; T^1)    (21)

Similarly, we obtain the losses L^2_mmp and L*_mmp for G^2_en and G* respectively, as well as the total min-max principle loss L_mmp = L^1_mmp + L^2_mmp + L*_mmp.
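A schematic InfoNCE-style rendering of the hierarchical contrast in Eq. (20): at each tree level l, a node embedding is pulled toward the embedding of its own level-l community and pushed away from the other nodes' community embeddings. This sketch simplifies the paper's exact normalization; names, the temperature tau, and defaults are our assumptions.

import torch
import torch.nn.functional as F

def hierarchical_contrastive_loss(z, comm_embs_per_level, gamma=0.5, tau=0.5):
    """z: (n, d) node embeddings; comm_embs_per_level: list over levels
    l = 2..K of (n, d) tensors, where row i holds h_{(i,l)}."""
    loss = 0.0
    for l, h in enumerate(comm_embs_per_level, start=2):
        theta = gamma * (1.0 - gamma) ** l                      # level weight
        sim = F.cosine_similarity(z.unsqueeze(1), h.unsqueeze(0), dim=-1) / tau
        # positives on the diagonal: node i vs its own community embedding
        loss = loss + theta * F.cross_entropy(sim, torch.arange(z.size(0)))
    return loss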
To maximize the mutual information between the two encoding trees T^1 and T^2, we extend Eq. 20 as

L_miet(T^1, T^2) = (1/2) [ L_hc(Z^1; T^2) + L_hc(Z^2; T^1) ]    (22)

Consequently, the loss for the basic view enhancers is

L_ve = L_mmp + ( L_miet(T^1, T^2) + L_miet(T^1, T*) + L_miet(T^2, T*) )    (23)

To train our model effectively, we perform the above two-fold objective alternately and iteratively.

Experimental Study
In this section, we conduct extensive experiments to evaluate the effectiveness and robustness of our method.

Experimental Setup
Datasets. We choose eight open benchmark datasets for the experiments, including (1) the blog graph Polblogs (Pedregosa et al. 2011); (2) website networks from WebKB, Texas and Wisconsin (Bandyopadhyay et al. 2005); (3) citation networks, Citeseer (Kipf and Welling 2017), Wiki-CS (Mernyei and Cangea 2020) and MS Academic (Klicpera, Bojchevski, and Gunnemann 2019); and (4) non-graph datasets, Breast Cancer (Cancer) and Digits (Pedregosa et al. 2011). We construct a kNN graph as the initial adjacency matrix for each non-graph dataset, and adopt the original splits of the training, validation and test sets.

Method        Texas     Wisconsin  Cancer    Digits    Polblogs  Citeseer  Wiki-CS   MS Academic
GCN           54.4±5.6  50.7±1.6   93.5±0.4  90.3±0.5  95.2±0.2  69.7±0.6  71.0±0.8  91.3±0.5
GAT           56.0±1.4  52.1±0.9   93.8±0.9  90.8±1.1  94.3±0.3  71.9±0.1  72.8±0.3  89.3±0.2
GATv2         52.1±0.6  49.4±1.2   95.0±0.2  90.6±0.7  94.7±0.6  71.6±0.1  69.2±1.1  89.4±0.8
GraphSAGE     55.0±5.9  51.3±3.9   92.7±0.4  88.2±0.2  93.7±2.1  70.4±0.8  72.5±0.4  91.2±0.2
DGI           43.8±3.6  49.4±3.1   86.9±1.5  89.1±0.4  91.7±0.8  72.0±0.5  61.0±0.3  90.5±1.2
gCooL         59.0±3.8  52.4±2.8   95.1±0.5  91.3±0.9  95.4±0.3  67.9±1.9  74.1±0.5  91.2±0.4
MGEDE         57.1±1.1  50.0±0.4   94.8±0.1  91.5±0.2  94.9±0.1  68.7±0.2  68.9±0.6  90.1±0.2
IDGL          49.5±9.1  54.3±1.9   94.5±0.6  92.8±0.1  94.4±0.6  72.6±0.4  72.7±0.8  -
Pro-GNN       55.1±3.5  58.0±3.1   93.4±0.6  90.5±0.8  95.0±0.1  67.6±0.4  68.9±0.8  -
GEN           51.1±6.7  54.5±4.1   94.1±0.8  91.8±0.9  95.3±0.4  72.5±0.8  71.5±0.7  91.8±0.5
CoGSL         57.7±3.5  55.7±1.2   94.8±0.2  92.5±0.5  95.5±0.1  72.7±0.3  74.7±0.4  92.1±0.2
SE-GSL        59.2±7.6  58.0±4.0   92.6±0.3  82.7±0.5  95.1±0.5  71.4±1.3  53.2±1.6  90.8±0.1
PROSE         58.4±2.1  58.2±1.9   95.4±0.4  92.8±0.5  95.5±0.4  73.3±0.1  75.0±0.7  91.8±0.2
Ours          61.3±3.0  58.6±1.2   95.8±0.5  93.7±0.3  95.9±0.1  74.1±0.6  75.6±0.3  92.7±0.1
Table 1: Effectiveness comparison on F1-micro (% ± σ). (bold: best, '-': exceeding GPU memory or cannot finish in 12 hours)

Comparison Methods. We compare our method with three categories of node classification methods: (1) classical GNN methods: GCN (Kipf and Welling 2017), GAT (Velickovic et al. 2018), GATv2 (Brody, Alon, and Yahav 2022) and GraphSAGE (Hamilton, Ying, and Leskovec 2017); (2) information theory (IT) based methods: DGI (Velickovic et al. 2019), gCooL (Li, Jing, and Tong 2022) and MGEDE (Yang et al. 2023); and (3) graph structure learning (GSL) based methods: IDGL (Chen, Wu, and Zaki 2020), Pro-GNN (Jin et al. 2020), GEN (Wang et al. 2021), CoGSL (Liu et al. 2022), SE-GSL (Zou et al. 2023) and PROSE (Wang et al. 2023).

Metrics. We evaluate the accuracy of a node classification algorithm by the standard F1-micro metric, ranging from 0 to 100%; a higher score means a more accurate classifier.

Implementation. We implement our method in PyTorch. For a fair comparison, we use the same dimensionality of node embeddings and the same optimizer for all methods (except MGEDE), and set the other parameters to the values recommended in the original papers. All experiments are conducted on a machine with an Intel 13900KF CPU, 128GB RAM and an RTX4090 GPU, running Windows 11. Each test is repeated 10 times, and the average is reported.
Experimental Results
Exp-1: Effectiveness Evaluation. In the first set of tests, we evaluate the effectiveness of our method by comparing it with the other node classification methods. The encoding tree height K is fixed to 3. The results, reported in Table 1, tell us that: (a) our method outperforms the other comparison methods on all datasets; (b) PROSE is the second-best method, and GSL based methods generally perform better than the other two categories, since GSL could enhance the graph structure for classification; (c) gCooL is the best IT based method and GAT is the best classical GNN method. SE-GSL performs worse on Digits and Wiki-CS, since it requires a large amount of labeled training data and is unsuitable for node classification with a small set of labels. By adopting the encoding tree to extract hierarchical community information for model training, our method improves F1-micro over the second-best method PROSE. These results verify the effectiveness of our method.

Exp-2: Attacks on Edges. In the second set of tests, we follow (Liu et al. 2022) to evaluate the robustness of our method under random edge deletions and additions. We choose PROSE (the second-best method), gCooL (the best IT based method) and GAT (the best classical GNN method) for comparison. The curves of 'Ours_v1', 'Ours_v2' and 'Ours_both' are the results when the first, the second, and both basic views of our method are attacked, respectively. The results on Texas and MS Academic, reported in Figure 2, tell us that: (a) our method outperforms the other methods under different perturbations; (b) GSL methods are better than the other methods; (c) all methods become worse as the perturbations increase; (d) 'Ours_both' is competitive with 'Ours_v1' and 'Ours_v2', where only one basic view is attacked. These results verify the robustness of our method.

Exp-3: Impacts of Encoding Tree. In the third set of tests, we analyze the impact of the encoding tree by comparing it with two other community structure extraction methods, kNN and Louvain (Blondel et al. 2008), and by varying the tree height K from 2 to 5. The results, reported in Figure 3, tell us that: (a) the encoding tree is more effective than the other methods at extracting community information; (b) kNN performs as well as Louvain, which means that community information is indeed useful for GSL; (c) the F1-micro scores are stable as K increases, which verifies the insensitivity of our method to the tree height; (d) we fix K = 3 by default for better efficiency, since constructing a high encoding tree is time-consuming.

Figure 2: F1-micro results of different methods under edge attacks. (Panels (a)-(b) show Texas under edge deletion rates of 0-30% and addition rates of 0-75%; panels (c)-(d) show the same settings for MS Academic.)

Figure 3: Impacts of encoding tree. (Panel (a) compares kNN, Louvain and the encoding tree on Cancer, Digits, Citeseer and Wiki-CS; panel (b) varies the tree height K from 2 to 5.)
Fusion | Cancer | Polblogs | Citeseer | Wiki-CS
Average | 95.0 | 95.6 | 73.4 | 74.9
Attention | 95.2 | 95.3 | 71.8 | 70.7
Confidence | 95.3 | 94.7 | 73.3 | 75.2
Ours | 95.8 | 95.9 | 74.1 | 75.6

Table 2: F1-micro results of different fusion mechanisms.

Exp-4: Impacts of Fusion Mechanism In the last set of tests, we evaluate our fusion mechanism by comparing it with three other mechanisms: average, attention and prediction confidence. The results are reported in Table 2 and tell us that: (a) our fusion mechanism performs better than the others on all datasets. (b) 'Average' performs as well as 'Confidence', but 'Attention' performs worse, since it requires a large amount of training data to fine-tune its parameters. By combining the community influence and the prediction confidence, our fusion mechanism outperforms the competitors, which verifies the effectiveness of our method.

Related Work
Node Classification Node classification is a primary task in graph analysis. The mainstream solution is training GNNs to aggregate neighborhood information for better node embeddings, e.g., GCN (Kipf and Welling 2017), GraphSAGE (Hamilton, Ying, and Leskovec 2017), GAT (Velickovic et al. 2018) and GATv2 (Brody, Alon, and Yahav 2022). Some methods incorporate information theory into GNNs, e.g., DGI (Velickovic et al. 2019), gCooL (Li, Jing, and Tong 2022) and MGEDE (Yang et al. 2023). Recently, GSL techniques have been used to enhance node embeddings and have become the dominant solution (Zhu et al. 2021), which motivates our study of GSL based node classification.
Graph Structure Learning GSL simultaneously optimizes the graph structure and node embeddings (Song, Zhang, and King 2022). IDGL (Chen, Wu, and Zaki 2020) recalibrates edge weights by node embedding similarities, Pro-GNN (Jin et al. 2020) treats the graph structure as a trainable parameter in GNN training, and GEN (Wang et al. 2021) optimizes the graph structure with a stochastic block model. Moreover, information theory has been adopted in GSL, e.g., SE-GSL (Zou et al. 2023) adopts structural entropy to enhance connectivity among uncertain nodes, CoGSL (Liu et al. 2022) uses mutual information to learn the minimal sufficient structure, and PROSE (Wang et al. 2023) uses a progressive strategy to learn graph structures. Most of these methods focus on extracting multiple simple structural features and neglect the graph semantics, such as hierarchical community information. Different from these methods, we adopt the encoding tree to hierarchically abstract the graph and enhance the basic views in GSL.
Structural Entropy Guided Neural Network As an advanced theory in graph analysis, structural entropy has gained substantial traction and has been widely used in bioinformatics (Li, Yin, and Pan 2016) and community detection (Liu et al. 2019). Recent works combine structural entropy with neural networks, e.g., HRN (Wu et al. 2022), SR-MARL (Zeng, Peng, and Li 2023) and SEGA (Wu et al. 2023). Although these methods successfully exploit structural entropy to optimize neural networks, how to incorporate this theory into GSL to measure and find the optimal structure for node classification is still understudied, and we are among the first attempts.
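As background for the structural entropy theory referenced above, here is a minimal sketch of the one-dimensional structural entropy of Li and Pan (2016), computed from the degree distribution; the higher-dimensional variants on encoding trees generalize this quantity to hierarchical partitions. The helper name is an illustrative assumption.

```python
import math

def structural_entropy_1d(degrees):
    """One-dimensional structural entropy: H1(G) = -sum_v (d_v/2m) * log2(d_v/2m),
    where d_v is the degree of node v and m is the number of edges."""
    two_m = sum(degrees)  # the sum of all degrees equals 2m
    return -sum(d / two_m * math.log2(d / two_m) for d in degrees if d > 0)

# Toy example: a triangle graph, where every node has degree 2.
print(structural_entropy_1d([2, 2, 2]))  # log2(3) ≈ 1.585
```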
Conclusion
In this work, we propose a structural entropy based approach to improving GSL for node classification. We first prove that an encoding tree with minimal structural entropy could extract hierarchical community information for classification while removing redundant noise in the graph. We then provide a community influence based fusion mechanism to generate the final view. Finally, we efficiently construct encoding trees for all views and apply them to guide the training of our GSL model. Extensive experimental results show the effectiveness and robustness of our method.

Acknowledgments
This work was supported by the Key Program of National Natural Science Joint Foundation of China (U23A20298), National Natural Science Foundation of China (61932002), Program of Yunnan Key Laboratory (202205AG070003) and Yunnan Fundamental Research Project (202301AT070193).

References
Bandyopadhyay, S.; Maulik, U.; Holder, L. B.; Cook, D. J.; and Getoor, L. 2005. Link-based Classification. Advanced Methods for Knowledge Discovery From Complex Data, 189–207.
Blondel, V. D.; Guillaume, J.-L.; Lambiotte, R.; and Lefebvre, E. 2008. Fast Unfolding of Communities in Large Networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10): P10008.
Brody, S.; Alon, U.; and Yahav, E. 2022. How Attentive are Graph Attention Networks? In ICLR.
Chen, Y.; Wu, L.; and Zaki, M. 2020. Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings. In NeurIPS, 19314–19326.
Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive Representation Learning on Large Graphs. In NIPS.
Jin, W.; Ma, Y.; Liu, X.; Tang, X.; Wang, S.; and Tang, J. 2020. Graph Structure Learning for Robust Graph Neural Networks. In SIGKDD, 66–74.
Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR.
Klicpera, J.; Bojchevski, A.; and Günnemann, S. 2019. Predict then Propagate: Graph Neural Networks Meet Personalized PageRank. In ICLR.
Li, A.; and Pan, Y. 2016. Structural Information and Dynamical Complexity of Networks. IEEE Transactions on Information Theory, 62(6): 3290–3339.
Li, A.; Yin, X.; and Pan, Y. 2016. Three-dimensional Gene Map of Cancer Cell Types: Structural Entropy Minimisation Principle for Defining Tumour Subtypes. Scientific Reports, 6(1): 20412.
Li, A.; Yin, X.; Xu, B.; Wang, D.; Han, J.; Wei, Y.; Deng, Y.; Xiong, Y.; and Zhang, Z. 2018. Decoding Topologically Associating Domains with Ultra-low Resolution Hi-C Data by Graph Structural Entropy. Nature Communications, 9(1): 3265.
Li, B.; Jing, B.; and Tong, H. 2022. Graph Communal Contrastive Learning. In WWW, 1203–1213.
Liu, N.; Wang, X.; Wu, L.; Chen, Y.; Guo, X.; and Shi, C. 2022. Compact Graph Structure Learning via Mutual Information Compression. In WWW, 1601–1610.
Liu, Y.; Liu, J.; Zhang, Z.; Zhu, L.; and Li, A. 2019. REM: From Structural Entropy to Community Structure Deception. In NeurIPS, volume 32.
Mernyei, P.; and Cangea, C. 2020. Wiki-CS: A Wikipedia-based Benchmark for Graph Neural Networks. arXiv preprint arXiv:2007.02901.
Oord, A. v. d.; Li, Y.; and Vinyals, O. 2018. Representation Learning with Contrastive Predictive Coding. arXiv preprint arXiv:1807.03748.
Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. 2011. Scikit-Learn: Machine Learning in Python. Journal of Machine Learning Research, 12: 2825–2830.
Song, Z.; Zhang, Y.; and King, I. 2022. Towards an Optimal Asymmetric Graph Structure for Robust Semi-supervised Node Classification. In SIGKDD, 1656–1665.
Sun, Q.; Li, J.; Peng, H.; Wu, J.; Fu, X.; Ji, C.; and Philip, S. Y. 2022. Graph Structure Learning with Variational Information Bottleneck. In AAAI, volume 36, 4165–4174.
Sun, Q.; Li, J.; Yang, B.; Fu, X.; Peng, H.; and Philip, S. Y. 2023. Self-organization Preserved Graph Structure Learning with Principle of Relevant Information. In AAAI, volume 37, 4643–4651.
Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2018. Graph Attention Networks. In ICLR.
Velickovic, P.; Fedus, W.; Hamilton, W. L.; Lio, P.; Bengio, Y.; and Hjelm, R. D. 2019. Deep Graph Infomax. In ICLR.
Wang, H.; Fu, Y.; Yu, T.; Hu, L.; Jiang, W.; and Pu, S. 2023. PROSE: Graph Structure Learning via Progressive Strategy. In SIGKDD, 2337–2348.
Wang, R.; Mou, S.; Wang, X.; Xiao, W.; Ju, Q.; Shi, C.; and Xie, X. 2021. Graph Structure Estimation Neural Networks. In WWW, 342–353.
Wu, J.; Chen, X.; Shi, B.; Li, S.; and Xu, K. 2023. SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning. arXiv preprint arXiv:2305.04501.
Wu, J.; Li, S.; Li, J.; Pan, Y.; and Xu, K. 2022. A Simple Yet Effective Method for Graph Classification. In IJCAI, 3580–3586.
Wu, T.; Ren, H.; Li, P.; and Leskovec, J. 2020. Graph Information Bottleneck. In NeurIPS, 20437–20448.
Yang, Z.; Zhang, G.; Wu, J.; Yang, J.; Sheng, Q. Z.; Peng, H.; Li, A.; Xue, S.; and Su, J. 2023. Minimum Entropy Principle Guided Graph Neural Networks. In WSDM, 114–122.
Zeng, X.; Peng, H.; and Li, A. 2023. Effective and Stable Role-based Multi-Agent Collaboration by Structural Information Principles. In AAAI.
Zhao, J.; Wang, X.; Shi, C.; Hu, B.; Song, G.; and Ye, Y. 2021. Heterogeneous Graph Structure Learning for Graph Neural Networks. In AAAI, volume 35, 4697–4705.
Zhu, Y.; Xu, W.; Zhang, J.; Liu, Q.; Wu, S.; and Wang, L. 2021. Deep Graph Structure Learning for Robust Representations: A Survey. arXiv preprint arXiv:2103.03036.
Zou, D.; Peng, H.; Huang, X.; Yang, R.; Li, J.; Wu, J.; Liu, C.; and Yu, P. S. 2023. SE-GSL: A General and Effective Graph Structure Learning Framework through Structural Entropy Optimization. In WWW.
Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion
Cunhang Fan1, Yujie Chen1, Jun Xue1, Yonghui Kong1, Jianhua Tao2,3*, Zhao Lv1*
1 Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University
2 Department of Automation, Tsinghua University
3 Beijing National Research Center for Information Science and Technology, Tsinghua University
{cunhang.fan, kjlz}@ahu.edu.cn, {e22201148, e21201068, e22201093}@stu.ahu.edu.cn, [email protected]

Abstract
In recent years, knowledge graph completion (KGC) models based on pre-trained language models (PLMs) have shown promising results. However, the large number of parameters and the high computational cost of PLMs pose challenges for their application in downstream tasks. This paper proposes a progressive distillation method based on masked generation features for the KGC task, aiming to significantly reduce the complexity of pre-trained models. Specifically, we perform pre-distillation on a PLM to obtain a high-quality teacher model, and compress the PLM network to obtain multi-grade student models. However, traditional feature distillation suffers from the limitation of having a single representation of information in the teacher model. To solve this problem, we propose masked generation of teacher-student features, which contain richer representation information. Furthermore, there is a significant gap in representation ability between teacher and student. Therefore, we design a progressive distillation method to distill student models at each grade level, enabling efficient knowledge transfer from teachers to students. The experimental results demonstrate that the model in the pre-distillation stage surpasses the existing state-of-the-art methods. Furthermore, in the progressive distillation stage, the model significantly reduces the model parameters while maintaining a certain level of performance. Specifically, the model parameters of the lowest-grade student model are reduced by 56.7% compared to the baseline.

Introduction
Knowledge graphs (KGs) are graph-structured knowledge bases, typically composed of triples (head entity, relation, tail entity), abbreviated as (h, r, t). Well-known examples include YAGO (Suchanek, Kasneci, and Weikum 2007), Freebase (Bollacker et al. 2008), and Wikidata (Vrandečić and Krötzsch 2014). KGs have proved to be useful in various downstream tasks such as intelligent question answering (Jia et al. 2021; Saxena, Tripathi, and Talukdar 2020), recommendation systems (Wang et al. 2019; Cao et al. 2019), and semantic search (Xiong, Power, and Callan 2017; Berant and Liang 2014). Despite the significant advances that KGs have enabled for various applications, they still suffer from the problem of incompleteness as the information in the real world continues to grow. Therefore, for the automatic construction of KGs, knowledge graph completion techniques are crucial.
Existing knowledge graph completion (KGC) methods can generally be divided into two categories: structure-based and description-based methods. Structure-based methods use the topology and triple structure information of the knowledge graph to represent feature vectors of entities and relations; examples include TransE (Bordes et al. 2013), ConvE (Dettmers et al. 2018), and R-GCN (Schlichtkrull et al. 2018).
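As a concrete illustration of the structure-based family, the sketch below scores a triple with the TransE translation distance mentioned above; the embedding dimensionality and random initialization are illustrative assumptions, not values from the paper.

```python
import torch

def transe_score(h, r, t):
    # TransE models a valid triple (h, r, t) as a translation h + r ≈ t,
    # so a smaller L2 distance means a more plausible triple.
    return -torch.norm(h + r - t, p=2, dim=-1)

# Toy usage with randomly initialized 50-dimensional embeddings.
emb_dim = 50
h, r, t = (torch.randn(emb_dim) for _ in range(3))
print(transe_score(h, r, t))
```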
Description-based methods use pre-trained language models and introduce semantic descriptions of entities and relations to learn representations; commonly used models include KG-BERT (Yao, Mao, and Luo 2017), PKGC (Lv et al. 2022), and LP-BERT (Li et al. 2022). It is evident that with the rise of pre-trained language models (PLMs), description-based methods have gradually taken the lead. By utilizing entity and relation semantic descriptions as auxiliary information and deeply mining the latent knowledge in PLMs, description-based methods solve the inductive KGC tasks that structure-based methods cannot handle, while achieving significant improvements on transductive KGC tasks.
However, while description-based approaches improve performance, they also bring the problems of large parameter counts and high computational costs, limiting their application in downstream tasks such as real-time recommendation systems. Therefore, model lightweighting is essential. Knowledge distillation (Hinton, Vinyals, and Dean 2015), which transfers latent knowledge from a large teacher model to a small student model using soft labels, is a common method for model compression. It has been widely applied in the fields of computer vision (Zhao et al. 2022) and speech recognition (Kurata and Audhkhasi 2018). In the field of KGC, there are also related research works (Zhu et al. 2022; Wang et al. 2021b) that employ ensembles of multiple structure-based KGC models as multi-teacher models to transfer knowledge to student models in order to reduce embedding dimensions. However, to our knowledge, description-based KGC methods do not have a corresponding knowledge distillation method, nor can structure-based KGC distillation strategies be directly applied to description-based KGC methods, because the model architectures of the description-based and structure-based KGC methods are too different for a direct migration. Therefore, we believe that description-based methods need a simple and efficient knowledge distillation framework to fill this gap.
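For reference, here is a minimal sketch of the classical soft-label knowledge distillation objective of Hinton et al. just described, which the feature-based method proposed below departs from; the temperature value is an illustrative assumption.

```python
import torch.nn.functional as F

def soft_label_kd_loss(student_logits, teacher_logits, T=4.0):
    """Classical KD: match the student's softened distribution to the teacher's.

    The T*T factor keeps gradient magnitudes comparable across temperatures.
    """
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```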
In this paper, we propose a novel progressive distillation strategy based on masked generation features (PMD), which achieves a substantial reduction in model parameters while minimizing the impact on model performance. Traditional feature distillation only learns the representation information of the input entity set. In contrast, masked generation feature distillation additionally learns the representation information of the inferred entity set through inference generation. This approach addresses the problem of single representation information in the teacher model during traditional feature distillation. However, with a limited parameter budget, it is unclear how the student model can efficiently learn the rich representation information in the teacher model. To address this problem, we propose a progressive distillation strategy. The strategy aims to enhance inter-model migration efficiency through two approaches: gradually decreasing the mask ratio and reducing the number of model parameters. The objective is to align the amount of masked feature information in the teacher model with the representation capability of the student model. Specifically, we divide the progressive distillation strategy into two stages: in the pre-distillation stage, we use the masked generation feature distillation method to enhance the performance of the baseline, which then serves as a teacher model to guide the senior student model; in the progressive distillation stage, we design multi-grade student models and distill them grade by grade. This process focuses on transferring knowledge regarding rich masked generation representations and global triplet information. We conduct extensive experiments on two representative datasets, and the experimental results demonstrate the effectiveness of PMD. The model in the pre-distillation stage achieves state-of-the-art (SOTA) performance on the WN18RR dataset; the model in the progressive distillation stage can reduce the parameter count by up to 56.7% compared to the baseline while maintaining a certain level of performance. Furthermore, we validate the significance of masked generation feature distillation and the progressive distillation strategy through ablation experiments.
Our contributions can be summarized as follows:
• We propose a progressive distillation strategy based on masked generation features, greatly reducing model complexity and filling the gap in the field of knowledge distillation for description-based KGC methods.
• We find that the traditional feature distillation strategy suffers from the problem of a single feature representation of the teacher model, so we propose masked generation feature distillation to motivate the teacher model to transfer rich representation information.
• By conducting extensive experiments on two widely used datasets, WN18RR and FB15K-237, we show that PMD achieves SOTA performance on the WN18RR dataset. The number of parameters of the progressively distilled models can be reduced by up to 56.7% from the baseline.

Related Work
Knowledge Graph Completion In recent years, KGC methods have developed rapidly. The key idea is to map entities and relations in KGs to a continuous vector space as embedding representations. Among them, structure-based methods focus on representing the feature vectors of a triple through the structural information of the triple itself or the topology of the KG. For example, TransE (Bordes et al. 2013) models a triple as a relational translation in Euclidean space; ComplEx (Trouillon et al. 2016) embeds entities and relations in a complex space to deal with asymmetric relations, while RotatE (Sun et al. 2018) models a triple as a relational rotation in complex space. With the development of deep learning, CNN-based and GNN-based approaches have been proposed, such as ConvE (Dettmers et al. 2018) and ConvKB (Dai Quoc Nguyen, Nguyen, and Phung 2018), which use CNNs to capture the local structural information of each triple, and SACN (Shang et al. 2019) and CompGCN (Vashishth et al. 2019), which extract topological information from KGs to represent triples. With the rise of PLMs (BERT (Kenton and Toutanova 2019), GPT (Radford et al. 2018), etc.), a large number of PLM-based KGC methods have emerged, such as KG-BERT (Yao, Mao, and Luo 2017) and StAR (Wang et al. 2021a). To represent entity and relation embeddings, they introduce natural language descriptions of entities and relations as auxiliary information to mine the latent entity-relation knowledge in pre-trained language models (Petroni et al. 2019).
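A minimal sketch of how such description-based methods typically serialize a triple and its textual descriptions into a PLM input; the separator layout and field names are illustrative assumptions rather than the format of any specific model. The example text is taken from the paper's Figure 1.

```python
def serialize_triple(head, head_desc, relation, tail=None, tail_desc=None):
    """Flatten a (head, relation, tail) triple plus entity descriptions into
    plain text that a pre-trained language model can encode."""
    hr_text = f"{head}: {head_desc} [SEP] {relation}"
    if tail is None:
        return hr_text  # query side only, for tail entity prediction
    return f"{hr_text} [SEP] {tail}: {tail_desc}"

print(serialize_triple(
    "Robin Williams", "Robin McLaurin Williams is an American actor.",
    "performance actor film",
    "Dead Poets Society", "Dead Poets Society is a 1989 American drama film."))
```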
Knowledge Distillation Knowledge distillation (KD) (Hinton, Vinyals, and Dean 2015) is one of the most common techniques for model compression and is widely used in computer vision (Zhao et al. 2022; Yang et al. 2022) and natural language processing (Sun et al. 2019; Sanh et al. 2019; Wang et al. 2023). The core idea of KD is to use the soft labels of a large model (the teacher) to guide the learning of a small model (the student). This reduces the computational and storage resource consumption of the model while ensuring performance, thus making the model lighter. Beyond model compression, KD can also improve model performance: recent studies (Abnar, Dehghani, and Zuidema 2020; Kuncoro et al. 2019) have found that KD can transfer inductive biases between neural networks. The self-distillation strategy (Pham et al. 2022) stands out for its effective knowledge transfer, enhancing performance through distillation within the model's own network. Compared with traditional logits distillation, the masked generation feature distillation method we propose ensures that the teacher model transfers knowledge more efficiently, so that the student can learn richer teacher knowledge. Combined with the progressive distillation strategy, it ensures that the model parameters are significantly reduced while preserving model performance as much as possible.

Methodology
In this section, we introduce PMD in detail, as depicted in Figure 1. First, we give a brief overview of knowledge graphs and the definition of the link prediction task. Then, we describe the working principle and implementation details of masked generation feature distillation. Finally, we explain the architecture and underlying principles of the progressive distillation framework.

Figure 1: The overall architecture of PMD. (i) MGFD applies masking operations to input tokens and sets an appropriate masking rate based on the student model's parameter count. (ii) The pre-distillation stage improves the performance of the initial model. (iii) The progressive distillation stage designs multi-grade student models with gradually reduced parameter counts and mask rates. (iv) Each student model is trained under three kinds of supervision as depicted.

Definitions and Notation A KG is a directed relational graph consisting of entities and relations. It can be defined as $G = \{E, R, T\}$, where $E$ is the set of entities, $R$ is the set of relations, and $T = \{(h, r, t)\} \subseteq E \times R \times E$ is the set of triples. The link prediction task aims to complete missing triples based on the existing KG information. Specifically, under the widely adopted entity ranking evaluation protocol, tail entity prediction infers the tail entity given a head entity and a relation; head entity prediction is analogous. In this paper, inverse triples are set up for each triple (Dettmers et al. 2018), so only tail entity prediction needs to be performed in the experiments.
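A minimal sketch of the inverse-triple convention just described: every (h, r, t) is duplicated as (t, r⁻¹, h), so head prediction reduces to tail prediction. The relation-naming scheme is an illustrative assumption.

```python
def add_inverse_triples(triples):
    """Duplicate each (h, r, t) as (t, r + '_inverse', h) so that head entity
    prediction can be answered by tail entity prediction on the inverse relation."""
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + "_inverse", h))
    return augmented

triples = [("Robin Williams", "acted_in", "Dead Poets Society")]
print(add_inverse_triples(triples))
```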
Masked Generation Feature Distillation The BERT model leverages masked language modeling to acquire valuable inductive biases (Zhang and Hashimoto 2021). Inspired by this idea, we propose masked generation feature distillation (MGFD), speculating that the concept of masking can also facilitate the transfer of more inductive biases from the teacher model during knowledge distillation. When triples and their text descriptions pass through deep networks, they acquire higher-order semantic information. In traditional feature distillation, this semantic information consists only of the global semantic information of neighboring tokens and the semantic information of the current token (i.e., the input token set). In contrast, our proposed MGFD not only incorporates the semantic information within the input token set but also exploits the inferential operations of deep networks to obtain semantic information from the inferred token set. This enrichment leads to a more comprehensive representation of the generated features, enabling the student model to learn more abundant representations. Consequently, this approach addresses the problem of inefficient knowledge transmission from the teacher model in the context of KGC tasks.
Specifically, during the data input stage, we apply a masking operation to the input data of the teacher model and the student model, which includes triples and their text descriptions, with a predetermined masking rate determined by the size of the teacher model's parameters. The masked tokens are encoded by the teacher model, resulting in masked feature vectors that encapsulate rich semantic information. Meanwhile, during training, the student model encodes the masked tokens and generates corresponding student feature vectors at the masked positions. Ultimately, the two feature vectors are trained with the Mean Squared Error (MSE) to progressively align the student's feature vectors with the teacher's. This facilitates the transfer of knowledge from the teacher model to the student model, enabling efficient knowledge migration. Through the above process, the student model learns rich representation information and achieves efficient knowledge transfer from the teacher model. The formula is as follows:

$$\text{Mask}(F_S \mid \lambda) \longrightarrow \text{Mask}(F_T \mid \lambda) \quad (1)$$

where $F_S$ is the student feature vector, $F_T$ is the teacher feature vector, $\lambda$ is the masking rate of the input sequence, Mask is a masking operation on part of the input sequence, and $\longrightarrow$ denotes the learning process.

$$\mathcal{L}_{MGFD} = \text{MSE}\big(\text{Mask}(F_S \mid \lambda) - \text{Mask}(F_T \mid \lambda)\big) \quad (2)$$

It should be noted that we only calculate the distillation loss on the masked tokens.
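A minimal PyTorch sketch of the masked-feature objective in Eq. (2), assuming both models expose per-token hidden states of the same width; any projection needed to match widths is omitted, and all shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def mgfd_loss(student_feats, teacher_feats, mask):
    """MSE between student and teacher token features, restricted to masked positions.

    student_feats, teacher_feats: (batch, seq_len, dim) hidden states.
    mask: (batch, seq_len) boolean tensor, True where the input token was [MASK].
    """
    s = student_feats[mask]           # gather features at masked positions only
    t = teacher_feats[mask].detach()  # the teacher provides targets; no gradient
    return F.mse_loss(s, t)

# Toy shapes: batch of 2, sequence of 8 tokens, 16-dim features, ~20% mask rate.
s = torch.randn(2, 8, 16, requires_grad=True)
t = torch.randn(2, 8, 16)
mask = torch.rand(2, 8) < 0.2
print(mgfd_loss(s, t, mask))
```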
Progressive Distillation Framework In this section, we present the progressive distillation framework. We are inspired by the idea of layer-by-layer distillation that transfers knowledge from deep models to shallow models (Xue et al. 2023). By gradually reducing the mask ratio and the model parameters, the framework ensures that the teacher model's knowledge is effectively transferred to the student model. This solves the issue of a mismatch between the masked-generation feature information of the teacher model and the expressive capacity of the student model.
Specifically, the progressive distillation framework can be divided into two stages: the pre-distillation stage and the progressive distillation stage. In the pre-distillation stage, we use MGFD to unlock the baseline model's potential and generate a high-quality teacher model. In the progressive distillation stage, we repeatedly compress the teacher model to obtain multi-grade student models. During the distillation process, each higher-grade model is paired with a scoring module and an MGFD module. The scoring module assigns scores to triples based on their plausibility, while the MGFD module gradually reduces the mask rate as the grade decreases.
In the training process, PMD performs knowledge transfer through the following key components. First, for most machine learning tasks, ground-truth labels are crucial and contain a large amount of standard information. The KGC task is no exception: during tail entity prediction, matching the correct tail entity label yields a high score from the scoring module, allowing the model to learn the key triple feature information in the dataset. Therefore, true-label supervision is essential in the distillation process:

$$\text{score} = \cos(e_{hr}, e_t) = \frac{e_{hr} \cdot e_t}{\lVert e_{hr} \rVert \lVert e_t \rVert} \quad (3)$$

$$\mathcal{L}_{CE} = \text{CrossEntropy}(\text{score}, L) \quad (4)$$

where $e_{hr}$ and $e_t$ represent the head-entity-relation feature vector and the tail entity feature vector, respectively, and $L$ is the true label from the training dataset. $\mathcal{L}_{CE}$ is computed with cross-entropy.
Compared to the absolute standard information in the true labels, the latent prior knowledge in the teacher model is also crucial. The teacher model encodes the global features of the triples through the PLM and evaluates the global information of the triples through the scoring module, which contains much implicit knowledge. Through the MSE function, the student model approaches the evaluation results of the teacher model as closely as possible, which stimulates the learning ability of the student model and enables it to learn the prior knowledge in the teacher model:

$$\mathcal{L}_{SCORE} = \text{MSE}(\text{score}_S - \text{score}_T) \quad (5)$$

where $\text{score}_S$ and $\text{score}_T$ denote the outputs of the scoring module for the student and teacher models, respectively.
In addition to the information from the triples in the input sequence, the information of the inferred entity set that matches the triples is also crucial. The MGFD introduced in the previous section precisely addresses this key issue. By performing random masking operations, the teacher model generates corresponding inference feature vectors, which contain representation information of the inferred entity set. This compels the student model to learn additional information, thereby enhancing its expressive capability. The specific formula is given in Eq. (2).
Overall, the total loss comprises the three components above; $\alpha$ and $\beta$ are used to balance the model's ability to capture both the global and the local information of the triples:

$$\mathcal{L} = (1 - \alpha - \beta) \cdot \mathcal{L}_{CE} + \alpha \cdot \mathcal{L}_{SCORE} + \beta \cdot \mathcal{L}_{MGFD} \quad (6)$$
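A compact sketch combining the three supervision signals of Eq. (6). The tensor shapes and default weights are illustrative assumptions; in particular, score_s and score_t are assumed to be (batch, num_candidates) similarity scores and labels the indices of the correct tail entities.

```python
import torch.nn.functional as F

def pmd_total_loss(score_s, score_t, labels, student_feats, teacher_feats, mask,
                   alpha=0.1, beta=0.1):
    """L = (1 - a - b) * L_CE + a * L_SCORE + b * L_MGFD, as in Eq. (6)."""
    loss_ce = F.cross_entropy(score_s, labels)           # ground-truth label supervision
    loss_score = F.mse_loss(score_s, score_t.detach())   # teacher triple-score supervision
    loss_mgfd = F.mse_loss(student_feats[mask],          # masked-feature transfer (Eq. 2)
                           teacher_feats[mask].detach())
    return (1 - alpha - beta) * loss_ce + alpha * loss_score + beta * loss_mgfd
```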
Experiments
Experimental Setup
Datasets. We experiment on two common KGC benchmark datasets, WN18RR (Dettmers et al. 2018) and FB15k-237 (Toutanova and Chen 2015). WN18RR is a subset of WordNet (Miller 1995), while FB15K-237 is a subset of Freebase. WN18RR and FB15K-237 resolve the test set leakage problem of WN18 and FB15K by eliminating inverse relations. The statistics are shown in Table 1.

Dataset | N_e | N_r | N_Train | N_Valid | N_Test
WN18RR | 40,943 | 11 | 86,835 | 3,034 | 3,134
FB15K-237 | 14,541 | 237 | 272,115 | 17,535 | 20,466

Table 1: Statistics of the datasets.

Baselines. We select the representative SimKGC (Wang et al. 2022a) model as the baseline for our distillation framework. The transferability of our framework is demonstrated by achieving performance gains on a strong model whose high metrics are hard to improve further.
Evaluation Metrics. For tail entity prediction, given an (h, r, ?) pair, we predict and rank all candidate entities and obtain the rank of t. The head entity prediction experiment is analogous. We use four automatic evaluation metrics: (1) MRR (Mean Reciprocal Rank), the average inverse rank of the test triples, and (2) Hits@k (k ∈ {1, 3, 10}), the proportion of correct entities ranked in the top k. Higher MRR and Hits@k values indicate better performance.
Hyperparameters. The PLM encoders are initialized as bert-base-uncased. During the distillation process, the numbers of layers in the student models are [12, 9, 6, 3], and the corresponding masking rates are [20%, 10%, 5%, 0%]. The loss weights α and β are searched on a grid with intervals of 0.05 within the range [0, 0.5]. We perform a grid search on the learning rate over {3 × 10⁻⁵, 5 × 10⁻⁵}. We use the AdamW optimizer with linear learning rate decay. The model is trained with a batch size of 512 on 2 RTX 3090 GPUs.
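A small sketch of the two ranking metrics defined above, computed from the rank of the gold tail entity for each test triple; the inputs are assumed to be 1-based ranks.

```python
def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """ranks: 1-based rank of the correct entity for each test triple."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)          # mean reciprocal rank
    hits = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}
    return mrr, hits

# Toy example with four test triples ranked at positions 1, 2, 5 and 40.
print(mrr_and_hits([1, 2, 5, 40]))
```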
The code of our method has been released at https://github.com/cyjie429/PMD.

Main Result
We reuse the TransE results reported by StAR (Wang et al. 2021a) and obtain the experimental data for the other models from the best results in their respective papers.

Method | P | WN18RR (MRR / H@1 / H@3 / H@10) | FB15k-237 (MRR / H@1 / H@3 / H@10)
TransE (Bordes et al. 2013) | - | 24.3 / 4.3 / 44.1 / 53.2 | 27.9 / 19.8 / 37.6 / 44.1
RotatE (Sun et al. 2018) | - | 47.6 / 42.8 / 49.2 / 57.1 | 33.8 / 24.1 / 37.5 / 53.3
ConvE (Dettmers et al. 2018) | - | 43.0 / 40.0 / 44.0 / 52.0 | 32.5 / 23.7 / 35.6 / 50.1
CompGCN (Vashishth et al. 2019) | - | 47.9 / 44.3 / 49.4 / 54.6 | 35.5 / 26.4 / 39.0 / 53.5
KG-BERT (Yao, Mao, and Luo 2017) | 110M | 21.6 / 4.1 / 30.2 / 52.4 | - / - / - / 42.0
MTL-KGC (Kim et al. 2020) | 110M | 33.1 / 20.3 / 38.3 / 59.7 | 26.7 / 17.2 / 29.8 / 45.8
C-LMKE (Wang et al. 2022b) | 110M | 61.9 / 52.3 / 67.1 / 78.9 | 30.6 / 21.8 / 33.1 / 48.4
KGLM (Youn and Tagkopoulos 2022) | 355M | 46.7 / 33.0 / 53.8 / 74.1 | 28.9 / 20.0 / 31.4 / 46.8
LP-BERT (Li et al. 2022) | 355M | 48.2 / 34.3 / 56.3 / 75.2 | 31.0 / 22.3 / 33.6 / 49.0
StAR (Wang et al. 2021a) | 355M | 40.1 / 24.3 / 49.1 / 70.9 | 29.6 / 20.5 / 32.2 / 48.2
Baseline (Wang et al. 2022a) | 210M | 67.1 / 58.5 / 73.1 / 81.7 | 33.3 / 24.6 / 36.2 / 51.0
PMD12 (ours) | 210M | 67.8 / 58.8 / 73.7 / 83.2 | 33.3 / 24.3 / 36.3 / 51.8
PMD9 (ours) | 176M | 67.2 / 58.2 / 73.2 / 82.5 | 32.6 / 23.5 / 35.4 / 51.0
PMD6 (ours) | 133M | 65.9 / 56.5 / 72.3 / 81.9 | 32.4 / 23.4 / 35.4 / 50.7
PMD3 (ours) | 91M | 62.8 / 52.9 / 69.5 / 80.4 | 32.3 / 23.3 / 35.2 / 50.5

Table 2: Main results on the WN18RR and FB15K-237 datasets. The first four rows are structure-based methods; the rest are description-based methods. "12", "9", "6" and "3" refer to the number of layers in the Transformer encoder. "H@k" stands for "Hits@k". "P" stands for parameters; "M" is short for million.

In Table 2, on the WN18RR dataset, PMD12 in the pre-distillation stage achieves a significant improvement on all metrics without increasing the model parameters, reaching the SOTA level. We attribute this to the MGFD module, which allows the student model to learn rich representation information, thereby enhancing the model's robustness and significantly improving its overall performance. For the FB15K-237 dataset, PMD12 shows improvements on Hits@3 and Hits@10, but there is a decrease on Hits@1. We believe this is mainly due to two reasons. Firstly, FB15K-237 has only 14,541 entities and 237 relations, with an average in-degree of 37 per entity. This implies that an entity can correspond to multiple relations. Therefore, in MGFD, masking triples may lead to incorrect inferences by the teacher model due to the multitude of relations, i.e., the teacher gives wrong answers. Consequently, the student model learns incorrect representation information, leading to a decrease in Hits@1. Secondly, the baseline model's performance on FB15K-237 is not satisfactory, which means that the teacher model did not learn strong inductive biases. This slight issue of erroneous transfer occurs as a result.
In Table 2, we also compare the parameters of popular PLM-based KGC methods. From the experimental results, we observe the following. PMD9 surpasses the baseline on all metrics except Hits@1, even with a reduced parameter count. This indicates that the PMD method can preserve model performance while compressing the model network. Moreover, PMD3 achieves better expressive power than commonly used high-parameter models (i.e., 110M, 355M) with a model size of only 91M (i.e., a 56.7% reduction compared to the baseline). This demonstrates the effectiveness of our proposed PMD strategy in maintaining good performance despite a significant reduction in model parameters. The difference in parameters between structure-based and description-based methods still remains substantial. However, the two approaches address distinct problems. Description-based methods, employing pre-trained language models, can tackle inductive KGC tasks, which involve predicting unseen entities. In contrast, structure-based methods are confined to performing KGC tasks within known entity sets.
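A high-level sketch of the grade-by-grade schedule behind the PMD12/PMD9/PMD6/PMD3 models above, pairing each student depth with its mask rate from the hyperparameter settings ([12, 9, 6, 3] layers and [20%, 10%, 5%, 0%]). The assumption that each distilled grade serves as the teacher for the next is our reading of the framework description, and train_one_grade is a hypothetical stand-in for the actual distillation loop.

```python
# Grade-by-grade schedule: the distilled student at one grade becomes
# the teacher for the next, smaller grade, while the mask rate shrinks.
GRADES = [(12, 0.20), (9, 0.10), (6, 0.05), (3, 0.00)]

def progressive_distillation(pretrained_model, train_one_grade):
    # Pre-distillation: MGFD on the full-size model yields the first teacher.
    teacher = train_one_grade(student_layers=12, teacher=pretrained_model,
                              mask_rate=0.20)
    # Progressive distillation: compress grade by grade with decreasing mask rates.
    for layers, mask_rate in GRADES[1:]:
        teacher = train_one_grade(student_layers=layers, teacher=teacher,
                                  mask_rate=mask_rate)
    return teacher  # the final, smallest student model
```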
Ablation
Q1: Is PMD More Efficient Than Common Distillation Strategies? Yes! We choose common and powerful distillation strategies for a comparative experiment; the results are shown in Table 3.

L | Baseline* (MRR / H@1 / H@10) | LKD (MRR / H@1 / H@10) | PKD (MRR / H@1 / H@10) | PMD (MRR / H@1 / H@10)
12 | 66.9 / 58.4 / 81.7 | 67.1 / 58.8 / 81.5 | 67.0 / 58.7 / 81.2 | 67.8 / 58.8 / 83.2
9 | 63.1 / 53.6 / 79.5 | 66.2 / 57.5 / 81.5 | 66.7 / 58.1 / 81.3 | 67.2 / 58.2 / 82.5
6 | 62.1 / 52.7 / 78.5 | 64.5 / 55.3 / 80.8 | 65.4 / 56.4 / 81.4 | 65.9 / 56.5 / 81.9
3 | 61.2 / 52.4 / 74.9 | 61.8 / 51.5 / 79.8 | 62.6 / 52.5 / 80.1 | 62.8 / 52.9 / 80.4

Table 3: Comparison of distillation strategies on the WN18RR dataset. "LKD" (Hinton, Vinyals, and Dean 2015) is logits distillation; "PKD" (Sun et al. 2019) distills the teacher model's intermediate feature layers, performing feature distillation with layer skipping. "L" is the number of layers.

From Table 3, we find that because the representation capacity of the model is constrained by its number of parameters, the model's performance declines when the parameter count drops sharply. Knowledge distillation methods can mitigate this degradation to some extent: comparing LKD, PKD and PMD with the baseline, we find that knowledge distillation greatly alleviates the substantial drops in the Hits@1 and Hits@10 metrics. However, among LKD, PKD, and PMD there still exists a noticeable gap, particularly on the Hits@10 metric. Comparing the 12-layer models in the pre-distillation stage, only our PMD achieves a performance improvement across all metrics. This allows the model to surpass its own capabilities without increasing the model's workload. We attribute this mainly to the effectiveness of the MGFD module. In comparison to the original training strategy, MGFD introduces a greater amount of uncertain reasoning information through masked inference, which includes both positive and negative information. However, because the teacher model is a well-trained strong model, its reasoning information tends to lean towards the positive side. This enables the student model to learn richer knowledge, thereby enhancing the robustness of the model.
Q2: Are Both the Progressive Distillation Module and the MGFD Module Useful? Yes! Specifically, the difference between PMD (w/o MGFD) and PMD lies in whether a mask operation is applied to the input sequence; the rest of the implementation is exactly the same.

Method | Layers | MR ↓ | MRR ↑ | Hits@1 ↑ | Hits@3 ↑ | Hits@10 ↑
Baseline* | 12 | 132.1 | 67.0 | 58.4 | 72.7 | 81.7
Baseline* | 9 | 140.0 | 63.1 | 53.6 | 69.6 | 79.5
Baseline* | 6 | 130.7 | 62.1 | 52.7 | 68.5 | 78.5
Baseline* | 3 | 244.3 | 61.2 | 52.4 | 65.8 | 74.9
PMD (w/o MGFD) | 12 | 145.6 | 67.3 | 59.3 | 72.5 | 81.3
PMD (w/o MGFD) | 9 | 150.4 | 66.7 | 58.6 | 72.1 | 80.6
PMD (w/o MGFD) | 6 | 166.9 | 65.4 | 56.9 | 70.6 | 80.3
PMD (w/o MGFD) | 3 | 165.1 | 62.3 | 52.8 | 68.4 | 78.9
PMD (ours) | 12 | 110.3 | 67.8 | 58.8 | 73.7 | 83.2
PMD (ours) | 9 | 107.2 | 67.2 | 58.2 | 73.2 | 82.5
PMD (ours) | 6 | 120.0 | 65.9 | 56.5 | 72.3 | 81.9
PMD (ours) | 3 | 133.6 | 62.8 | 52.9 | 69.5 | 80.4

Table 4: Main results with and without (w/o) the MGFD module on the WN18RR dataset. ↓ indicates that lower is better; ↑ indicates that higher is better.

As shown in Table 4, when we remove the MGFD module and use only the progressive distillation strategy, the model achieves an improvement in precision on the Hits@1 metric, while the Hits@3 and Hits@10 metrics experience a certain degree of decline. This indicates that the progressive distillation strategy effectively transfers the global representation information from the teacher model to the student model; however, it also inherits the biases present in the teacher model, leading to a decrease in the model's robustness. Moreover, compared to the baseline model, PMD (w/o MGFD) outperforms the baseline on most metrics while largely preserving the performance of the teacher model. This further validates the feasibility and effectiveness of the progressive distillation strategy. On the other hand, when the MGFD module is employed, taking PMD12 as an example, compared to PMD12 (w/o MGFD), sacrificing a mere 0.4 on the Hits@1 metric results in performance gains of 1.3 on Hits@3 and 1.9 on Hits@10. We believe that such a trade-off is highly valuable, and the same trend holds as the number of layers changes. It addresses the issue of reduced model robustness when using only the progressive distillation strategy. Therefore, we believe that by striking a balance between the two, we can improve the overall performance of the model.
We also compare fixed masking rates with the decreasing masking rate, as shown in Figure 2. We find that the decreasing masking rate is the most stable and consistently maintains the best performance across all metrics. The fixed 20% mask rate performs well on the Hits@10 metric but drops dramatically on the remaining metrics. This demonstrates that only a progressive distillation strategy that decreases the mask rate and the model parameters simultaneously can reduce the information loss of the teacher model and thus maintain a certain level of model performance while the model parameters plummet.

Figure 2: Comparison experiments between the diminishing mask rate and fixed mask rates (5%, 10%, 20%): MRR, Hits@1, Hits@3 and Hits@10 (%) against the number of layers (3, 6, 9, 12).

Q3: What Effects Do Different Mask Rates in the MGFD Module Have? The higher the masking rate, the stronger the model's robustness, but the lower its accuracy. We only explore the masking rate of MGFD in the pre-distillation stage for the PMD12 model, keeping the model's parameters unchanged, in order to eliminate any performance degradation resulting from reducing the model's parameters. The experimental results are shown in Figure 3. As the masking rate gradually increases from 0% to 50%, the Hits@10 metric shows a rising-then-falling trend (increasing from 81.67 to 84.27 and then decreasing to 83.75), while the Hits@1 metric exhibits a gradual downward trend (decreasing from 58.44 to 53). This indicates that during the knowledge transfer process, the MGFD module enables the student model to learn rich representations of the masked features generated by the teacher model, thereby improving the model's robustness. However, excessively high masking rates prevent the teacher model from generating accurate triple features based on the available information, resulting in a decrease in precision. Therefore, based on the experimental results, we finally consider a 20% masking rate optimal, as it ensures both model robustness and precision improvement.

Figure 3: Hits@1 and Hits@10 indicators of PMD12 with mask rates from 0% to 50%.

Conclusion
In this paper, we propose the PMD method, aiming to significantly reduce the complexity of KGC models. To address the issue of limited representation information in traditional feature distillation methods, we design the MGFD approach, in which rich representation information is generated by the teacher model and transferred to the student model. To tackle the problem of mismatched expressive power between the teacher and student models, we introduce a progressive distillation strategy that gradually reduces the masking ratio and the model parameters, enabling efficient knowledge transfer between teacher and student. Extensive experimental results and ablation studies validate the effectiveness of PMD. In the future, to more effectively transfer knowledge from the teacher model to the student model, we will explore adaptive selection of the mask rate and of the mask positions during the distillation stage.
Acknowledgments
This work is supported by the STI 2030—Major Projects (No. 2021ZD0201500), the National Natural Science Foundation of China (NSFC) (No. 62201002), the Distinguished Youth Foundation of Anhui Scientific Committee (No. 2208085J05), the Special Fund for Key Program of Science and Technology of Anhui Province (No. 202203a07020008), and the Open Fund of Key Laboratory of Flight Techniques and Flight Safety, CACC (No. FZ2022KF15).

References
Abnar, S.; Dehghani, M.; and Zuidema, W. 2020. Transferring inductive biases through knowledge distillation. arXiv:2006.00555.
Berant, J.; and Liang, P. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1415–1425.
Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, 1247–1250.
Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems, 26.
Cao, Y.; Wang, X.; He, X.; Hu, Z.; and Chua, T.-S. 2019. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. In The world wide web conference, 151–161.
Dai Quoc Nguyen, T. D. N.; Nguyen, D. Q.; and Phung, D. 2018. A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. In Proceedings of NAACL-HLT, 327–333.
Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv:1503.02531.
Jia, Z.; Pramanik, S.; Saha Roy, R.; and Weikum, G. 2021. Complex temporal question answering on knowledge graphs. In Proceedings of the 30th ACM international conference on information & knowledge management, 792–802.
Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, volume 1, 2.
Kim, B.; Hong, T.; Ko, Y.; and Seo, J. 2020. Multi-task learning for knowledge graph completion with pre-trained language models. In Proceedings of the 28th International Conference on Computational Linguistics, 1737–1743.
Kuncoro, A.; Dyer, C.; Rimell, L.; Clark, S.; and Blunsom, P. 2019. Scalable Syntax-Aware Language Models Using Knowledge Distillation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3472–3484.
Kurata, G.; and Audhkhasi, K. 2018. Improved knowledge distillation from bi-directional to uni-directional LSTM CTC for end-to-end speech recognition. In 2018 IEEE Spoken Language Technology Workshop (SLT), 411–417. IEEE.
Li, D.; Yang, S.; Xu, K.; Yi, M.; He, Y.; and Wang, H. 2022. Multi-task pre-training language model for semantic network completion. arXiv:2201.04843.
Lv, X.; Lin, Y.; Cao, Y.; Hou, L.; Li, J.; Liu, Z.; Li, P.; and Zhou, J. 2022. Do pre-trained models benefit knowledge graph completion? A reliable evaluation and a reasonable approach. Association for Computational Linguistics.
Miller, G. A. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11): 39–41.
Petroni, F.; Rocktäschel, T.; Riedel, S.; Lewis, P.; Bakhtin, A.; Wu, Y.; and Miller, A. 2019. Language Models as Knowledge Bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2463–2473.
Pham, M.; Cho, M.; Joshi, A.; and Hegde, C. 2022. Revisiting self-distillation. arXiv:2206.08491.
Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018. Improving language understanding by generative pre-training.
Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108.
Saxena, A.; Tripathi, A.; and Talukdar, P. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In Proceedings of the 58th annual meeting of the association for computational linguistics, 4498–4507.
Schlichtkrull, M.; Kipf, T. N.; Bloem, P.; Van Den Berg, R.; Titov, I.; and Welling, M. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, Proceedings 15, 593–607. Springer.
Shang, C.; Tang, Y.; Huang, J.; Bi, J.; He, X.; and Zhou, B. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. In Proceedings of the AAAI conference on artificial intelligence, volume 33, 3060–3067.
Suchanek, F. M.; Kasneci, G.; and Weikum, G. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, 697–706.
Sun, S.; Cheng, Y.; Gan, Z.; and Liu, J. 2019. Patient Knowledge Distillation for BERT Model Compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4323–4332.
Sun, Z.; Deng, Z.-H.; Nie, J.-Y.; and Tang, J. 2018. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In International Conference on Learning Representations.
Toutanova, K.; and Chen, D. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd workshop on continuous vector space models and their compositionality, 57–66.
Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; and Bouchard, G. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, 2071–2080. PMLR.
Vashishth, S.; Sanyal, S.; Nitin, V.; and Talukdar, P. 2019. Composition-based Multi-Relational Graph Convolutional Networks. In International Conference on Learning Representations.
Vrandečić, D.; and Krötzsch, M. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10): 78–85.
Wang, B.; Shen, T.; Long, G.; Zhou, T.; Wang, Y.; and Chang, Y. 2021a. Structure-augmented text representation learning for efficient knowledge graph completion. In Proceedings of the Web Conference 2021, 1737–1748.
Wang, H.; Zhao, M.; Xie, X.; Li, W.; and Guo, M. 2019. Knowledge graph convolutional networks for recommender systems. In The world wide web conference, 3307–3313.
Wang, K.; Liu, Y.; Ma, Q.; and Sheng, Q. Z. 2021b. MulDE: Multi-teacher knowledge distillation for low-dimensional knowledge graph embeddings. In Proceedings of the Web Conference 2021, 1716–1726.
Wang, L.; Zhao, W.; Wei, Z.; and Liu, J. 2022a. SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 4281–4294.
Wang, X.; He, Q.; Liang, J.; and Xiao, Y. 2022b. Language models as knowledge embeddings. arXiv:2206.12617.
Wang, Y. C.; Ge, X.; Wang, B.; and Kuo, C.-C. J. 2023. GreenKGC: A Lightweight Knowledge Graph Completion Method. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, 10596–10613.
Xiong, C.; Power, R.; and Callan, J. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of the 26th international conference on world wide web, 1271–1279.
Xue, J.; Fan, C.; Yi, J.; Wang, C.; Wen, Z.; Zhang, D.; and Lv, Z. 2023. Learning from yourself: A self-distillation method for fake speech detection. In ICASSP 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE.
Yang, Z.; Li, Z.; Jiang, X.; Gong, Y.; Yuan, Z.; Zhao, D.; and Yuan, C. 2022. Focal and global knowledge distillation for detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4643–4652.
Yao, L.; Mao, C.; and Luo, Y. 2017. KG-BERT: BERT for knowledge graph completion. arXiv:1909.03193.
Youn, J.; and Tagkopoulos, I. 2022. KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction. arXiv:2211.02744.
Zhang, T.; and Hashimoto, T. B. 2021. On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 5131–5146.
Zhao, B.; Cui, Q.; Song, R.; Qiu, Y.; and Liang, J. 2022. Decoupled knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11953–11962.
Zhu, Y.; Zhang, W.; Chen, M.; Chen, H.; Cheng, X.; Zhang, W.; and Chen, H. 2022. DualDE: Dually distilling knowledge graph embedding for faster and cheaper reasoning. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 1516–1524.
StockMixer: A Simple Yet Strong MLP-Based Architecture for Stock Price Forecasting
Jinyong Fan, Yanyan Shen*
Shanghai Jiao Tong University
{weizhili, shenyy}@sjtu.edu.cn

Abstract
Stock price forecasting is a fundamental yet challenging task in quantitative investment. Various researchers have developed combinations of neural network models (e.g., RNNs, GNNs, Transformers) for capturing the complex indicator, temporal and stock correlations of stock data. While complex architectures are highly expressive, they are often difficult to optimize, and their performance is often compromised by the limited stock data. In this paper, we propose a simple MLP-based architecture named StockMixer which is easy to optimize and enjoys strong predictive performance. StockMixer performs indicator mixing, followed by time mixing, and finally stock mixing. Unlike standard MLP-based mixing, we devise the time mixing to exchange multi-scale time patch information and realize the stock mixing by exploiting stock-to-market and market-to-stock influences explicitly. Extensive experiments on real stock benchmarks demonstrate that our proposed StockMixer outperforms various state-of-the-art forecasting methods by a notable margin while reducing memory usage and runtime cost. Code is available at https://github.com/SJTU-Quant/StockMixer.

Introduction
Stock price forecasting is a fundamental task in the field of quantitative investment. Since the movements of different stock prices in a market are not independent of each other, stock price forecasting is practically formulated as a multivariate time series forecasting problem. As the stock market is highly volatile and chaotic, achieving high forecasting accuracy remains an open question.
Numerous efforts have been devoted to improving forecasting performance for profitable stock investment. Early attempts apply basic machine learning methods to uncover complex patterns in stock data, including decision trees (Nugroho, Adji, and Fauziati 2014; Kamble 2017), support vector machines (SVM) (Xie et al. 2013), and k-nearest neighbors (KNN) (Alkhatib et al. 2013). With the advent of deep learning, recent literature focuses on developing neural architectures that are expressive and flexible enough to exploit various inductive biases based on intuitive (probably insightful) understandings of the stock market. Generally, three kinds of correlations have been investigated.
*corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
• Indicator correlations. Typically, there are several financial indicators serving as raw features for each stock per trading day, e.g., the basic open, high, low, and closing prices. It is desirable to model correlations and dependencies among raw indicators and extract high-level latent features that are informative for future stock trends.
• Temporal correlations. Stock price movements are fundamentally caused by continuous demand-and-supply balancing. The temporal trends in surrounding trading days are noticeably correlated, e.g., moving in the same or reverse directions. The existence of temporal correlations makes future trends predictable.
• Stock correlations. Since they stay in the same market, stocks are correlated. For instance, stocks in the same industry may all have an upward movement in one trading day due to a bullish event for the industry. Being aware of the stock correlations is thus beneficial for forecasting.
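To make the three correlation axes concrete, the sketch below represents stock data as a (stocks × time × indicators) tensor and applies a shared two-layer MLP along one chosen axis, which is the generic MLP-based mixing operation discussed below; all shapes, hidden sizes, and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AxisMixer(nn.Module):
    """Apply a shared 2-layer MLP along one axis of a (stock, time, indicator) tensor."""
    def __init__(self, axis_len, hidden):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(axis_len, hidden), nn.GELU(),
                                 nn.Linear(hidden, axis_len))

    def forward(self, x, axis):
        x = x.transpose(axis, -1)      # move the target axis to the last position
        x = self.mlp(x)                # mix information along that axis
        return x.transpose(axis, -1)   # restore the original layout

x = torch.randn(100, 16, 4)            # 100 stocks, 16 trading days, 4 indicators
indicator_mix = AxisMixer(4, 8)
time_mix = AxisMixer(16, 32)
stock_mix = AxisMixer(100, 64)
y = stock_mix(time_mix(indicator_mix(x, axis=2), axis=1), axis=0)
print(y.shape)                          # torch.Size([100, 16, 4])
```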
Existing deep learning methods (Zou et al. 2022) take some or all of these correlations into account. A typical model architecture consists of a specialized neural module for modeling each individual correlation, followed by a fusion module that combines information from the preceding modules for the final prediction. Specifically, Recurrent Neural Networks (RNNs) (Qin et al. 2017; Nelson, Pereira, and De Oliveira 2017; Feng et al. 2018) are used for modeling temporal correlations; Graph Neural Networks (GNNs) (Feng et al. 2019; Li et al. 2021; Sawhney et al. 2021) are good at exchanging stock-wise information; and Transformer-based models (Yoo et al. 2021; Wang et al. 2022; Li et al. 2023a) use the attention mechanism to emphasize crucial patterns from correlated subjects. Nevertheless, a hybrid neural architecture increases model complexity and may further hurt the model's generalization ability. The reasons are three-fold. First, stock price data is of a limited size, i.e., about 250 trading days per year, and each day delivers only one multivariate time series for training, introducing overfitting risks into the model. Second, a hybrid model involving diverse information exchanging scopes (e.g., local or global) and behaviors (e.g., using gating or attention) is not easy to optimize, which may compromise the final performance. Third, some components may learn inaccurate inductive bias that misleads the model training. For instance, GNNs assume smoothness between related stocks, but their underlying patterns can be heterogeneous. To these ends, we are interested in developing a simple neural architecture that is easy to optimize and enjoys strong predictive performance by modeling the above-mentioned correlations effectively.

Recently, Multi-Layer Perceptron (MLP) architectures have shown performance in computer vision tasks that rivals state-of-the-art neural networks using convolutions and attention mechanisms (Tolstikhin et al. 2021; Touvron et al. 2022; Yu et al. 2022). The architectural simplicity and linear computational efficiency of the MLP architecture inspire us to adapt it to the problem of stock price forecasting. A straightforward way is to perform MLP-based mixing three times for modeling indicator, temporal, and stock correlations, respectively. Specifically, indicator mixing uses matrix multiplication and activation functions to model interactions among indicators within each stock-time pair, while time and stock mixing are performed within the stock-indicator and indicator-time pairs accordingly. However, standard MLP-based mixing suffers from poor performance according to our experiments. Through deeper analysis, we identify two key technical challenges. First, due to the high dynamics of the stock market, the time correlations in surrounding trading days are not simply point-wise correlations. At one extreme, the closing prices of a stock within a time window of, say, 5 days are constantly changing, similar to i.i.d. samples drawn from an underlying distribution. Time mixing that exchanges time-point information is thus insufficient for modeling temporal correlations. Second, MLP-based mixing over stocks essentially performs information exchange among stocks based on the learned weight matrix. As pointed out in previous works (Sawhney et al. 2021; Huynh et al. 2023), stock correlations are complex, and direct stock-to-stock mixing may compromise the model performance.
For instance, two stocks in the same industry may randomly have similar trends or divergent trends over time. To sum up, it is time to seek effective time mixing and stock mixing schemes that overcome the above two challenges.

In this paper, we propose a simple yet strong MLP-based architecture named StockMixer for stock price forecasting. Specifically, we design three mixing blocks for modeling the indicator, temporal, and stock correlations effectively. Our insights for tackling the two challenges are the following. For time mixing, the local temporal patterns on surrounding days are correlated to a certain extent. For example, the rising-up and falling-down variation tendencies are not independent and are driven by the latent stock value. Hence, we patch time steps at multiple scales and extract patch tendencies to be mixed. For stock mixing, we recognize that stock correlations are typically influenced by overall stock market conditions or states. For instance, in a bull market, stocks tend to become more correlated and move together as investors become more optimistic. We thus use two MLP structures to learn latent stock states from all stocks and use the states to influence individual stocks. This leads to more robust modeling of stock correlations, from individual stocks to the whole market, and then back to the stocks. Based on the sophisticated designs for time mixing and stock mixing, our proposed StockMixer enjoys the same structural simplicity as MLP-Mixer and achieves more promising predictive performance than state-of-the-art methods. To summarize, this paper makes the following contributions:

• We propose a lightweight and effective MLP-based architecture for stock price forecasting. It consists of indicator mixing, time mixing, and stock mixing to capture complex correlations in the stock data.
• We demonstrate the deficiencies of standard MLP-based mixing. We introduce patch-based multi-scale time mixing and market-aware stock mixing that exploit the characteristics of stock patterns.
• We conduct extensive experiments on three real stock benchmarks, NASDAQ, NYSE, and S&P500. The experimental results show that our proposed StockMixer outperforms state-of-the-art methods in terms of various evaluation metrics.

Related Work

In this section, we review related work from the literature on stock price forecasting and MLP-based architectures.

Stock Price Forecasting. Stock price forecasting has undergone a long period of development on top of price-volume indicators from historical data. At the very beginning, conventional mathematical algorithms focused only on numerical features (Piccolo 1990; Wang and Leu 1996; Tseng, Yu, and Tzeng 2002) based on financial technical analysis. With the advances of deep neural networks (DNNs), some works following this paradigm employ recurrent neural networks (Nelson, Pereira, and De Oliveira 2017; Qin et al. 2017) or convolutional neural networks (Tsantekidis et al. 2017) to model a single stock price and predict its short-term trend. To enhance the capability of handling fine-grained transition signals, some efforts have explored other techniques such as the self-attention mechanism (Li, Shen, and Zhu 2018), adversarial training (Feng et al. 2018), and gated causal convolutions (Wang et al. 2021). Later studies achieve state-of-the-art performance by considering inter-stock relationships. For instance, RSR (Feng et al.
2019) proposes a temporal graph convolution model, which composes a temporal encoder, relation embedding, and a prediction layer. LSTM-RGCN (Li et al. 2021) handles both positive and negative correlations among stocks to alleviate the over-smoothing problem when predicting overnight stock price movements. STHAN-SR (Sawhney et al. 2021) augments corporate relevance based on Wiki data and uses hypergraph convolution to propagate higher-order neighbors' information. The latest method, ESTIMATE (Huynh et al. 2023), also uses hypergraphs to capture non-pairwise correlations, with temporal generative filters for individual patterns per stock.

MLP-based Architectures. Recently, MLPs have been reinvestigated in the computer vision domain (Tolstikhin et al. 2021; Liu et al. 2021; Touvron et al. 2022). With linear computational complexity and a simpler architecture, MLP-Mixer (Tolstikhin et al. 2021) attains performance similar to CNNs and Transformers by operating on patches, significantly enhancing the amount of inductive bias over common MLPs. Similar efforts have recently been made in the time series domain. DLinear (Zeng et al. 2023) and its follow-up works (Das et al. 2023) confirm the feasibility of this simple architecture in temporal prediction tasks. A series of works (Zhang et al. 2022; Li et al. 2023b; Ekambaram et al. 2023) utilize the MLP-Mixer backbone, which significantly empowers the learning capability of simple MLP structures, to improve the performance of time series forecasting. However, since stock data lacks periodicity and changes dynamically in temporal and stock correlations, the aforementioned MLP-based methods perform even worse than basic models on stock datasets.

Figure 1: Overview of the proposed StockMixer.

Methodology

Problem Definition

Following the setup of existing works (Feng et al. 2018; Huynh et al. 2023), we take as input the normalized historical stock patterns with multiple indicators (such as the open price, close price, or 5-day average close price) and output the closing price of the next day to calculate the 1-day return ratio. The notations are as follows. Given all data of a stock market composed of $N$ stocks, $X = \{X_1, X_2, \ldots, X_N\}$, each stock $X_i \in \mathbb{R}^{T \times F}$ contains historical data with lookback window length $T$, where $F$ denotes the indicator dimension at one time step. Our task is to predict the closing price $p_i^t$ on trading day $t$ and calculate the 1-day return ratio $r_i^t = \frac{p_i^t - p_i^{t-1}}{p_i^{t-1}}$. Denoting our model parameters as $\theta$, the process can be expressed as:

$$X \in \mathbb{R}^{N \times T \times F} \xrightarrow{\;\theta\;} p \in \mathbb{R}^{N \times 1} \rightarrow r \in \mathbb{R}^{N \times 1}. \quad (1)$$

Standard MLP-based Mixing

As a lightweight method for image classification, MLP-Mixer relies only on linear layers applied repeatedly along the token or feature channel, residual connections, data-shape transformations (such as reshaping and transposition), and appropriate activation functions. Beyond a significant enhancement in computation speed, we value it more for exchanging information between various dimensions. This ability enables close communication between indicators, time, and stocks in the stock market, promoting the expressive power of the model. Residual connections keep a trade-off between the inputs and the mixed features, while layer normalization eliminates the impact of data offset to a certain extent.
For each original representation $x \in \mathbb{R}^{a \times b}$, we compute a new embedding $y \in \mathbb{R}^{a \times b}$ with mixed features on dimension $a$ as:

$$y = x + W_2\,\sigma(W_1\,\mathrm{LayerNorm}(x)), \quad (2)$$

where $x$ is the input feature and $y$ is the output of the block. $W_1 \in \mathbb{R}^{h \times a}$ and $W_2 \in \mathbb{R}^{a \times h}$ are trainable weights of fully connected layers, and $h$ is a tunable hidden dimension, always set equal to $a$. $\sigma$ denotes the non-linear activation function, which has a significant impact on predictive performance. Previous CV models chose GeLU (Howard et al. 2019), which performs better on images; through experiments, we find that ReLU and HardSwish (Avenash and Viswanath 2019) achieve superior performance on temporal data.

The StockMixer Approach

Figure 1 illustrates the overview of StockMixer, which mainly includes two parts: indicator & time mixing and stock mixing. The former extracts the respective representation of each stock, acting as an efficient encoder. The latter gathers the learned representations of all stocks in the current market to capture the complex stock correlations. Finally, we combine these two representations to predict the closing price on the trading day. Here, adhering to the arrangement of the data dimensions, we design the module sequence as indicator mixing, time mixing, and stock mixing, deriving our StockMixer.

Figure 2: Standard mixing (left) vs. time mixing (right).

Indicator Mixing. Historical stock prices have been shown to be a strong indicator of future stock trends and are widely used across the financial literature. Previous works fed the sequence of financial indicators into a recurrent neural network and ignored the correlations between indicators. For instance, the difference between the open and closing prices of a stock on the same day may imply its future trend, so it is necessary to exchange the information of indicators at each time step before calculating the temporal representations. Our indicator mixing is consistent with standard MLP-based mixing. For each stock, we transpose its time and indicator dimensions to perform feature mixing on the indicator dimension and formulate the indicator mixing as:

$$\hat{x}^\top = x^\top + W_2\,\sigma(W_1\,\mathrm{LayerNorm}(x^\top)), \quad (3)$$

where $x^\top \in \mathbb{R}^{F \times T}$ denotes the transposed original embedding of a single stock and $\hat{x} \in \mathbb{R}^{T \times F}$ is the result of indicator mixing, which we take as the input of the following time mixing.

Time Mixing. Unlike standard MLP-based mixing in computer vision, which emphasizes the equality of patches, information exchange in the temporal domain relies more on chronological order. Specifically, information from earlier time steps can influence later time steps, but not vice versa. However, standard MLP-based mixing employs a fully connected structure that exchanges information against this characteristic of temporal data. To address this issue, we propose a structural modification of the fully connected hidden layer resembling a self-attention mask, as shown in Figure 2. When communicating temporal information, any time step $t$ can only see itself and the contents of its preceding time steps, instead of attending to all steps equally. This modification ensures that information from later time steps does not leak into earlier ones, more in line with the temporal nature of the data.
Replacing the weights with upper triangular matrices realizes this process:

$$h = \hat{x} + U_2\,\sigma(U_1\,\mathrm{LayerNorm}(\hat{x})), \quad (4)$$

where $\hat{x} \in \mathbb{R}^{T \times F}$ denotes the indicator mixing representation, and $U_1 \in \mathbb{R}^{H_t \times T}$ and $U_2 \in \mathbb{R}^{T \times H_t}$ signify the learnable weights of the first and second fully connected layers, respectively, in which only the upper triangular part of each matrix is trainable to achieve the effect of a mask. $H_t$ denotes the hidden time dimension, and here we set $H_t = T$ as is customary.

Although there exists research (Zeng et al. 2023; Li et al. 2023b; Ekambaram et al. 2023) deploying MLPs in long time series forecasting (LTSF), it relies heavily on stable and periodic datasets (e.g., electricity and transportation), from which it is easier to learn stable and sufficient representations. Due to the timeliness of the stock market, only the recent lookback window is effective for price prediction. It is necessary to utilize the patterns more fully and overcome the sensitivity of linear models to small fluctuations caused by the absence of periodicity. In order to mine as much information as possible from short sequences while enhancing the robustness of the linear layers against time deviation, we segment the original time sequence into subsequence-level patches and mix features at different scales. Such segmentation causes a dimension extension adverse to mixing, so we map the representations of all time steps in a patch into one overall view. Specifically, we obtain the corresponding single pattern by average pooling or one-dimensional convolution from the raw inputs $x \in \mathbb{R}^{T \times F}$ of a stock as:

$$x^{(k)} = \mathrm{AvgPool}_{\mathrm{kernel}=k}(x), \quad k \in \{\tfrac{T}{2}, \tfrac{T}{4}, \ldots, 1\}, \quad (5)$$

where $x^{(k)} \in \mathbb{R}^{\frac{T}{k} \times F}$ represents the compressed sequence when the patch size is $k$. Then we send $x^{(k)}$ through indicator mixing and time mixing and obtain its mixed embedding $h^{(k)} \in \mathbb{R}^{\frac{T}{k} \times F}$. After that, we apply a fully connected layer after a concatenation operation over all patch sizes to aggregate the final temporal representation $h$. The process is:

$$h^{(k)} = \mathrm{TimeMixing}(\mathrm{IndicatorMixing}(x^{(k)})), \quad (6)$$

$$h = \mathrm{FC}(\mathrm{concat}(\{h^{(k)}\})), \quad k \in \{\tfrac{T}{2}, \tfrac{T}{4}, \ldots, 1\}. \quad (7)$$

Concretely, in this work, if the length of the input series is 16, we set $k \in \{1, 2, 4\}$ and thus $h \in \mathbb{R}^{(T + \frac{T}{2} + \frac{T}{4})}$. For the convenience of the subsequent narration, we denote the embedding dimension $d = T + \frac{T}{2} + \frac{T}{4}$. By combining information at different scales, the model can obtain multi-level, rich, and diverse feature representations from limited series and improve its generalization ability on unseen data.
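To make the mixing operations above concrete, the following is a minimal PyTorch sketch of the standard mixing block (Eq. 2), the masked time mixing (Eq. 4), and the multi-scale patching (Eqs. 5-7). This is an illustrative reconstruction rather than the released implementation (see the repository linked in the abstract): the class names, the ReLU activation, the lower-triangular mask convention (rows indexing outputs, equivalent to the paper's upper-triangular convention for row-vector inputs), and the scales (1, 2, 4) are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixingBlock(nn.Module):
    # Standard MLP-based mixing (Eq. 2) over the last dimension of x:
    # y = x + W2 * act(W1 * LayerNorm(x)); hidden size defaults to the input size.
    def __init__(self, dim, hidden=None):
        super().__init__()
        hidden = hidden or dim
        self.norm = nn.LayerNorm(dim)
        self.fc1 = nn.Linear(dim, hidden, bias=False)
        self.fc2 = nn.Linear(hidden, dim, bias=False)
        self.act = nn.ReLU()  # ReLU/HardSwish found superior to GELU here

    def forward(self, x):
        return x + self.fc2(self.act(self.fc1(self.norm(x))))

class MaskedTimeMixing(nn.Module):
    # Time mixing (Eq. 4): triangular masks on the weights so that step t
    # only aggregates information from steps <= t (no future leakage).
    def __init__(self, T):
        super().__init__()
        self.norm = nn.LayerNorm(T)
        self.u1 = nn.Parameter(torch.randn(T, T) / T ** 0.5)
        self.u2 = nn.Parameter(torch.randn(T, T) / T ** 0.5)
        self.register_buffer("mask", torch.tril(torch.ones(T, T)))
        self.act = nn.ReLU()

    def forward(self, x):                  # x: (batch, T, F)
        xt = x.transpose(1, 2)             # (batch, F, T): mix along time
        h = F.linear(self.norm(xt), self.u1 * self.mask)
        h = F.linear(self.act(h), self.u2 * self.mask)
        return (xt + h).transpose(1, 2)

class MultiScaleTimeMixer(nn.Module):
    # Eqs. 5-7: average-pool the series at several patch sizes, mix each
    # scale, then concatenate and aggregate with one fully connected layer.
    # T must be divisible by the largest scale (e.g., T = 16, scales (1,2,4)).
    def __init__(self, T, num_indicators, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.ind = nn.ModuleList([MixingBlock(num_indicators) for _ in scales])
        self.tim = nn.ModuleList([MaskedTimeMixing(T // k) for k in scales])
        d = sum(T // k for k in scales)    # d = T + T/2 + T/4 for scales (1,2,4)
        self.fc = nn.Linear(d * num_indicators, d)

    def forward(self, x):                  # x: (batch, T, F)
        hs = []
        for k, ind, tim in zip(self.scales, self.ind, self.tim):
            xk = F.avg_pool1d(x.transpose(1, 2), k).transpose(1, 2)
            hs.append(tim(ind(xk)).flatten(1))
        return self.fc(torch.cat(hs, dim=1))   # (batch, d)
```

For $T = 16$ and scales (1, 2, 4), the output dimension is $d = 16 + 8 + 4 = 28$, matching the paper's $d = T + \frac{T}{2} + \frac{T}{4}$.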
Stock Mixing. Based on the previous operations, we obtain the temporal representations of all stocks $H = \{h_1, h_2, \ldots, h_N\} \in \mathbb{R}^{N \times d}$. Next, we describe our stock mixing, which constructs inter-stock relations without any prior knowledge such as knowledge graphs or industry information. It is conceivable that the strong information exchange capability of MLP-Mixer could be applied to relation capture. By setting the hidden dimension of standard mixing to $a = N$ in Equation 2, the interaction process can be understood as $N$ market characteristics, aggregated from all stocks, in turn affecting these $N$ stocks. This modeling method is no different from message passing on a fully connected graph with an edge between any two stocks. In that case, some insignificant relationships or coincidences will also be considered, and the mode of message passing we capture is extremely vulnerable. Due to the small sizes of the stock datasets, the absence of transferability and robustness causes a serious overfitting problem.

Drawing insights from this, we hope to preserve the most important and informative market states to improve the performance and interpretability of the model. For one stock, we do not have to take all other stocks' information into consideration; instead, we decompose the process of direct information exchange among stocks into stock-to-market and market-to-stock steps, similar to a hypergraph. To achieve this, we replace the hidden dimension of standard mixing related to stocks with a hyperparameter $m$ to achieve the effect of a self-learnable hypergraph. The stock mixing can be formulated as:

$$\hat{H} = H + M_2\,\sigma(M_1\,\mathrm{LayerNorm}(H)), \quad (8)$$

where $M_1 \in \mathbb{R}^{m \times N}$ compresses the $N$ representations into $m$, while $M_2 \in \mathbb{R}^{N \times m}$ restores the zoomed information. This is similar to the process in which node information in a hypergraph is first aggregated onto hyperedges and then the impact of the hyperedges on each entity is calculated, except that here the process is induced by the model itself. $H \in \mathbb{R}^{N \times d}$ represents the individual embeddings of the $N$ stocks in the market, and $\hat{H}$ denotes the influence on each stock from the extracted relationships. Finally, we concatenate each stock's own representation $H$ with its corresponding market-influenced representation $\hat{H}$ and send it to a fully connected layer for dimension reduction to obtain the final prediction.

Loss Function

We use the 1-day return ratio of a stock as the ground truth rather than the normalized price used in previous work. We adopt a combination of a pointwise regression loss and a pairwise ranking-aware loss to minimize the MSE between the predicted and actual return ratios while maintaining the relative order of top-ranked stocks with higher expected returns for investment:

$$\mathcal{L} = \mathcal{L}_{\mathrm{MSE}} + \alpha \sum_{i=1}^{N} \sum_{j=1}^{N} \max\big(0,\, -(\hat{r}_i^t - \hat{r}_j^t)(r_i^t - r_j^t)\big), \quad (9)$$

where $\hat{r}^t$ and $r^t$ are the predicted and actual ranking scores, respectively, and $\alpha$ is a weighting parameter.

Experiments

Experimental Setup

Datasets. We evaluate our approach on three real-world datasets from the US stock market. The statistics of the datasets are given in Table 1. These datasets all contain relatively complete sector-industry relations or Wiki company-based relations, making it easy to compare with other graph-based methods. NASDAQ and NYSE (Feng et al. 2019) filter transaction records between 01/02/2013 and 12/08/2017 from the corresponding markets. The datasets remove abnormal patterns and penny stocks while maintaining their representative properties: NASDAQ is more volatile, whereas NYSE is more stable. S&P500 (Huynh et al. 2023) gathers historical price data and the information about industries in the S&P 500 index from the Yahoo Finance database.

             NASDAQ    NYSE      S&P500
# Stocks     1026      1737      474
Start Time   13-01-02  13-01-02  16-01-04
End Time     17-12-08  17-12-08  22-05-25
Train Days   756       756       1006
Val Days     252       252       253
Test Days    273       273       352

Table 1: Statistics of datasets.

Implementation Details. Our model is implemented with PyTorch. For a fair comparison, all samples are generated by moving a 16-day lookback window along trading days. Regarding the temporal scale factors, $k \in \{1, 2, 4\}$ is set for all datasets, and only one stock mixing block is employed in the model. We use grid search to find the optimal market hyperparameter $m$, and finalize $m = 20, 25, 8$ for NASDAQ, NYSE, and S&P500, respectively. For methods that require market information, we construct graphs or hypergraphs according to the preprocessing process in the original paper.
The loss factor $\alpha$ is 0.1 and the learning rate is 1e-3. We conducted all the experiments on a server equipped with an Intel(R) Xeon(R) Silver 4110 CPU, 128GB of memory, and an Nvidia GeForce RTX 2080 Ti GPU (12GB memory). Each experiment was repeated 3 times and the average performance is reported.

Metrics. Previous studies applied distinct metrics, making comprehensive comparison of the various methods troublesome. To thoroughly evaluate the performance of the techniques, we employ the four most frequently used and most stable metrics: two rank-based evaluation metrics, one accuracy-based metric, and one return-based metric. Information Coefficient (IC) shows how close the prediction is to the actual result, computed as the average Pearson correlation coefficient. Rank Information Coefficient (RIC) is the coefficient based on the ranking of the stocks' short-term profit potential, computed as the average Spearman coefficient. These two metrics evaluate the stock selection ability of a model and are strongly related to the rank loss. Precision@N evaluates the precision of the top N predictions. For example, when N is 10 and the labels of 4 among these top 10 predictions are positive, then Precision@10 is 40%. Sharpe Ratio (SR) takes into account both return and risk and calculates the average return per unit of volatility in relation to the risk-free rate: $\mathrm{SR} = \frac{R_t - R_f}{\theta}$, where $R_t$ represents the return, $R_f$ represents the risk-free rate, and $\theta$ represents the standard deviation of the returns.

Baselines. We compare the performance of our architecture with that of several state-of-the-art baselines, as follows: (1) LSTM (Hochreiter and Schmidhuber 1997) applies a vanilla LSTM on temporal price data for ranking. (2) ALSTM (Feng et al. 2018) integrates adversarial training and stochasticity simulation in an enhanced LSTM to better learn the market dynamics. (3) RGCN (Li et al. 2021) adopts Relational Graph Convolutional Networks (RGCN) to model multiple relations. (4) GAT (Veličković et al. 2017) utilizes graph attention networks (GAT) to aggregate stock embeddings encoded by a GRU on the stock graph. (5) RSR (Feng et al. 2019) combines Temporal Graph Convolution with LSTM to learn the stocks' interactions in a time-sensitive manner.
The original paper proposes two variants, RSR-E, using similarity as the relation weight, and RSR-I, using a neural network for the relation weight; we choose RSR-I, which has better performance, as the baseline. (6) STHAN-SR (Sawhney et al. 2021) models the relations with hypergraph attention combined with a temporal Hawkes-attentive LSTM to tailor a spatiotemporal network architecture to rank stocks. (7) ESTIMATE (Huynh et al. 2023) implements a memory-based mechanism on top of an LSTM network in an attempt to learn individual patterns and employs hypergraph attention to capture non-pairwise correlations, passing messages using the wavelet basis instead of the Fourier basis. (8) Linear only uses simple fully connected layers to predict the final price.

Type   Model        NASDAQ                         NYSE                           S&P500
                    IC     RIC    prec@N  SR       IC     RIC    prec@N  SR       IC     RIC    prec@N  SR
RNN    LSTM         0.032  0.354  0.514   0.892    0.024  0.256  0.512   0.857    0.031  0.186  0.531   1.332
       ALSTM        0.035  0.371  0.522   0.941    0.023  0.276  0.519   0.764    0.029  0.181  0.532   1.298
GNN    RGCN         0.034  0.382  0.516   1.054    0.025  0.275  0.517   0.932    0.028  0.175  0.528   1.359
       GAT          0.035  0.377  0.530   1.233    0.025  0.297  0.521   1.070    0.034  0.191  0.541   1.484
       RSR-I        0.038  0.398  0.531   1.238    0.026  0.284  0.519   0.098    0.033  0.200  0.542   1.437
HGNN   STHAN-SR     0.039  0.451  0.543   1.416    0.029  0.344  0.542   1.228    0.037  0.227  0.549   1.533
       ESTIMATE     0.037  0.444  0.539   1.307    0.030  0.327  0.536   1.115    0.035  0.241  0.553   1.547
MLP    Linear       0.019  0.188  0.505   0.517    0.015  0.163  0.497   0.625    0.016  0.156  0.520   0.674
       StockMixer   0.043  0.501  0.545   1.465    0.029  0.351  0.539   1.454    0.041  0.262  0.551   1.586

Table 2: Comparison results on stock metrics (measured by t-test with p-value < 0.01). The methods for comparison are mainly divided into four types: RNN (Recurrent Neural Network), GNN (Graph Neural Network), HGNN (HyperGraph Neural Network), and MLP (Multi-Layer Perceptron). In the original table, bold and underlining show the best and second-best (SOTA) results, respectively.

Overall Comparison

Table 2 shows the performance of all the comparison methods. Most of the baselines' results on the benchmarks are reported using their original settings, and all of them adopt the same optimization loss to ensure fairness. We have the following key observations: 1) Among univariate methods, both LSTM and the enhanced ALSTM perform worse than all the hybrid architectures, which proves the necessity and effectiveness of relationships in the stock market. As expected, RNN-based encoders are faster than the rest, since they involve no additional computation on relations, and they perform better with a small number of stocks (e.g., S&P500) because of the fewer relationships in the market. 2) Hypergraph architectures appreciably have a better capability of modeling complicated inter-stock dependencies, since a classic graph tends to define pairwise correlations between any two entities. However, the tendency of real share prices does not depend on a few strongly correlated enterprises but on current market attributes. Thus, to some extent, hyperedges gathering the industry information reflect parts of these market attributes. 3) The simple linear model lacks adequate inductive bias and naturally fails, while other MLP-based methods for time series perform even worse without considering the characteristics of stock data. A relatively short lookback window overlooks periodicity, and time deviation leads to varying market values, which are the main causes of severe overfitting. 4) Balancing the lightweight nature of MLP models with the excellent performance of hybrid networks, our proposed StockMixer obtains the best results across most metrics and achieves average relative performance gains of 7.6%, 10.8%, and 10.9% with respect to the two rank metrics and the risk-adjusted return (p < 0.01). Meanwhile, the simple but strong design yields a parameter count second only to the RNNs and much less computation time compared with graph message passing. In addition, a slight performance degradation is observed on NYSE, which has the most stocks (1,737); this may indicate that insufficient inductive bias gradually comes into force when dealing with larger candidate pools.

Ablation (Model Component)   NASDAQ          NYSE
                             IC     RIC      IC     RIC
LSTM                         0.032  0.354    0.024  0.256
w/o Indicator Mixing         0.040  0.465    0.027  0.291
w/o Time Mixing              0.018  0.164    0.016  0.161
w/o Stock Mixing             0.037  0.376    0.026  0.285
LSTM + Stock Mixing          0.041  0.476    0.030  0.307
STHAN-SR                     0.039  0.451    0.029  0.344
StockMixer                   0.043  0.501    0.029  0.351

Table 3: Ablation study over the three components (indicator mixing, time mixing, and stock mixing) on NASDAQ and NYSE.

Ablation Study

Model Component. We verify the effect of the three mixing blocks by removing each of them in turn and comparing with two typical baselines, STHAN-SR and LSTM.
We also replace the previous market module with our stock mixing and evaluate these settings on NASDAQ and NYSE. The results are shown in Table 3. As shown, the different components jointly contribute to the performance. Among the three parts, the mixing of the time dimension matters most, for poor learning of the isolated representations can lead to meaningless relationship modeling. This also explains why StockMixer, as well as earlier architectures, adopts a framework of time before space. Without incorporating indicator features, MLP-Mixer has slightly worse performance, which confirms the importance of mixing indicator features into stock movement. From the model performance in the ablation experiments, the order of impact on the model is time, stock, and indicator. The variant replacing the indicator mixing with a vanilla LSTM can draw level with the state-of-the-art STHAN-SR, which shows that the stock mixing, requiring no prior knowledge, is quite competent at capturing stock market relationships. We can see that the MLP-based encoder is a credible alternative to an RNN and brings a higher performance gain to LSTM. The most likely reason is that an RNN leaves the hidden representation lacking cross-indicator correlation.

Figure 3: Effects of different activation functions on NASDAQ.

Activation Function. We investigate the impact of different activation functions on model performance. Due to the space limit, we depict the results on NASDAQ in Figure 3; similar regularities can be observed on the other datasets. In the original module, the GELU function, despite its excellent performance in multiple computer vision, natural language processing, and speech tasks, did not achieve the best performance in sequence prediction tasks such as stocks. It can be seen that both Sigmoid and tanh perform mediocrely, while ReLU and HardSwish obviously improve model performance over all metrics. This ablation verifies the impact of the non-linear function on the mixing block; we also conducted similar experiments to explore the effect of Layer Normalization, where no significant difference is observed.

Hyperparameter Sensitivity

Figure 4: Sensitivity to parameters T, m, and k ((a) window length; (b) market dimension; (c) scale factor).

The results in Figure 4 show the hyperparameter sensitivity. Due to the space limit, we focus on the most important hyperparameters and select IC as the metric. Lookback window length T. We analyze the prediction performance of StockMixer when varying the length T of the lookback window in Figure 4a. Across all datasets, a moderate window length gains the best performance. A too-short window length drops quickly due to the lack of information, while overlong sequential patterns also fail because of the low information gain from early steps and the increased learning cost per stock. Market dimension m. We consider different values of the hidden dimension m of stock mixing in Figure 4b and observe that the datasets achieve their best performance at distinct m. As shown, the result on S&P500 degrades significantly when m exceeds 10, while that on NYSE does well at around 30. Markets with high capacity prefer a larger m, as the surge in stock numbers brings more complex market representations. Multi-scale factor k. We analyze the variance in profitability depending on the number of scale factors in Figure 4c. It is seen that StockMixer performs generally well, while the best results are obtained with k = 3 scales.
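As a companion to the components ablated above, the following is a minimal PyTorch sketch of the market-aware stock mixing (Eq. 8) and the combined loss (Eq. 9). It is an illustrative reconstruction under our own naming (StockMixing, mixed_loss, and the to_market/to_stock layer names are hypothetical), not the released implementation.

```python
import torch
import torch.nn as nn

class StockMixing(nn.Module):
    # Market-aware stock mixing (Eq. 8): compress the N stock embeddings
    # into m latent market states (M1), then broadcast the states back to
    # the stocks (M2), with a residual connection.
    def __init__(self, num_stocks, m):
        super().__init__()
        self.norm = nn.LayerNorm(num_stocks)
        self.to_market = nn.Linear(num_stocks, m, bias=False)   # M1: stock-to-market
        self.to_stock = nn.Linear(m, num_stocks, bias=False)    # M2: market-to-stock
        self.act = nn.ReLU()

    def forward(self, H):                  # H: (N, d) temporal representations
        Ht = H.t()                         # (d, N): mix along the stock axis
        out = self.to_stock(self.act(self.to_market(self.norm(Ht))))
        return H + out.t()                 # market-influenced embeddings H_hat

def mixed_loss(pred, target, alpha=0.1):
    # Eq. 9: pointwise MSE plus a pairwise hinge term that penalizes pairs
    # whose predicted order disagrees with the true return-ratio order.
    mse = torch.mean((pred - target) ** 2)
    d_pred = pred.unsqueeze(1) - pred.unsqueeze(0)   # entry (i, j) = r_hat_i - r_hat_j
    d_true = target.unsqueeze(1) - target.unsqueeze(0)
    rank = torch.relu(-d_pred * d_true).sum()
    return mse + alpha * rank
```

In the full model, each stock's temporal representation h forms one row of H; the final prediction concatenates H with the output of StockMixing and applies one last fully connected layer, as described in the methodology.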
Conclusion

In this paper, we proposed StockMixer, a simple yet strong architecture with enhanced MLP blocks for stock price forecasting. Instead of using different sub-networks to model indicator, temporal, and stock correlations, StockMixer consists of a lightweight combination of indicator, time, and stock mixing blocks. In particular, the time mixing takes multiple scales into consideration, which constructs a preferable temporal encoder and provides improvements for temporal data. From the market view, the stock mixing decomposes the standard mixing block into stock-to-market and market-to-stock information exchange, which models stock correlations more robustly. Through extensive experiments, we show that StockMixer outperforms all popular benchmarks with average relative performance gains of 7.6%, 10.8%, and 10.9% on three metrics, validating that this architecture offers a powerful alternative to other current methods. In the future, we aim to optimize the hyperparameter selection process and adapt StockMixer to more stock markets.

Acknowledgements

This work is supported by the National Key Research and Development Program of China (2022YFE0200500), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and SJTU Global Strategic Partnership Fund (2021SJTU-HKUST).

References

Alkhatib, K.; Najadat, H.; Hmeidi, I.; and Shatnawi, M. K. A. 2013. Stock price prediction using k-nearest neighbor (kNN) algorithm. International Journal of Business, Humanities and Technology, 3(3): 32–44.
Avenash, R.; and Viswanath, P. 2019. Semantic Segmentation of Satellite Images using a Modified CNN with Hard-Swish Activation Function. In VISIGRAPP (4: VISAPP), 413–420.
Das, A.; Kong, W.; Leach, A.; Sen, R.; and Yu, R. 2023. Long-term Forecasting with TiDE: Time-series Dense Encoder. arXiv preprint arXiv:2304.08424.
Ekambaram, V.; Jati, A.; Nguyen, N.; Sinthong, P.; and Kalagnanam, J. 2023. TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting. arXiv preprint arXiv:2306.09364.
Feng, F.; Chen, H.; He, X.; Ding, J.; Sun, M.; and Chua, T.-S. 2018. Enhancing stock movement prediction with adversarial training. arXiv preprint arXiv:1810.09936.
Feng, F.; He, X.; Wang, X.; Luo, C.; Liu, Y.; and Chua, T.-S. 2019. Temporal relational ranking for stock prediction. ACM Transactions on Information Systems (TOIS), 37(2): 1–30.
Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. Neural Computation, 9(8): 1735–1780.
Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. 2019. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1314–1324.
Huynh, T. T.; Nguyen, M. H.; Nguyen, T. T.; Nguyen, P. L.; Weidlich, M.; Nguyen, Q. V. H.; and Aberer, K. 2023. Efficient integration of multi-order dynamics and internal dynamics in stock movement prediction. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 850–858.
Kamble, R. A. 2017. Short and long term stock trend prediction using decision tree. In 2017 International Conference on Intelligent Computing and Control Systems (ICICCS), 1371–1375. IEEE.
Li, H.; Shen, Y.; and Zhu, Y. 2018. Stock price prediction using attention-based multi-input LSTM. In Asian Conference on Machine Learning, 454–469. PMLR.
Li, L.; Duan, L.; Wang, J.; He, C.; Chen, Z.; Xie, G.; Deng, S.; and Luo, Z. 2023a.
Memory-Enhanced Transformer for Representation Learning on Temporal Heterogeneous Graphs. Data Science and Engineering, 8(2): 98–111.
Li, W.; Bao, R.; Harimoto, K.; Chen, D.; Xu, J.; and Su, Q. 2021. Modeling the stock relation with graph network for overnight stock movement prediction. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 4541–4547.
Li, Z.; Rao, Z.; Pan, L.; and Xu, Z. 2023b. MTS-Mixers: Multivariate time series forecasting via factorized temporal and channel mixing. arXiv preprint arXiv:2302.04501.
Liu, H.; Dai, Z.; So, D.; and Le, Q. V. 2021. Pay attention to MLPs. Advances in Neural Information Processing Systems, 34: 9204–9215.
Nelson, D. M.; Pereira, A. C.; and De Oliveira, R. A. 2017. Stock market's price movement prediction with LSTM neural networks. In 2017 International Joint Conference on Neural Networks (IJCNN), 1419–1426. IEEE.
Nugroho, F. S. D.; Adji, T. B.; and Fauziati, S. 2014. Decision support system for stock trading using multiple indicators decision tree. In 2014 The 1st International Conference on Information Technology, Computer, and Electrical Engineering, 291–296. IEEE.
Piccolo, D. 1990. A distance measure for classifying ARIMA models. Journal of Time Series Analysis, 11(2): 153–164.
Qin, Y.; Song, D.; Chen, H.; Cheng, W.; Jiang, G.; and Cottrell, G. 2017. A dual-stage attention-based recurrent neural network for time series prediction. arXiv preprint arXiv:1704.02971.
Sawhney, R.; Agarwal, S.; Wadhwa, A.; Derr, T.; and Shah, R. R. 2021. Stock selection via spatiotemporal hypergraph attention network: A learning to rank approach. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 497–504.
Tolstikhin, I. O.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. 2021. MLP-Mixer: An all-MLP architecture for vision. Advances in Neural Information Processing Systems, 34: 24261–24272.
Touvron, H.; Bojanowski, P.; Caron, M.; Cord, M.; El-Nouby, A.; Grave, E.; Izacard, G.; Joulin, A.; Synnaeve, G.; Verbeek, J.; et al. 2022. ResMLP: Feedforward networks for image classification with data-efficient training. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4): 5314–5321.
Tsantekidis, A.; Passalis, N.; Tefas, A.; Kanniainen, J.; Gabbouj, M.; and Iosifidis, A. 2017. Forecasting stock prices from the limit order book using convolutional neural networks. In 2017 IEEE 19th Conference on Business Informatics (CBI), volume 1, 7–12. IEEE.
Tseng, F.-M.; Yu, H.-C.; and Tzeng, G.-H. 2002. Combining neural network model with seasonal time series ARIMA model. Technological Forecasting and Social Change, 69(1): 71–87.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Wang, H.; Li, S.; Wang, T.; and Zheng, J. 2021. Hierarchical Adaptive Temporal-Relational Modeling for Stock Trend Prediction. In IJCAI, 3691–3698.
Wang, H.; Wang, T.; Li, S.; Zheng, J.; Guan, S.; and Chen, W. 2022. Adaptive long-short pattern transformer for stock investment selection. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 3970–3977.
Wang, J.-H.; and Leu, J.-Y. 1996. Stock market trend prediction using ARIMA-based neural networks.
In Proceedings of the International Conference on Neural Networks (ICNN'96), volume 4, 2160–2165. IEEE.
Xie, B.; Passonneau, R.; Wu, L.; and Creamer, G. G. 2013. Semantic frames to predict stock price movement. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, 873–883.
Yoo, J.; Soun, Y.; Park, Y.-c.; and Kang, U. 2021. Accurate multivariate stock movement prediction via data-axis transformer with multi-level contexts. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2037–2045.
Yu, T.; Li, X.; Cai, Y.; Sun, M.; and Li, P. 2022. S2-MLP: Spatial-shift MLP architecture for vision. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 297–306.
Zeng, A.; Chen, M.; Zhang, L.; and Xu, Q. 2023. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 11121–11128.
Zhang, T.; Zhang, Y.; Cao, W.; Bian, J.; Yi, X.; Zheng, S.; and Li, J. 2022. Less is more: Fast multivariate time series forecasting with light sampling-oriented MLP structures. arXiv preprint arXiv:2207.01186.
Zou, J.; Zhao, Q.; Jiao, Y.; Cao, H.; Liu, Y.; Yan, Q.; Abbasnejad, E.; Liu, L.; and Shi, J. Q. 2022. Stock Market Prediction via Deep Learning Techniques: A Survey. arXiv preprint arXiv:2212.12717.
2024
933
18,777
Dense Projection for Anomaly Detection

Dazhi Fu1,2, Zhao Zhang3, Jicong Fan1,4* 1 The Chinese University of Hong Kong, Shenzhen, China 2 University of Electronic Science and Technology of China, Chengdu, China 3 Hefei University of Technology, Hefei, China 4 Shenzhen Research Institute of Big Data, Shenzhen, China [email protected], [email protected], [email protected]

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

This work presents a novel method called dense projection for unsupervised anomaly detection (DPAD). The main idea is to maximize the local density of (normal) training data and then determine whether a test point is anomalous or not by evaluating its density. Specifically, DPAD uses a deep neural network to learn locally dense representations of normal data. Since density estimation is computationally expensive, we minimize the local distances of the representations in an iteratively reweighting manner, where the weights are updated adaptively and the parameters are regularized to avoid model collapse (all representations collapsing to a single point). Compared with many state-of-the-art anomaly detection methods, our DPAD does not rely on any assumption about the distribution or spatial structure of the normal data and representations. Moreover, we provide theoretical guarantees for the effectiveness of DPAD. The experiments show that our method DPAD is effective not only in traditional one-class classification problems but also in scenarios with complex normal data composed of multiple classes.

Introduction

Anomaly detection (Chandola, Banerjee, and Kumar 2009; Pang et al. 2021; Ruff et al. 2021; Cai and Fan 2022; Xiao, Sun, and Fan 2023) is an important problem in many areas such as machine learning, computer vision, medical imaging, and other fields (Fan and Wang 2014; Fan, Wang, and Zhang 2017). Basically, anomaly detection is a task that aims to identify anomalous data from normal data within a given dataset. To better simulate real-world scenarios, anomalous data is often considered to be unknown in the training stage, making this task typically an unsupervised learning problem.

In the past decades, numerous anomaly detection methods have been proposed. In general, we can categorize them into three main types: density-based methods, reconstruction-based methods, and one-class classification methods, though there are other types such as the perturbation-learning-based method proposed by Cai and Fan (2022). Density-based methods assume that normal data occur in high-density regions, while anomalies are located in low-density or sparse regions, and utilize probabilistic models to model the distribution of normal data. Thus, popular density estimation methods such as kernel density estimation (KDE) (Parzen 1962) and Gaussian mixture models (GMM) can be applied to anomaly detection. K-nearest neighbors (kNN) is also a density-based method, where the average distance from a test point to its nearest k neighbors is measured as the anomaly score. This method relies heavily on the choice of k and may not be effective in handling high-dimensional data. kNN+ (Sun et al. 2022), utilizing a pretrained neural network to learn feature embeddings of normal data, assumes that the test anomalies are relatively far away from the normal data and detects anomalies by using kNN in the embedding space, which makes it effective when faced with complex data. Breunig et al.
(2000) proposed a method called the local outlier factor (LOF), which relies on the concept that anomalous data often lie in regions of lower density than their surrounding data points. Zong et al. (2018) proposed the deep autoencoding Gaussian mixture model (DAGMM), which combines deep auto-encoders with a GMM, where the energy output by the GMM is used as the anomaly score. Deecke et al. (2019) provided an anomaly detection method, ADGAN, based on generative adversarial networks (GANs) (Goodfellow et al. 2014). ADGAN utilizes a generator to learn the distribution of normal data and a discriminator to detect anomalous data.

Reconstruction-based methods use neural networks such as auto-encoders (AEs) to learn low-dimensional representations that reconstruct the input data and utilize the reconstruction error as a metric to discern anomalies from normal instances. The auto-encoder and its various variants (Hinton and Salakhutdinov 2006; Vincent et al. 2008; Pidhorskyi, Almohsen, and Doretto 2018; Wang et al. 2021) consist of an encoder and a decoder, where the encoder compresses the input data into an effective latent representation, while the decoder reconstructs the original data from the compressed representation. These methods often rely on the assumption that normal data can be reconstructed effectively, while anomalous data exhibit significantly higher reconstruction errors. However, in practice, some anomalous samples can be well reconstructed by auto-encoders, especially when the model is complex.

One-class classification methods train classifiers using only normal data. For instance, the one-class support vector machine (OC-SVM), proposed by Schölkopf et al. (2001), assumes that normal data can be separated from the rest of the data by a hyperplane in a high-dimensional feature space and tries to maximize the margin between the hyperplane and the origin. Tax and Duin (2004) proposed support vector data description (SVDD), which aims to obtain a hypersphere of the smallest volume that encloses the normal data points while keeping the abnormal data points outside the hypersphere. To handle more complex data, Ruff et al. (2018) proposed Deep SVDD, which is based on an integration of deep learning (LeCun, Bengio, and Hinton 2015) and SVDD. Deep SVDD utilizes deep neural networks to learn effective feature embeddings from the normal data while aiming to enclose the normal data within a hypersphere of minimum volume. To ensure that any example reconstructed from the learned representation is normal data, Perera, Nallapati, and Xiang (2019) proposed the one-class GAN (OCGAN), which trains an auto-encoder and a discriminator adversarially. Goyal et al. (2020) presented a method called deep robust one-class classification (DROCC), which assumes that normal data reside in a low-dimensional manifold structure. It constructs anomalous samples in the training stage and classifies a point as anomalous if it is outside the union of balls around the training data. This approach has been shown to be effective on various datasets. Hu et al. (2020) proposed H-Regularization with 2-Norm instance-level normalization (HRN), including a new loss function (called the one-class loss), holistic regularization, and normalization, which can learn directly from a single class of data. Chen et al. (2022) proposed a method called the interpolated Gaussian descriptor (IGD). It learns an effective normality description based on representative normal data instead of fringe normal data.
It is worth noting that density-based methods are not effective in handling high-dimensional data, reconstruction-based methods often suffer from overfitting, and one-class classification methods may not obtain their assumed reliable decision boundaries, such as hyperspheres. To address these limitations all at once, in this work we propose a new density-based method called Dense Projection for Anomaly Detection (DPAD). The main idea of DPAD is to train a neural network to learn a locally dense low-dimensional representation of the normal data by reducing the distances between the representations of similar data (see Figure 1); density-based methods such as KNN can then be applied to the representation to detect anomalies. Our contributions are summarized as follows:

• We propose a novel density-based method called DPAD for unsupervised anomaly detection. DPAD does not rely on any assumption about the shape of the decision boundary between normal and anomalous data and is able to handle high-dimensional data effectively.
• We propose to increase the local density of the region where normal data reside by locally reducing the distances between similar normal data.
• We thoroughly evaluate the effectiveness of dimensionality reduction plus KNN in unsupervised anomaly detection.
• In addition to experiments on classical one-class classification, we conduct challenging experiments where the normal data are composed of multiple classes to further investigate the performance of DPAD and other methods.

Related Work

Before elaborating on our DPAD, we discuss the connections and differences between our DPAD and existing dimensionality reduction methods and DeepSVDD (Ruff et al. 2018).

Dimensionality Reduction + kNN

Dimensionality reduction (DR) methods are commonly used to address challenges such as the curse of dimensionality, data redundancy, and high computational complexity (Fan et al. 2018; Sun, Han, and Fan 2023). The best-known DR method is principal component analysis (PCA) (Jolliffe and Cadima 2016). PCA is a linear DR method and is not effective in handling data with nonlinear structures. There have been many nonlinear DR methods, e.g., LLE (Roweis and Saul 2000), Isomap (Tenenbaum, Silva, and Langford 2000), AE (Hinton and Salakhutdinov 2006), t-SNE (Van der Maaten and Hinton 2008), and UMAP (McInnes, Healy, and Melville 2018). In particular, AEs are more useful for feature extraction, while t-SNE and UMAP are more useful for 2D visualization. An AE solves the following problem:

$$\min_{f,\,g} \ \mathbb{E}_{x \sim \mathcal{D}} \big[ \|x - g(f(x))\|_\ell \big],$$

where $f : \mathbb{R}^D \to \mathbb{R}^d$ and $g : \mathbb{R}^d \to \mathbb{R}^D$ are the encoder and decoder, respectively, and $d < D$. $\|\cdot\|_\ell$ denotes a norm such as the Euclidean norm. We find that DR methods are very helpful for unsupervised anomaly detection. Specifically, the performance of traditional methods such as kNN in the low-dimensional embedding space given by DR methods (e.g., AE+kNN) is much better than their performance in the original high-dimensional data space. Note that our DPAD also reduces the dimensionality of the data, but it is different from existing DR methods: existing DR methods aim to preserve the local or global structure of the data, while our DPAD aims to find a low-dimensional representation with maximum local density. Therefore, the goal of DR in DPAD is consistent with anomaly detection, which means DPAD has the potential to outperform DR+kNN.
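To illustrate the DR+kNN baselines discussed here, the short PyTorch sketch below scores test points by their average distance to the k nearest training embeddings; the embeddings can come from any fitted DR method (PCA, an auto-encoder, t-SNE, UMAP). The function name is our own assumption.

```python
import torch

def knn_anomaly_scores(train_emb, test_emb, k=5):
    # DR+kNN scoring: given low-dimensional embeddings of the (normal)
    # training data and the test data, the anomaly score of a test point
    # is its average distance to the k nearest training embeddings.
    d = torch.cdist(test_emb, train_emb)     # pairwise Euclidean distances
    knn = d.topk(k, largest=False).values    # k smallest distances per test point
    return knn.mean(dim=1)                   # higher score = more anomalous
```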
DeepSVDD

DeepSVDD (Ruff et al. 2018) aims to enclose the representations of normal data within a hypersphere of minimum volume by solving the following problem:

$$\underset{\mathcal{W}}{\text{minimize}} \ \frac{1}{n} \sum_{i=1}^{n} \|\phi(x_i; \mathcal{W}) - c\|^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \|W_l\|_F^2,$$

where $c$ is a pre-defined hyper-spherical center, $\mathcal{W} = \{W_1, \ldots, W_L\}$ denotes the parameters of layers $l \in \{1, \ldots, L\}$ of the neural network $\phi(x; \mathcal{W})$, and $\lambda$ is a hyperparameter that controls the weight decay regularizer. Deep SVDD is able to compress the volume of normal data. This is a global compression, and the ideal decision boundary is the hypersphere centered at $c$. However, in practice, when the dimension of the data is high, the number of data points is small, or the structure of the data is complex, it is difficult to obtain a compact hypersphere; in other words, it is difficult to include all normal samples in a small hypersphere. In contrast, our DPAD is a local compression method and is able to adapt to data with complex structures.

Figure 1: DPAD trains a neural network to learn dense low-dimensional representations of the training data. The black and red points represent normal data and anomalous data, respectively. After training, we can use kNN or other density estimation methods to judge whether a new data point is anomalous or not.

Proposed Method

Let $\mathcal{D} = \{x_1, x_2, \ldots, x_n\}$ be a set of $D$-dimensional training data, in which all or at least most of the samples are normal. Our goal is to learn a model from $\mathcal{D}$ to determine whether a new sample is normal or not. We propose to find a projection $f : \mathbb{R}^D \to \mathbb{R}^d$, where $d < D$, that maximizes the density of the data, i.e.,

$$\underset{f}{\text{maximize}} \ \ \mathrm{density}(\{f(x)\}_{x \in \mathcal{D}}) \quad \text{subject to} \ \ f \in \mathcal{C}. \quad (1)$$

In (1), $\mathcal{C}$ is some constraint set to avoid mapping all samples to a single point. Note that estimating the density is computationally expensive. Instead, we replace the density with the local distances between the data points and solve

$$\underset{\mathcal{W}}{\text{minimize}} \ \sum_{i=1}^{n} \sum_{j \in \mathcal{N}_i} \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2 \quad \text{subject to} \ \ \mathcal{W} \in \mathcal{C}_{\mathcal{W}}, \quad (2)$$

where $f_{\mathcal{W}}$ is an $L$-layer neural network parameterized by $\mathcal{W} = \{W_1, W_2, \ldots, W_L\}$, $\mathcal{C}_{\mathcal{W}}$ is some constraint set for the network parameters, and $\mathcal{N}_i$ denotes a local neighborhood of $x_i$. Nevertheless, determining $\{\mathcal{N}_i\}_{i=1}^{n}$ still suffers from the curse of dimensionality, is sensitive to noise and outliers, and requires additional effort or domain knowledge. To tackle these issues, we propose to determine $\{\mathcal{N}_i\}_{i=1}^{n}$ adaptively and dynamically. Specifically, we solve

$$\underset{\mathcal{W}}{\text{minimize}} \ \sum_{i=1}^{n} \sum_{j=1}^{n} \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2 \cdot e_{ij}^{\mathcal{W}} \quad \text{subject to} \ \ \mathcal{W} \in \mathcal{C}_{\mathcal{W}}, \quad (3)$$

where $e_{ij}^{\mathcal{W}} = \exp\big(-\gamma \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2\big)$ and $\gamma > 0$ is a hyperparameter. The role of $e_{ij}^{\mathcal{W}}$ is explained as follows.

• When the projected samples $f_{\mathcal{W}}(x_i)$ and $f_{\mathcal{W}}(x_j)$ are close to each other, $e_{ij}^{\mathcal{W}}$ is close to 1, provided that $\gamma$ is not too large. Then (3) will make an effort to minimize the distance between $f_{\mathcal{W}}(x_i)$ and $f_{\mathcal{W}}(x_j)$.
• When the projected samples $f_{\mathcal{W}}(x_i)$ and $f_{\mathcal{W}}(x_j)$ are far away from each other, $e_{ij}^{\mathcal{W}}$ is close to 0, provided that $\gamma$ is not too small. Then (3) will make less or even no effort to minimize the distance between $f_{\mathcal{W}}(x_i)$ and $f_{\mathcal{W}}(x_j)$.
• The setting of $\gamma$ is important but not crucial because it can be absorbed into $f_{\mathcal{W}}$ and is thus learned adaptively and implicitly. However, the setting of $\gamma$ affects the network training because it determines the initial weights $\{e_{ij}^{\mathcal{W}}\}$ once the network parameters are initialized.
Now let us discuss the constraint set $\mathcal{C}_{\mathcal{W}}$. Recall that the constraint is needed to avoid the case in which all projected samples collapse to a single point, which would lose the original information of the data although the density attains its maximum. A trivial case is that all weights are zero. Therefore, we need to ensure that the norms of the weight matrices are far from zero. Thus, the constraint in (3) is designed as

$$R(W_l) \geq \alpha_l, \quad l = 1, 2, \ldots, L, \quad (4)$$

where the $\alpha_l$ are positive constants far from zero. For instance, $R(W_l)$ can be the Frobenius norm $\|W_l\|_F$, the $\ell_1$ norm $\|W_l\|_1$, or the spectral norm $\|W_l\|_2$. As mentioned in (Yoshida and Miyato 2017), if the weight matrices used in a neural network have large spectral norms, the network can be sensitive to perturbations of the training data and test data, leading to poor generalization ability. Hence, we may choose $R(W_l) = \|W_l\|_2$, which, however, is difficult to optimize since its computation is based on the singular value decomposition. Note that $\|W_l\|_2 \leq \|W_l\|_F$ holds for any $W_l$. Thus, minimizing $\|W_l\|_F$, which is much easier, implicitly reduces $\|W_l\|_2$ and hence improves the generalization ability. To further facilitate the optimization, we use regularization instead of constraints on $\mathcal{W}$. Then the final optimization problem is formulated as follows:

$$\underset{\mathcal{W}}{\text{minimize}} \ \sum_{i=1}^{n} \sum_{j=1}^{n} \Big( \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2 \exp\big(-\gamma \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2\big) \Big) + \lambda \sum_{l=1}^{L} \|W_l\|_F^{-1}, \quad (5)$$

where $\lambda > 0$ is a hyperparameter, $f_{\mathcal{W}}(x) = W_L(h(\cdots h(W_2 h(W_1 x)) \cdots))$, and $h$ denotes the activation function. Without loss of generality and for convenience, we assume that all activations are the same.

Remark 1. It should be pointed out that the neural network $f_{\mathcal{W}}$ cannot include bias terms. The reason is that unbounded bias terms, which may be learned during training, can make the activation functions saturated (e.g., sigmoid) or infinite (e.g., ReLU), which further results in model collapse, namely, all data points being mapped to the same point.

Our dimensionality reduction (DR) method is novel. As shown by (5), it is very different from existing DR methods that aim to compress data with low reconstruction error (e.g., PCA (Jolliffe and Cadima 2016) and auto-encoders) or preserve local structures of data (e.g., LLE (Roweis and Saul 2000) and t-SNE (Van der Maaten and Hinton 2008)). Our DR method aims to improve the density (or compactness) of the data in the low-dimensional space, which, as shown by the experiments, is useful for anomaly detection.

When the network $f_{\mathcal{W}}$ is well trained, we can use a density-estimation-based method, such as KDE or LOF (Breunig et al. 2000), to conduct anomaly detection. However, KDE and LOF are time-consuming when $n$ is large, and our method with LOF or KDE is experimentally not as effective as with kNN. Therefore, we propose to use kNN to detect anomalies. To be more precise, given a test sample $x_{\mathrm{new}}$, we compute

$$z_{\mathrm{new}} = f_{\mathcal{W}}(x_{\mathrm{new}}). \quad (6)$$

For $z_{\mathrm{new}}$, we find its nearest $k$ neighbors $\{f_{\mathcal{W}}(x_{\mathrm{new},1}), f_{\mathcal{W}}(x_{\mathrm{new},2}), \ldots, f_{\mathcal{W}}(x_{\mathrm{new},k})\} \subseteq \{f_{\mathcal{W}}(x) : x \in \mathcal{D}\}$. After that, we compute the distance from these $k$ neighbors to $z_{\mathrm{new}}$ and utilize this distance to measure the anomalousness of $z_{\mathrm{new}}$:

$$\text{anomaly score} = \sum_{j=1}^{k} \|z_{\mathrm{new}} - f_{\mathcal{W}}(x_{\mathrm{new},j})\|^2. \quad (7)$$

In general, we train a neural network with our objective function (5) to learn dense representations of the normal data (1). For test data, we utilize the trained neural network to generate a representation of the test data (6).
Subsequently, we find its nearest $k$ representations generated from the training data and calculate the sum of distances from the test representation to these $k$ nearest neighbors as the anomaly score (7). For convenience, we call our method (5) Dense Projection based Anomaly Detection (DPAD).

Optimization

Training Settings. In the training stage, to ensure that the distance between any two representations of the training data is fully considered and optimized, we refrain from using mini-batches, which may lead the model to repeatedly consider the distances between representations generated by training data within the same batch, thereby overlooking the distances between representations of training data from different batches, which may be more similar to each other. Moreover, the setting of the hyperparameter $\gamma$ controls the initialization of the weights $e_{ij}^{\mathcal{W}}$ and thus determines whether the model will shrink the distance between $f_{\mathcal{W}}(x_i)$ and $f_{\mathcal{W}}(x_j)$ at the beginning of training. An excessively large value of $\gamma$ would lead the model to attempt to increase the distances between all points in order to minimize the objective function, as we observe that the objective function decreases with increasing distance $\|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|$ when $\|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2 \geq 1/\gamma$. To handle this problem, $e_{ij}^{\mathcal{W}}$ is excluded from the backpropagation process, so that it acts only as a weight on the distances, and we set $\gamma$ to a relatively small value. The optimization details are presented in Algorithm 1.

Algorithm 1: Training and testing processes of DPAD
Input: $\mathcal{D} = \{x_1, x_2, \ldots, x_n\}$, $m$, $\gamma \geq 0$, $\lambda \geq 0$, $k \geq 1$
Training stage of DPAD:
for $B = 1, \ldots, m$ do
    $e_{ij}^{\mathcal{W}} = \exp\big(-\gamma \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2\big)$.detach()
    Dist_sum $= \sum_{i=1}^{n} \sum_{j=1}^{n} \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2 \, e_{ij}^{\mathcal{W}}$
    Loss $=$ Dist_sum $+\, \lambda \sum_{l=1}^{L} \|W_l\|_F^{-1}$
    $\mathcal{W} = \mathcal{W} - $ Gradient-Step(Loss)
end for
Testing stage of DPAD:
Input test data $x_{\mathrm{new}}$
Compute $z_{\mathrm{new}} = f_{\mathcal{W}}(x_{\mathrm{new}})$
Find the nearest $k$ neighbors of $z_{\mathrm{new}}$ from $\{f_{\mathcal{W}}(x) : x \in \mathcal{D}\}$: $\{f_{\mathcal{W}}(x_{\mathrm{new},1}), f_{\mathcal{W}}(x_{\mathrm{new},2}), \ldots, f_{\mathcal{W}}(x_{\mathrm{new},k})\}$
Anomaly Score $= \sum_{j=1}^{k} \|z_{\mathrm{new}} - f_{\mathcal{W}}(x_{\mathrm{new},j})\|^2$

Space and Time Complexity

Suppose $W_l \in \mathbb{R}^{d_l \times d_{l-1}}$, $l = 1, 2, \ldots, L$, and consider a mini-batch of $b$ samples, where $d_L = d$ and $d_0 = D$. The time complexity per iteration (including the forward and backward propagations) is $O(b \sum_{l=1}^{L} d_{l-1} d_l)$ and the space complexity is $O(b \sum_{l=1}^{L+1} d_{l-1} + \sum_{l=1}^{L} d_{l-1} d_l)$. In the testing stage, for a test sample, the time complexity is $O(\sum_{l=1}^{L} d_{l-1} d_l + dn)$, in which the first part comes from the computation of $f_{\mathcal{W}}(x_{\mathrm{new}})$ and the second part from kNN. In sum, the time and space complexities of the proposed method DPAD are both linear in the number of training data. Therefore, DPAD can be applied to large datasets.
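To make the training objective and scoring of Algorithm 1 concrete, a minimal PyTorch sketch follows. It is an illustrative reconstruction under stated assumptions, not the authors' released code: the encoder widths and the hyperparameter defaults are hypothetical (274 input features corresponds to the Arrhythmia dataset in Table 1), and f.parameters() returns exactly the weight matrices because the encoder is bias-free, as Remark 1 requires.

```python
import torch
import torch.nn as nn

# Bias-free MLP encoder f_W (Remark 1: no bias terms, to avoid collapse).
# Layer widths are hypothetical placeholders.
encoder = nn.Sequential(
    nn.Linear(274, 128, bias=False), nn.ReLU(),
    nn.Linear(128, 32, bias=False),
)

def dpad_loss(f, X, gamma=0.1, lam=1.0):
    # Objective (5): pairwise squared distances weighted by detached e_ij
    # (excluded from backpropagation, as in Algorithm 1), plus the
    # regularizer lam * sum_l ||W_l||_F^{-1} keeping weights away from zero.
    Z = f(X)                                  # (n, d) dense representations
    dist2 = torch.cdist(Z, Z) ** 2            # squared pairwise distances
    e = torch.exp(-gamma * dist2).detach()    # adaptive weights, no gradient
    reg = sum(1.0 / torch.norm(W) for W in f.parameters())
    return (dist2 * e).sum() + lam * reg

def anomaly_scores(f, X_test, X_train, k=5):
    # Eq. (7): sum of distances from each test representation to its
    # k nearest training representations.
    with torch.no_grad():
        d = torch.cdist(f(X_test), f(X_train))
        return d.topk(k, largest=False).values.sum(dim=1)
```

Consistent with the training settings above, the loss is computed over the full batch of training data rather than mini-batches.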
Theoretical Analysis

First we provide a Lipschitz constant $\tau_f$ for $f_{\mathcal{W}}$.

Lemma 1. Given the neural network $f_{\mathcal{W}}(x) = W_L h(\cdots h(W_2 h(W_1 x))\cdots)$, denote by $\rho$ the Lipschitz constant of $h$ and suppose $\|W_l\|_2 \le \beta_l$. Let $\tau_f = \rho^{L-1} \prod_{l=1}^{L} \beta_l$. Then for any $x_1$ and $x_2$, the following inequality holds:

$$\|f_{\mathcal{W}}(x_1) - f_{\mathcal{W}}(x_2)\| \le \tau_f \|x_1 - x_2\|. \tag{8}$$

The lemma, proved in the appendix, shows the sensitivity of $f_{\mathcal{W}}$ to the distances between any two data points in $\mathcal{D}$. The following lemma gives an upper bound on the spectral norm of a random Gaussian matrix.

Lemma 2 (Bandeira and Van Handel 2016). Given a $d \times d$ random Gaussian matrix $N$ with $N_{ij} \sim \mathcal{N}(0, \sigma_{ij}^2)$, the following inequality holds:

$$\|N\|_2 \le \max_i \sqrt{\sum_j \sigma_{ij}^2} + \max_{ij} |\sigma_{ij}| \sqrt{\log d}. \tag{9}$$

Based on Lemma 1 and Lemma 2, we have the following theorem (proved in the appendix), which provides a lower bound for the weight $e^{\mathcal{W}}_{ij}$ at the random initialization stage of $f_{\mathcal{W}}$.

Theorem 1. Let $\mathcal{W}^{(0)}$ be the initialized parameters drawn from $\mathcal{N}(0, \sigma^2)$. Denote by $d_l \times d_{l-1}$ the shape of $W_l$ and let $\bar{d}_l = \max(d_l, d_{l-1})$, $l = 1, 2, \ldots, L$. Then the following inequality holds:

$$e^{\mathcal{W}^{(0)}}_{ij} \ge \exp\!\left(-\gamma \rho^{2L-2} \sigma^{2L} \|x_i - x_j\|^2 \prod_{l=1}^{L} \left(\sqrt{\bar{d}_l} + \sqrt{\log \bar{d}_l}\right)^2\right). \tag{10}$$

The theorem indicates that the initialized $f_{\mathcal{W}}$ is able to preserve the local similarity of the original data in $\mathcal{D}$, provided that the network is not too complex. Therefore, the problem of the network shrinking the distances between representations of dissimilar data at the beginning of training will not occur.

Experiments

Datasets and Baselines

We choose CIFAR-10 (Krizhevsky, Hinton et al. 2009) and Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017) as our image datasets, and Arrhythmia (Rayana 2016), Abalone (Dua, Graff et al. 2017), Campaign (Han et al. 2022), and MAGIC Gamma (Han et al. 2022) as our tabular datasets to test the proposed method DPAD. The statistics of the datasets are given in Table 1. We compare DPAD with classical methods, dimensionality reduction methods followed by kNN, and state-of-the-art AD methods. It is noteworthy that we also used LOF as a detection method following DR, but the performance was much worse than with kNN.

| Datasets | # Samples | # Features | # Classes |
| CIFAR-10 | 60000 | 32 × 32 × 3 | 10 |
| Fashion-MNIST | 70000 | 28 × 28 | 10 |
| Arrhythmia | 452 | 274 | 2 |
| Abalone | 1920 | 8 | 2 |
| Campaign | 41188 | 62 | 2 |
| MAGIC Gamma | 19020 | 10 | 2 |

Table 1: Statistics of the datasets.

• Classical methods: kNN, k-Means (MacQueen et al. 1967), LOF (Breunig et al. 2000), OCSVM (Schölkopf et al. 2001), isolation forest (IF) (Liu, Ting, and Zhou 2008), KDE (Parzen 1962), and DAE (Vincent et al. 2008).
• Dimensionality reduction methods: PCA (Jolliffe and Cadima 2016), t-SNE (Van der Maaten and Hinton 2008), and UMAP (McInnes, Healy, and Melville 2018).
• State-of-the-art methods: E2E-AE and DAGMM (Zong et al. 2018), DCN (Caron et al. 2018), ADGAN (Deecke et al. 2019), DSVDD (Ruff et al. 2018), OCGAN (Perera, Nallapati, and Xiang 2019), TQM (Wang, Sun, and Yu 2019), GOAD (Bergman and Hoshen 2020), DROCC (Goyal et al. 2020), HRN (Hu et al. 2020), SCADN (Yan et al. 2021), NeuTraL AD (Qiu et al. 2021), GOCC (Shenkar and Wolf 2021), and IGD (Chen et al. 2022).

Implementation and Evaluation Details

In this section, we introduce the experimental settings and describe the implementation details of the proposed method. For the two image datasets, we use a LeNet-based CNN as our basic network structure. We conduct 10 one-class classification tasks, each time choosing one of the 10 classes as the normal class. To further evaluate the performance of our method, we conduct an additional set of challenging experiments, in which we select 9 of the 10 classes as the normal classes for training, while the testing samples remain the same as before. For the compared methods, we take their performance directly from their papers, except for k-Means, DROCC, and the DR+kNN methods, for which we run the officially released code or our own code to obtain the results. We run the proposed method 5 times, each with 100 epochs of optimization, and report the average result.
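For concreteness, the snippet below sketches the one-class data split used in these experiments (train on a single normal class, test on all classes), assuming torchvision is available; it illustrates the protocol, not the authors' released code.

```python
import numpy as np
from torchvision.datasets import CIFAR10

normal_class = 0  # e.g., "Airplane" as the normal class
train = CIFAR10(root="data", train=True, download=True)
test = CIFAR10(root="data", train=False, download=True)

train_idx = np.where(np.array(train.targets) == normal_class)[0]
X_train = train.data[train_idx]                     # normal samples only
X_test = test.data                                  # all classes at test time
y_test = (np.array(test.targets) != normal_class)   # 1 = anomalous, 0 = normal
```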
To maintain consistency with previous methods, we use the AUC metric to evaluate performance on the image datasets and the F1 score on the tabular datasets.

Results on Image Datasets

Table 2 and Table 3 summarize and compare the AUC performance of our method and other methods on every class of the CIFAR-10 and Fashion-MNIST datasets.

| Normal class | Airplane | Automobile | Bird | Cat | Deer | Dog | Frog | Horse | Ship | Truck |
| (no DR) kNN | 91.2 | 98.6 | 88.5 | 93.6 | 89.4 | 90.0 | 81.7 | 98.4 | 88.5 | 96.5 |
| (no DR) k-Means | 90.3 | 98.6 | 88.5 | 93.8 | 88.1 | 90.5 | 82.4 | 98.1 | 89.8 | 97.0 |
| (no DR) LOF (Breunig et al. 2000) | 66.6 | 45.3 | 64.1 | 51.6 | 67.5 | 51.7 | 67.7 | 52.9 | 69.3 | 41.6 |
| OCSVM (Schölkopf et al. 2001) | 61.6 | 63.8 | 50.0 | 55.9 | 66.0 | 62.4 | 74.7 | 62.6 | 74.9 | 75.9 |
| KDE (Parzen 1962) | 61.2 | 64.0 | 50.1 | 56.4 | 66.2 | 62.4 | 74.9 | 62.6 | 75.1 | 76.0 |
| IF (Liu, Ting, and Zhou 2008) | 66.1 | 43.7 | 64.3 | 50.5 | 74.3 | 52.3 | 70.7 | 53.0 | 69.1 | 53.2 |
| DAE (Vincent et al. 2008) | 41.1 | 47.8 | 61.6 | 56.2 | 72.8 | 51.3 | 68.8 | 49.7 | 48.7 | 37.8 |
| DAGMM (Zong et al. 2018) | 41.4 | 57.1 | 53.8 | 51.2 | 52.2 | 49.3 | 64.9 | 55.3 | 51.9 | 54.2 |
| ADGAN (Deecke et al. 2019) | 63.2 | 52.9 | 58.0 | 60.6 | 60.7 | 65.9 | 61.1 | 63.0 | 74.4 | 64.2 |
| DSVDD (Ruff et al. 2018) | 61.7 | 65.9 | 50.8 | 59.1 | 60.9 | 65.7 | 67.7 | 67.3 | 75.9 | 73.1 |
| OCGAN (Perera, Nallapati, and Xiang 2019) | 75.7 | 53.1 | 64.0 | 62.0 | 72.3 | 62.0 | 72.3 | 57.5 | 82.0 | 55.4 |
| TQM (Wang, Sun, and Yu 2019) | 40.7 | 53.1 | 41.7 | 58.2 | 39.2 | 62.6 | 55.1 | 63.1 | 48.6 | 58.7 |
| DROCC* (Goyal et al. 2020) | 79.2 | 74.9 | 68.3 | 62.3 | 70.3 | 66.1 | 68.1 | 71.3 | 62.3 | 76.6 |
| HRN (Hu et al. 2020) | 77.3 | 69.9 | 60.6 | 64.4 | 71.5 | 67.4 | 77.4 | 64.9 | 82.5 | 77.3 |
| AE+kNN* | 77.7 | 62.7 | 59.5 | 57.6 | 65.3 | 58.3 | 75.5 | 62.8 | 79.7 | 66.4 |
| PCA+kNN* | 68.7 | 44.7 | 68.1 | 51.0 | 77.0 | 49.6 | 73.4 | 51.3 | 69.0 | 43.7 |
| t-SNE+kNN* | 78.4 | 72.1 | 68.3 | 66.7 | 70.3 | 68.8 | 75.5 | 70.3 | 82.0 | 72.6 |
| UMAP+kNN* | 75.6 | 66.7 | 63.0 | 60.1 | 64.9 | 64.0 | 73.4 | 63.8 | 77.9 | 67.2 |
| DPAD | 78.0 (0.3) | 75.0 (0.2) | 68.1 (0.5) | 66.7 (0.4) | 77.9 (0.8) | 68.6 (0.3) | 81.2 (0.4) | 74.8 (0.2) | 79.1 (1.0) | 76.1 (0.2) |

Table 2: Average AUC(%) of one-class anomaly detection on CIFAR-10. * means we reproduced the results using the officially released code. The best two results are marked in bold.

| Normal class | T-shirt | Trouser | Pullover | Dress | Coat | Sandal | Shirt | Sneaker | Bag | Ankle boot |
| (no DR) kNN | 91.2 | 98.6 | 88.5 | 93.6 | 89.4 | 90.0 | 81.7 | 98.4 | 88.5 | 96.5 |
| (no DR) k-Means | 90.3 | 98.6 | 88.5 | 93.8 | 88.1 | 90.5 | 82.4 | 98.1 | 89.8 | 97.0 |
| (no DR) LOF (Breunig et al. 2000) | 80.6 | 94.6 | 82.4 | 88.6 | 91.0 | 88.6 | 78.6 | 96.4 | 75.8 | 97.4 |
| OCSVM (Schölkopf et al. 2001) | 86.1 | 93.9 | 85.6 | 85.9 | 84.6 | 81.3 | 78.6 | 97.6 | 79.5 | 97.8 |
| KDE (Parzen 1962) | 68.7 | 91.0 | 86.0 | 91.9 | 84.6 | 88.5 | 58.7 | 94.1 | 69.3 | 90.1 |
| IF (Liu, Ting, and Zhou 2008) | 91.0 | 97.8 | 87.2 | 93.2 | 90.5 | 93.0 | 80.2 | 98.2 | 88.7 | 95.4 |
| DAE (Vincent et al. 2008) | 86.7 | 97.8 | 80.8 | 91.4 | 86.5 | 92.1 | 73.8 | 97.7 | 78.2 | 96.3 |
| DAGMM (Zong et al. 2018) | 42.1 | 55.1 | 50.4 | 57.0 | 26.9 | 70.5 | 48.3 | 83.5 | 49.9 | 34.0 |
| ADGAN (Deecke et al. 2019) | 89.9 | 81.9 | 87.6 | 91.2 | 86.5 | 89.6 | 74.3 | 97.2 | 89.0 | 97.1 |
| DSVDD (Ruff et al. 2018) | 79.1 | 94.0 | 83.0 | 82.9 | 87.0 | 80.3 | 74.9 | 94.2 | 79.1 | 93.2 |
| OCGAN (Perera, Nallapati, and Xiang 2019) | 85.5 | 93.4 | 85.0 | 88.1 | 85.8 | 88.5 | 77.5 | 93.9 | 82.7 | 97.8 |
| TQM (Wang, Sun, and Yu 2019) | 92.2 | 95.8 | 89.9 | 93.0 | 92.2 | 89.4 | 84.4 | 98.0 | 94.5 | 98.3 |
| DROCC* (Goyal et al. 2020) | 88.1 | 97.7 | 87.6 | 87.7 | 87.2 | 91.0 | 77.1 | 95.3 | 82.7 | 95.9 |
| HRN (Hu et al. 2020) | 92.7 | 98.5 | 88.5 | 93.1 | 92.1 | 91.3 | 79.8 | 99.0 | 94.6 | 98.8 |
| AE+kNN* | 86.9 | 98.4 | 78.9 | 93.3 | 83.1 | 92.2 | 79.3 | 98.4 | 86.5 | 94.5 |
| PCA+kNN* | 92.8 | 99.0 | 90.0 | 95.4 | 91.1 | 92.6 | 85.1 | 98.7 | 91.3 | 96.9 |
| t-SNE+kNN* | 95.2 | 98.3 | 92.2 | 97.1 | 91.6 | 98.0 | 84.1 | 96.7 | 98.0 | 97.9 |
| UMAP+kNN* | 94.3 | 98.0 | 92.1 | 96.9 | 92.5 | 97.4 | 85.6 | 97.3 | 98.8 | 98.2 |
| DPAD | 93.7 (0.2) | 98.7 (0.0) | 90.3 (0.0) | 94.7 (0.3) | 92.2 (0.1) | 93.9 (0.8) | 82.3 (0.1) | 98.7 (0.1) | 94.2 (0.6) | 98.1 (0.2) |

Table 3: Average AUC(%) of one-class anomaly detection on Fashion-MNIST. * means we reproduced the results using the officially released code. The best two results are marked in bold.

Based on the performance, we draw the following observations:

• In comparison with classical methods such as OCSVM and IF, our approach consistently achieves higher AUC scores for all classes on both datasets. An interesting phenomenon is that, in some classes, IF outperforms all deep methods other than DPAD.
• For DR methods, UMAP+kNN outperforms most methods in most classes on Fashion-MNIST; DR methods excel at handling data with a simple structure such as Fashion-MNIST. It is worth noting, however, that among all other methods, DPAD has the smallest gap to the DR methods in most classes. On complex data such as CIFAR-10, DPAD outperforms all DR methods by significant margins.

• As for deep learning based methods, DPAD outperforms methods such as DSVDD and OCGAN in all classes and attains one of the two highest scores in most classes. Although DPAD does not rank among the top two in some specific classes, Table 4 shows that, in terms of average performance over all classes, our method is the best among the state-of-the-art methods. In contrast to DSVDD and DROCC, which assume that normal samples lie in a hypersphere in the embedding space, our method makes no assumption about the specific shape formed by the training data, which yields better performance on complex data structures.

We employ t-SNE (Van der Maaten and Hinton 2008) to visualize the representations formed by the neural network in our method and in DSVDD, and by the encoder in AE+kNN. Specifically, we visualize the training data, normal test data, and anomalous test data with different colors. Figure 2 shows the visualization of the class "Pullover" in Fashion-MNIST.

[Figure 2: t-SNE visualization of the learned embedding space of the "Pullover" class of Fashion-MNIST. Panels: (a) DeepSVDD, AUC 83.0; (b) AE+kNN, AUC 78.9; (c) DPAD, AUC 90.3. Points marked in green, blue, and red correspond to training data, normal test data, and anomalous test data, respectively.]

From this figure, we have the following observations:

• First, our method indeed compacts the training data in the embedding space; there is not only a significant overlap between the normal test data and the training data but also a clear separation between the training data and the anomalous test data. We conclude that our method obtains a clear decision boundary to distinguish normal from anomalous data.

• Second, compared to DSVDD and AE, our method learns a better decision boundary between normal and abnormal data, which is consistent with the experimental results above.
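The visualization can be reproduced along the following lines, assuming the representations are available as numpy arrays (z_train, z_test_normal, z_test_anom are hypothetical names); this is an illustrative sketch rather than the authors' plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Project all representations jointly into 2D and color by split.
Z = np.concatenate([z_train, z_test_normal, z_test_anom])
Z2 = TSNE(n_components=2, random_state=0).fit_transform(Z)
n1, n2 = len(z_train), len(z_test_normal)
plt.scatter(*Z2[:n1].T, c="green", s=4, label="training data")
plt.scatter(*Z2[n1:n1 + n2].T, c="blue", s=4, label="test normal")
plt.scatter(*Z2[n1 + n2:].T, c="red", s=4, label="test anomalous")
plt.legend()
plt.show()
```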
Table 4 shows the average performance on CIFAR-10 and Fashion-MNIST over all 10 classes. The two latest methods, SCADN and IGD (Scratch), are also compared in the table, although their performance on each single class was not reported in their papers.

| Datasets | CIFAR-10 | F-MNIST |
| (no DR) kNN* | 59.5 | 91.6 |
| (no DR) k-Means* | 62.0 | 91.7 |
| (no DR) LOF (Breunig et al. 2000) | 57.8 | 87.4 |
| OCSVM (Schölkopf et al. 2001) | 64.7 | 87.0 |
| IF (Liu, Ting, and Zhou 2008) | 59.7 | 91.5 |
| KDE (Parzen 1962) | 64.9 | 82.3 |
| DAE (Vincent et al. 2008) | 53.5 | 88.1 |
| DAGMM (Zong et al. 2018) | 53.1 | 51.7 |
| ADGAN (Deecke et al. 2019) | 62.4 | 88.4 |
| DSVDD (Ruff et al. 2018) | 64.8 | 84.7 |
| OCGAN (Perera et al. 2019) | 65.6 | 87.8 |
| TQM (Wang, Sun, and Yu 2019) | 52.1 | 92.7 |
| DROCC* (Goyal et al. 2020) | 69.9 | 89.0 |
| HRN (Hu et al. 2020) | 71.3 | 92.8 |
| SCADN (Yan et al. 2021) | 66.9 | - |
| IGD (Chen et al. 2022) | 74.3 | 92.0 |
| AE+kNN* | 65.2 | 89.1 |
| PCA+kNN* | 58.7 | 93.3 |
| t-SNE+kNN* | 72.3 | 94.9 |
| UMAP+kNN* | 67.7 | 95.2 |
| DPAD | 74.5 | 93.7 |

Table 4: Average AUCs(%) over all 10 classes on CIFAR-10 and Fashion-MNIST. Note that the best two results are marked in bold.

From the table, we draw the following observations:

• On Fashion-MNIST, classical methods and dimensionality reduction methods demonstrate excellent performance, with UMAP+kNN surpassing all state-of-the-art methods. We attribute this phenomenon to the comparatively simple data structure of Fashion-MNIST. Although DPAD does not achieve the best performance, it remains the state-of-the-art method whose performance is closest to that of UMAP+kNN.

• On CIFAR-10, due to its complex data structure, the state-of-the-art methods demonstrate superior performance compared to classical and dimensionality reduction methods. DPAD outperforms all other methods, which verifies its effectiveness in handling data of high complexity.

Results on Tabular Datasets

In Table 5, we summarize the F1-scores of all methods on the four tabular datasets.

| Datasets | Abalone | Arrhythmia | Campaign | MAGIC Gamma |
| (no DR) kNN* | 61.5 ± 0.0 | 63.8 ± 0.0 | 72.1 ± 0.0 | 75.2 ± 0.0 |
| (no DR) k-Means* | 61.8 ± 0.0 | 62.8 ± 0.0 | 72.0 ± 0.0 | 70.6 ± 0.0 |
| (no DR) LOF* (Breunig et al. 2000) | 33.0 ± 1.0 | 51.0 ± 1.0 | 64.0 ± 0.0 | 68.0 ± 0.0 |
| OCSVM* (Schölkopf et al. 2001) | 48.0 ± 0.0 | 46.0 ± 0.0 | 67.0 ± 0.0 | 67.0 ± 0.0 |
| E2E-AE (Zong et al. 2018) | 33.0 ± 3.0 | 45.0 ± 3.0 | - | - |
| DCN (Caron et al. 2018) | 40.0 ± 1.0 | 38.0 ± 3.0 | - | - |
| DAGMM (Zong et al. 2018) | 20.0 ± 3.0 | 49.0 ± 3.0 | - | - |
| DSVDD (Ruff et al. 2018) | 62.0 ± 1.0 | 54.0 ± 1.0 | 61.7 ± 6.4* | 65.5 ± 0.3* |
| DROCC* (Goyal et al. 2020) | 68.0 ± 2.0 | 32.3 ± 1.8 | 65.5 ± 0.9 | 58.0 ± 1.4 |
| GOAD (Bergman and Hoshen 2020) | 61.0 ± 2.0 | 52.0 ± 2.3 | 64.5 ± 0.7* | 61.6 ± 0.1* |
| NeuTraL AD* (Qiu et al. 2021) | 62.1 ± 2.8 | 60.3 ± 1.1 | 63.2 ± 8.0 | 69.6 ± 2.8 |
| GOCC* (Shenkar and Wolf 2021) | 66.1 ± 4.3 | 61.8 ± 1.8 | 74.1 ± 2.5 | 66.7 ± 0.4 |
| PCA+kNN* | 56.7 ± 0.0 | 25.0 ± 0.0 | 67.2 ± 0.0 | 72.9 ± 0.0 |
| t-SNE+kNN* | 61.9 ± 0.0 | 13.7 ± 0.0 | 67.4 ± 0.0 | 76.6 ± 0.0 |
| UMAP+kNN* | 61.7 ± 0.0 | 11.4 ± 0.0 | 66.9 ± 0.0 | 74.8 ± 0.0 |
| DPAD | 66.7 ± 1.5 | 66.7 ± 0.0 | 73.4 ± 1.5 | 74.0 ± 0.5 |

Table 5: Average F1-scores(%) with standard deviations on the four tabular datasets. * means we reproduced the results using the officially released code. The best two results are marked in bold.

It can be observed that DPAD significantly outperforms several baseline methods such as OCSVM, DCN, and DAGMM. Note that for Campaign and MAGIC Gamma, we run the officially released code or our own code to obtain the results. When faced with low-dimensional data such as Campaign and MAGIC Gamma, classical methods and DR methods can even obtain better results than some deep learning based methods such as DSVDD and DROCC. Compared with methods designed for tabular data, such as NeuTraL AD and GOCC, our DPAD is more effective. Moreover, Arrhythmia is a more challenging dataset with fewer samples and more attributes; on it, DPAD exhibits a performance improvement of 4% over the second-best method, while the DR methods perform worst, indicating that they fail when faced with complex datasets.
Experiment with Multi-Class Normality

In real anomaly detection scenarios, the normal data may consist of multiple classes with weak associations. To evaluate the performance of our method under such practical conditions, we conduct experiments on the Fashion-MNIST and CIFAR-10 datasets by selecting one class as the anomalous class and the remaining nine classes as the normal classes, yielding 10 experiments for each dataset. In this setup, the normal samples come from different classes and are only weakly associated, making the task more challenging than traditional one-class classification. We compare our method with OCSVM, DSVDD, DROCC, HRN, and the dimensionality reduction methods. Table 6 shows the average performance.

| Datasets | CIFAR-10 | F-MNIST |
| (no DR) kNN | 52.1 | 71.6 |
| (no DR) k-Means | 48.8 | 68.8 |
| (no DR) LOF (Breunig et al. 2000) | 50.0 | 50.0 |
| OCSVM (Schölkopf et al. 2001) | 49.0 | 57.2 |
| DSVDD (Ruff et al. 2018) | 52.3 | 65.9 |
| DROCC (Goyal et al. 2020) | 54.3 | 54.8 |
| HRN (Hu et al. 2020) | 50.3 | 41.1 |
| PCA+kNN | 52.2 | 74.8 |
| t-SNE+kNN | 51.3 | 78.7 |
| UMAP+kNN | 51.2 | 74.4 |
| AE+kNN | 51.4 | 69.0 |
| DPAD | 66.1 | 70.2 |

Table 6: Average AUCs(%) of the 9-1 experiments on CIFAR-10 and Fashion-MNIST. Note that we run the officially released code to obtain the results, and the best result is marked in bold.

We have the following observations:

• Compared to traditional one-class classification tasks, all methods experience a significant decrease in average AUC, which demonstrates that the 9-1 experiments are indeed more challenging than the 1-9 experiments reported in the previous tables.

• Although dimensionality reduction methods perform well on Fashion-MNIST, their average AUCs are around 50 on CIFAR-10, indicating that they fail to handle data with complex structures. DPAD achieves the best performance on CIFAR-10, indicating that it is more effective for anomaly detection in complex real scenarios than other state-of-the-art methods. Its success mainly stems from its ability to learn a decision boundary locally, without any assumption on the shape of the decision boundary.

Ablation Study

We study the contributions of the two components of our method, namely the distance weights $e^{\mathcal{W}}_{ij}$ and the weight-norm regularization term (denoted $C_{\mathcal{W}}$). Table 7 gives the ablation results on Fashion-MNIST and CIFAR-10. We can see that both $e^{\mathcal{W}}_{ij}$ and $C_{\mathcal{W}}$ are necessary.

| Datasets | CIFAR-10 | Fashion-MNIST |
| DPAD without $e^{\mathcal{W}}_{ij}$ and $C_{\mathcal{W}}$ | 57.8 | 87.0 |
| DPAD without $e^{\mathcal{W}}_{ij}$ | 68.5 | 91.3 |
| DPAD without $C_{\mathcal{W}}$ | 68.5 | 89.2 |
| DPAD | 74.5 | 93.7 |

Table 7: Average AUCs(%) of different components of DPAD on the image datasets.

To show that the performance does not depend strongly on the values of $\gamma$ and $\lambda$, we run the one-class classification experiments on Fashion-MNIST with different values of these hyperparameters. Table 8 shows the results: the differences are tiny as long as $\gamma$ and $\lambda$ lie within reasonable ranges. Nevertheless, substantial performance degradation is evident when $\gamma = 100$, corroborating that an excessively large $\gamma$ inhibits the learning of dense representations and thus degrades performance. Besides, the results show that our method is not sensitive to the value of $\lambda$.

| Fixed | Varied | AUC on Fashion-MNIST |
| $\lambda = 1$ | $\gamma = 0.001$ | 92.3 |
| $\lambda = 1$ | $\gamma = 0.01$ | 93.3 |
| $\lambda = 1$ | $\gamma = 0.1$ | 91.3 |
| $\lambda = 1$ | $\gamma = 1$ | 90.5 |
| $\lambda = 1$ | $\gamma = 10$ | 91.4 |
| $\lambda = 1$ | $\gamma = 100$ | 89.2 |
| $\gamma = 0.01$ | $\lambda = 0$ | 89.2 |
| $\gamma = 0.01$ | $\lambda = 0.01$ | 92.5 |
| $\gamma = 0.01$ | $\lambda = 0.1$ | 92.5 |
| $\gamma = 0.01$ | $\lambda = 10$ | 92.6 |
| $\gamma = 0.01$ | $\lambda = 100$ | 91.9 |
| $\gamma = 0.01$ | $\lambda = 1000$ | 92.0 |

Table 8: Average AUCs(%) for different values of the hyperparameters $\gamma$ and $\lambda$ on Fashion-MNIST.
Conclusions

We have presented DPAD, a novel and simple method for unsupervised anomaly detection. The main idea is to learn dense representations of normal data using neural networks and to detect anomalous data based on local density. Compared with other methods, DPAD does not rely on any assumption about the shape of the normal data or of the decision boundary formed by the representations of normal data; it only tries to gather the representations of similar normal data. For this reason, DPAD is not only effective on classical one-class classification tasks but also outperforms other methods when the normal data consist of multiple weakly associated classes. Our experimental results demonstrate that DPAD is as effective as state-of-the-art AD methods on both image and tabular datasets and achieves significant improvements in several cases.

Proof of Theoretical Results

Proof for Lemma 1. Given the architecture of $f_{\mathcal{W}}$, we have

$$\begin{aligned} \|f_{\mathcal{W}}(x_1) - f_{\mathcal{W}}(x_2)\| &= \|W_L h(\cdots h(W_2 h(W_1 x_1))\cdots) - W_L h(\cdots h(W_2 h(W_1 x_2))\cdots)\| \\ &\le \|W_L\|_2 \, \|h(\cdots h(W_2 h(W_1 x_1))\cdots) - h(\cdots h(W_2 h(W_1 x_2))\cdots)\| \\ &\le \rho \|W_L\|_2 \, \|\cdots h(W_2 h(W_1 x_1))\cdots - \cdots h(W_2 h(W_1 x_2))\cdots\| \\ &\;\;\vdots \\ &\le \rho^{L-1} \Big(\prod_{l=1}^{L} \|W_l\|_2\Big) \|x_1 - x_2\| \\ &\le \rho^{L-1} \Big(\prod_{l=1}^{L} \beta_l\Big) \|x_1 - x_2\| = \tau_f \|x_1 - x_2\|. \end{aligned} \tag{11}$$

Proof for Theorem 1. For our $f_{\mathcal{W}}$, the weight matrices $W_l \in \mathbb{R}^{d_l \times d_{l-1}}$ are initialized from $\mathcal{N}(0, \sigma^2)$. According to Lemma 2, we have

$$\|W_l^{(0)}\|_2 \le \sqrt{\bar{d}_l}\,\sigma + \sqrt{\log \bar{d}_l}\,\sigma, \tag{12}$$

where $\bar{d}_l = \max(d_l, d_{l-1})$. This inequality gives an upper bound on the spectral norm of $W_l$ when it is initialized from a Gaussian distribution with variance $\sigma^2$. Now, for Lemma 1, we have

$$\tau_f = \rho^{L-1} \prod_{l=1}^{L} \left(\sqrt{\bar{d}_l}\,\sigma + \sqrt{\log \bar{d}_l}\,\sigma\right). \tag{13}$$

It follows from Lemma 1 that

$$\|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\| \le \rho^{L-1} \sigma^{L} \|x_i - x_j\| \prod_{l=1}^{L} \left(\sqrt{\bar{d}_l} + \sqrt{\log \bar{d}_l}\right). \tag{14}$$

Thus we obtain a lower bound for any $e^{\mathcal{W}^{(0)}}_{ij}$:

$$e^{\mathcal{W}^{(0)}}_{ij} = \exp\!\left(-\gamma \|f_{\mathcal{W}}(x_i) - f_{\mathcal{W}}(x_j)\|^2\right) \ge \exp\!\left(-\gamma \rho^{2L-2} \sigma^{2L} \|x_i - x_j\|^2 \prod_{l=1}^{L} \left(\sqrt{\bar{d}_l} + \sqrt{\log \bar{d}_l}\right)^2\right). \tag{15}$$
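Lemma 1 can be checked numerically; the sketch below (not part of the paper) verifies the bound for a small bias-free ReLU network, for which the Lipschitz constant of the activation is $\rho = 1$.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = [nn.Linear(64, 32, bias=False), nn.ReLU(), nn.Linear(32, 8, bias=False)]
f = nn.Sequential(*layers)

# tau_f = rho^(L-1) * product of spectral norms, with rho = 1 for ReLU.
tau_f = 1.0
for m in layers:
    if isinstance(m, nn.Linear):
        tau_f *= torch.linalg.matrix_norm(m.weight, ord=2).item()

x1, x2 = torch.randn(64), torch.randn(64)
lhs = (f(x1) - f(x2)).norm().item()
rhs = tau_f * (x1 - x2).norm().item()
print(f"||f(x1)-f(x2)|| = {lhs:.4f} <= tau_f * ||x1-x2|| = {rhs:.4f}")
assert lhs <= rhs + 1e-5  # inequality (8)
```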
Acknowledgments

This work was partially supported by the National Natural Science Foundation of China under Grants No. 62106211 and No. 62072151, the General Program JCYJ20210324130208022 of Shenzhen Fundamental Research, the research funding T00120210002 of Shenzhen Research Institute of Big Data, the Guangdong Key Lab of Mathematical Foundations for Artificial Intelligence, the Anhui Provincial Natural Science Fund for Distinguished Young Scholars (2008085J30), the Open Foundation of Yunnan Key Laboratory of Software Engineering (2023SE103), the CCF-Baidu Open Fund and CAAI-Huawei MindSpore Open Fund, and the funding UDF01001770 of The Chinese University of Hong Kong, Shenzhen.

References

Bandeira, A. S.; and Van Handel, R. 2016. Sharp nonasymptotic bounds on the norm of random matrices with independent entries.
Bergman, L.; and Hoshen, Y. 2020. Classification-based anomaly detection for general data. arXiv preprint arXiv:2005.02359.
Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; and Sander, J. 2000. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, 93–104.
Cai, J.; and Fan, J. 2022. Perturbation Learning Based Anomaly Detection. In Advances in Neural Information Processing Systems, volume 35, 14317–14330. Curran Associates, Inc.
Caron, M.; Bojanowski, P.; Joulin, A.; and Douze, M. 2018. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), 132–149.
Chandola, V.; Banerjee, A.; and Kumar, V. 2009. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3): 1–58.
Chen, Y.; Tian, Y.; Pang, G.; and Carneiro, G. 2022. Deep one-class classification via interpolated Gaussian descriptor. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 383–392.
Deecke, L.; Vandermeulen, R.; Ruff, L.; Mandt, S.; and Kloft, M. 2019. Image anomaly detection with generative adversarial networks. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Proceedings, Part I, 3–17. Springer.
Dua, D.; Graff, C.; et al. 2017. UCI machine learning repository.
Fan, J.; Chow, T. W.; Zhao, M.; and Ho, J. K. 2018. Nonlinear dimensionality reduction for data with disconnected neighborhood graph. Neural Processing Letters, 47: 697–716.
Fan, J.; Wang, W.; and Zhang, H. 2017. AutoEncoder based high-dimensional data fault detection system. In 2017 IEEE 15th International Conference on Industrial Informatics (INDIN), 1001–1006.
Fan, J.; and Wang, Y. 2014. Fault detection and diagnosis of non-linear non-Gaussian dynamic processes using kernel dynamic independent component analysis. Information Sciences, 259: 369–379.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
Goyal, S.; Raghunathan, A.; Jain, M.; Simhadri, H. V.; and Jain, P. 2020. DROCC: Deep robust one-class classification. In International Conference on Machine Learning, 3711–3721. PMLR.
Han, S.; Hu, X.; Huang, H.; Jiang, M.; and Zhao, Y. 2022. ADBench: Anomaly detection benchmark. Advances in Neural Information Processing Systems, 35: 32142–32159.
Hinton, G. E.; and Salakhutdinov, R. R. 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786): 504–507.
Hu, W.; Wang, M.; Qin, Q.; Ma, J.; and Liu, B. 2020. HRN: A holistic approach to one class learning. Advances in Neural Information Processing Systems, 33: 19111–19124.
Jolliffe, I. T.; and Cadima, J. 2016. Principal component analysis: a review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065): 20150202.
Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature, 521(7553): 436–444.
Liu, F. T.; Ting, K. M.; and Zhou, Z.-H. 2008. Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining, 413–422. IEEE.
MacQueen, J.; et al. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, 281–297. Oakland, CA, USA.
McInnes, L.; Healy, J.; and Melville, J. 2018. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.
Pang, G.; Shen, C.; Cao, L.; and Hengel, A. V. D. 2021. Deep learning for anomaly detection: A review. ACM Computing Surveys (CSUR), 54(2): 1–38.
Parzen, E. 1962. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3): 1065–1076.
Perera, P.; Nallapati, R.; and Xiang, B. 2019. OCGAN: One-class novelty detection using GANs with constrained latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2898–2906.
Pidhorskyi, S.; Almohsen, R.; and Doretto, G. 2018. Generative probabilistic novelty detection with adversarial autoencoders. Advances in Neural Information Processing Systems, 31.
Qiu, C.; Pfrommer, T.; Kloft, M.; Mandt, S.; and Rudolph, M. 2021. Neural transformation learning for deep anomaly detection beyond images. In International Conference on Machine Learning, 8703–8714. PMLR.
Rayana, S. 2016. ODDS Library [http://odds.cs.stonybrook.edu]. Stony Brook University, Department of Computer Science, Stony Brook, NY.
Roweis, S. T.; and Saul, L. K. 2000. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500): 2323–2326.
Ruff, L.; Kauffmann, J. R.; Vandermeulen, R. A.; Montavon, G.; Samek, W.; Kloft, M.; Dietterich, T. G.; and Müller, K.-R. 2021. A unifying review of deep and shallow anomaly detection. Proceedings of the IEEE, 109(5): 756–795.
Ruff, L.; Vandermeulen, R.; Goernitz, N.; Deecke, L.; Siddiqui, S. A.; Binder, A.; Müller, E.; and Kloft, M. 2018. Deep one-class classification. In International Conference on Machine Learning, 4393–4402. PMLR.
Schölkopf, B.; Platt, J. C.; Shawe-Taylor, J.; Smola, A. J.; and Williamson, R. C. 2001. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7): 1443–1471.
Shenkar, T.; and Wolf, L. 2021. Anomaly detection for tabular data with internal contrastive learning. In International Conference on Learning Representations.
Sun, Y.; Han, Y.; and Fan, J. 2023. Laplacian-Based Cluster-Contractive t-SNE for High-Dimensional Data Visualization. ACM Transactions on Knowledge Discovery from Data, 18(1).
Sun, Y.; Ming, Y.; Zhu, X.; and Li, Y. 2022. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, 20827–20840. PMLR.
Tax, D. M.; and Duin, R. P. 2004. Support vector data description. Machine Learning, 54: 45–66.
Tenenbaum, J. B.; Silva, V. d.; and Langford, J. C. 2000. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500): 2319–2323.
Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
Vincent, P.; Larochelle, H.; Bengio, Y.; and Manzagol, P.-A. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, 1096–1103.
Wang, J.; Sun, S.; and Yu, Y. 2019. Multivariate triangular quantile maps for novelty detection. Advances in Neural Information Processing Systems, 32.
Wang, S.; Wang, X.; Zhang, L.; and Zhong, Y. 2021. AutoAD: Autonomous hyperspectral anomaly detection network based on fully convolutional autoencoder. IEEE Transactions on Geoscience and Remote Sensing, 60: 1–14.
Xiao, F.; Sun, R.; and Fan, J. 2023. Restricted Generative Projection for One-Class Classification and Anomaly Detection. arXiv preprint arXiv:2307.04097.
Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
Yan, X.; Zhang, H.; Xu, X.; Hu, X.; and Heng, P.-A. 2021. Learning semantic context from normal samples for unsupervised anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3110–3118.
Yoshida, Y.; and Miyato, T. 2017. Spectral norm regularization for improving the generalizability of deep learning. arXiv preprint arXiv:1705.10941.
Zong, B.; Song, Q.; Min, M. R.; Cheng, W.; Lumezanu, C.; Cho, D.; and Chen, H. 2018. Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In International Conference on Learning Representations.
Knowledge-Enhanced Historical Document Segmentation and Recognition

En-Hao Gao1,2, Yu-Xuan Huang1,2, Wen-Chao Hu1,2, Xin-Hao Zhu1,2, Wang-Zhou Dai1,3
1National Key Laboratory for Novel Software Technology, Nanjing University, China
2School of Artificial Intelligence, Nanjing University, China
3School of Intelligence Science and Technology, Nanjing University, China
{gaoeh, huangyx, huwc, zhuxh, daiwz}@lamda.nju.edu.cn

Abstract

Optical Character Recognition (OCR) of historical document images remains a challenging task because of distorted input images, an extensive number of uncommon characters, and the scarcity of labeled data, which impedes modern deep learning-based OCR techniques from achieving good recognition accuracy. Meanwhile, there exists a substantial amount of expert knowledge that can be utilized in this task. However, such knowledge is usually complicated and can only be accurately expressed with formal languages such as first-order logic (FOL), which is difficult to integrate directly into deep learning models. This paper proposes KESAR, a novel Knowledge-Enhanced Document Segmentation And Recognition method for historical document images based on the Abductive Learning (ABL) framework. The segmentation and recognition models are enhanced by incorporating background knowledge for character extraction and prediction, followed by an efficient joint optimization of both models. We validate the effectiveness of KESAR on historical document datasets. The experimental results demonstrate that our method can simultaneously utilize knowledge-driven reasoning and data-driven learning, and that it outperforms the current state-of-the-art methods.

1 Introduction

Document image analysis is a common task that aims to extract text from document images. Typically, it involves two steps: image segmentation and recognition. Segmentation identifies and isolates the regions containing the desired text. After segmentation, recognition transforms the segmented images into textual form. With the rapid development of OCR technologies, the analysis of modern document images can now be well addressed (Long, He, and Yao 2021), as such images often feature neat arrangement, clear handwriting, and abundant labeled data. However, unlike its modern counterpart, the analysis of historical documents, including handwritten manuscripts and early prints, remains a challenging, unresolved issue.

Three main challenges hinder the segmentation and recognition of historical document images. Firstly, text lines are often distorted and densely packed, which poses substantial challenges for image segmentation. Secondly, historical documents often include a wide range of character categories. For instance, while modern Chinese documents typically use around 3,500 characters, historical counterparts may contain over 10,000 characters. This extensive character dictionary requires a large amount of labeled data for training the recognition model. Thirdly, annotating historical documents is time-consuming and requires a high level of expert knowledge. This results in a scarcity of labeled images, consequently leading to inferior performance of modern data-driven segmentation and recognition models.
However, humans are able to successfully segment and recognize historical manuscripts by utilizing background knowledge, which is also a promising way to enhance machine learning performance (Raedt et al. 2020). For instance, during the segmentation of Chinese document images, characters typically have a square-like shape. Furthermore, characters within the same text line are expected to exhibit similar aspect ratios and sizes. First-order logic (FOL) rules provide a precise way to express such knowledge. However, it is non-trivial to inject these rules into the learning process of common deep learning models, since the application of FOL typically relies on logical reasoning, a discrete process that is difficult to integrate with gradient-based numerical optimization methods.

In order to leverage human knowledge to empower document image analysis, we adopt the ABductive Learning (ABL) framework (Zhou 2019; Zhou and Huang 2022). This novel paradigm bridges data-driven machine learning and knowledge-driven logical reasoning while preserving the expressive power of both. In ABL, the machine learning model initially converts raw data into primitive logic facts, named pseudo-labels. The reasoning component then revises pseudo-labels that are inconsistent with the FOL knowledge base by abductive reasoning (Magnani 2009), a.k.a. abduction. Subsequently, these knowledge-refined pseudo-labels are utilized to update the machine learning model, and the above routine repeats iteratively.

In this paper, we propose KESAR (Knowledge-Enhanced document Segmentation And Recognition) to tackle the above challenges based on the ABL framework. It first trains the segmentation model with structural knowledge, where the predicted character regions and affinities (areas between adjacent characters) are refined by the knowledge base via abduction. Then, to address the issue of label scarcity, it leverages the proposed abductive matching mechanism to train the recognition model, which predicts text for single-character images. In this process, a dynamic programming algorithm is utilized to conduct abductive matching efficiently. Finally, it employs joint optimization, allowing the segmentation and recognition models to mutually enhance each other's performance instead of being trained separately. In this process, we propose the Over-Segmentation and Recombination (OSR) algorithm, which enables the segmentation model to improve its performance by leveraging the recognition model's ability to differentiate characters.

To show the effectiveness of KESAR, we conduct extensive experiments on three datasets. These datasets include a substantial number of challenging images featuring severe distortions, varying scales, and multiple sources of noise. Ablation studies demonstrate the importance of each component of our method, and empirical evaluations show that KESAR outperforms state-of-the-art OCR methods on both segmentation and recognition tasks.

2 Related Work

Recently, deep learning-based scene text detection methods have achieved remarkable results. They can be broadly classified into two categories: segmentation-based and regression-based methods. Typically, segmentation-based methods integrate pixel-level predictions and then apply post-processing algorithms to derive the bounding boxes. CRAFT (Baek et al. 2019) predicts the probabilities of character regions and affinities for each pixel.
PSENet (Wang et al. 2019b) proposes a progressive scale expansion mechanism, learning and enlarging text kernels to cover all text instances. Based on PSENet, PAN (Wang et al. 2019c) implements a pixel aggregation process by predicting pixel similarities. Regression-based methods, in contrast, try to predict the contours of text lines directly. FCENet (Zhu et al. 2021) regresses text lines in the Fourier domain and reconstructs contours during the inference stage. ABCNet (Liu et al. 2020) utilizes Bezier curves to parameterize polygon annotations, equipping the model with the ability to detect text lines of arbitrary shapes. However, the former type of model typically resorts to weakly supervised training, potentially leading to inaccurate results given limited data, while the new contour representations heavily depend on highly specialized network architectures. In contrast, by introducing human knowledge, our method can sufficiently exploit the supervised information in limited labeled data, thus improving data efficiency. Besides, it imposes little limitation on the model's specific form.

Text recognition is another important component of document image analysis, which aims at recognizing text from a cropped text image. Some recognition approaches attempt to rectify irregular images into regular ones before recognition, with STN (Jaderberg et al. 2015) as an exemplary work. In contrast, DAN (Wang et al. 2020) and RobustScanner (Yue et al. 2020) represent encoder-decoder-based methods, using the attention mechanism to capture neighborhood information and yielding promising results in irregular text recognition. Other approaches, such as CA-FCN (Liao et al. 2019) and CCN (Xing et al. 2019), address recognition by segmenting each character to circumvent issues with irregular layouts. However, successful text recognition in these previous works usually requires substantial labeled data. This might be feasible for modern documents but presents significant challenges for historical documents.

The incorporation of human knowledge has long been considered an effective approach to addressing data scarcity. In recent years, advancements have been made in leveraging symbolic reasoning to enhance the performance of machine learning models such as neural networks, especially when certain domain knowledge is available. For instance, some approaches express logical domain knowledge as constraints within the neural network's loss function to guide the training process (Xu et al. 2018; Yang, Lee, and Park 2022). Other approaches endeavor to learn domain knowledge within neural networks using specialized layers (Wang et al. 2019a). Additionally, some methods interpret neural network outputs as probability distributions over symbols and subsequently invoke a symbolic system to derive solutions (Manhaeve et al. 2018; Tsamoura, Hospedales, and Michael 2021). Many of these methods use continuous functions to approximate logical constraints and discrete operators, which introduces bias into the approximated inference and requires large amounts of training data.

Our method is based on Abductive Learning (ABL) (Zhou 2019; Dai et al. 2019; Zhou and Huang 2022), a framework that bridges machine learning and symbolic reasoning via logical abduction. ABL has also demonstrated the capability to build a knowledge base from data (Huang et al. 2023a) or from a knowledge graph (Huang et al. 2023b).
Following ABL, our method is capable of fully leveraging the deep learning capability for feature extraction from raw images, while also preserving the complete expressive power of logical reasoning for knowledge processing in symbolic space, which significantly improves model performance.

3 Preliminaries

Abductive Reasoning. Abductive reasoning, a.k.a. abduction, is a basic form of logical inference that seeks an explanation for an observation. Formally, given observations $O$ and a background knowledge base $KB$, it generates a set of abducibles $\Delta$ consistent with $KB$ and satisfying $KB \cup \Delta \models O$, where $\models$ stands for logical entailment. For example, when observing a text line, based on knowledge of text structure, we could explain that there are several characters with similar shapes and sizes in this text line.

Abductive Learning. The target of ABductive Learning (ABL) (Zhou 2019; Zhou and Huang 2022) is to train a machine learning model given unlabeled data and a knowledge base. In ABL, the machine learning model perceives primitive logic facts from raw data, while logical abduction exploits the knowledge base to revise wrongly perceived facts to improve the machine learning model. For example, if a model predicts, within one text line, several character bounding boxes with dissimilar shapes that are inconsistent with the knowledge base, ABL utilizes abduction to revise the wrong bounding boxes and treats the revised ones as ground-truth labels to update the model.

4 The KESAR Approach

In this section, we first introduce an overview of the proposed model training method, KESAR (Knowledge-Enhanced historical document Segmentation And Recognition), and then present the details of its three learning stages.

4.1 Overview

KESAR consists of two machine learning models, for image segmentation and recognition respectively:

• Segmentation Model. The segmentation model takes the raw image as input, predicting for each pixel a probability distribution over three distinct categories: character region (an area containing a character), character affinity (an area between two adjacent characters), or background. The watershed algorithm is then employed to aggregate pixels that are likely to be situated within character regions, thereby isolating each single character. The images of the segmented characters then serve as inputs for the recognition model. Character affinity, in turn, is used to generate text lines by connecting dispersed character regions during the inference stage.

• Recognition Model. After the segmentation model identifies individual characters, the recognition model merely needs to predict the text of single-character images. This task is relatively straightforward, and a small-scale ResNet network (He et al. 2016) can accomplish it effectively.

To integrate knowledge into document image segmentation and recognition, KESAR employs a three-stage learning methodology, each stage instantiating the generic ABL loop sketched after this list:

1. Segmentation with Structural Knowledge. It incorporates text structure knowledge to augment the weakly-supervised learning process of the segmentation model.

2. Recognition with Abductive Matching. It uses both the knowledge of abductive matching and unlabeled cropped character images to train the recognition model.

3. Joint Optimization. It uses glyph knowledge learned by the recognition model to further improve the segmentation model's performance. Meanwhile, the refined character segmentation results also facilitate the learning of the recognition model in turn.
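In code, the closed loop shared by all three stages can be sketched as follows; the interface is our own abstraction, not that of the released implementation.

```python
# A conceptual sketch of one ABL training loop: perceive pseudo-labels,
# revise them against the knowledge base by abduction, then re-train.
def abl_loop(model, raw_inputs, knowledge_base, n_iterations):
    for _ in range(n_iterations):
        pseudo_labels = [model.predict(x) for x in raw_inputs]       # perception
        revised = [knowledge_base.abduce(y) for y in pseudo_labels]  # abduction
        model.fit(raw_inputs, revised)     # update on knowledge-refined labels
    return model
```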
4.2 Segmentation with Structural Knowledge

Due to the difficulty of labeling, most historical document images have only line-level labels (the bounding box and text of each line), whereas learning the segmentation model requires character-level supervision. To bridge this discrepancy, we adopt the abductive learning framework, which incorporates structural knowledge of characters to deal with the limited supervision. Fig. 1 illustrates the learning pipeline of the segmentation model for the task of Chinese historical document segmentation. Firstly, the segmentation model takes the document image as input and predicts the character regions to generate pseudo-character bounding boxes. Then, the reasoning component rectifies bounding boxes inconsistent with the knowledge base through abductive reasoning, given the text-line annotations. These revised character bounding boxes are then used to update the segmentation model.

[Figure 1: Illustration of Segmentation with Structural Knowledge. Each iteration begins with the character region prediction to extract pseudo-bounding boxes. Then, it employs abduction to revise inconsistent bounding boxes based on the knowledge base. Finally, it generates revised character regions to update the segmentation model, which replaces the original one after each iteration.]

In this example, we can precisely formalize human knowledge of text structure as the following FOL rules:

reg_bbox(B) ← close(asp_rat(B), 1).   (1)

reg_textline(TL) ← bbox_seq(TL) = {B1, B2, . . .} ∧ reg_bbox(B1) ∧ reg_bbox(B2) ∧ . . . ∧ close(asp_rat(B1), asp_rat(B2), . . .) ∧ close(size(B1), size(B2), . . .).   (2)

false ← bbox_seq(TL) = {B1, B2, . . .} ∧ horizontal_adjacent(B1, B2, . . .).   (3)

Here "←" is implication, meaning that if the premises on the right hold, then the conclusion on the left holds; reg_bbox(B) is the regular-shape constraint on bounding box B; reg_textline(TL) determines whether the sequence of character bounding boxes bbox_seq(TL) = {B1, B2, . . .} contained in TL composes a regular text line; close(V1, V2, . . .) calculates the variance of its arguments to assess whether they are adequately proximate; and asp_rat(B) and size(B) calculate the aspect ratio and size of a bounding box B, respectively. These FOL rules essentially convey three fundamental aspects of background knowledge for Chinese historical document segmentation: (1) Chinese characters are square-shaped; (2) characters within the same text line possess similar aspect ratios and sizes; (3) vertically-aligned text does not contain horizontally adjacent characters.

In this task, abductive reasoning aims to revise character bounding boxes by maximizing a consistency measure that quantifies the degree to which these boxes align with the semantics of the predicates (e.g., close, reg_bbox) in rules (1)-(3). The consistency measure can be calculated in various ways (Huang et al. 2020, 2021); in KESAR it is the negative weighted sum of the predicates' values and the distance between the abduced and predicted boxes. Among all consistent bounding box sets, the one with the closest aspect ratios and sizes has the maximal consistency. Considering the measure's non-convexity, revising character bounding boxes by directly maximizing the consistency is time-consuming. In practice, the process begins with merging horizontally adjacent boxes. Then, irregular boxes are identified by examining the width and height of the text line. Finally, these irregular boxes are revised according to the average size of the regular ones.
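The practical revision heuristic just described can be sketched as follows; this is our own simplification (the merging step is omitted, and the tolerance `tol` is a hypothetical parameter), not the paper's exact procedure.

```python
import numpy as np

def revise_boxes(boxes, tol=0.3):
    """boxes: list of (x, y, w, h) character boxes in one vertical text line."""
    ratios = np.array([w / h for (_, _, w, h) in boxes])
    sizes = np.array([w * h for (_, _, w, h) in boxes])
    # Rules (1)-(2): characters are roughly square and similar in size.
    regular = (np.abs(ratios - 1.0) < tol) & \
              (np.abs(sizes - np.median(sizes)) < tol * np.median(sizes))
    if not regular.any():
        return boxes
    avg_w = np.mean([b[2] for b, r in zip(boxes, regular) if r])
    avg_h = np.mean([b[3] for b, r in zip(boxes, regular) if r])
    # Resize irregular boxes around their centers to the average regular size.
    out = []
    for (x, y, w, h), r in zip(boxes, regular):
        if r:
            out.append((x, y, w, h))
        else:
            cx, cy = x + w / 2, y + h / 2
            out.append((cx - avg_w / 2, cy - avg_h / 2, avg_w, avg_h))
    return out
```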
Update using Matched Pairs Recognition Model Knowledge Base Learning Reasoning 立 一 一 年 願 ( , 立) ( , 一) ( , 一) ( , 年) ( , 崩) Case 1: Equal 也 又 稚 池 又 本 也 又 稚 也 又 本 Case 3: Longer 若 安 住 淨 戒 律 石 安 主 淨 或 律 廿 Case 2: Shorter 立 一 十 一 年 崩 立 一 一 年 願 立 一 十 一 年 崩 Figure 2: The upper half illustrates the learning pipeline of the recognition model. In each iteration, the recognition model first predicts pseudo-character labels from images. Potential inaccuracies are then rectified by the knowledge base via abductive matching. The model is then updated using these matched character images and labels. The lower half demonstrates three typical types of matching relationships between prediction and target in abductive matching. Yellow dotted lines represent the results of abduction, which are propagated from the blue ones. (1)-(3). The consistency measure can be calculated in various way (Huang et al. 2020, 2021), and in KESAR it is the negative weighted sum of the predicates’ value and the distance between the abduced and predicted boxes. Among all consistent bounding box sets, the one with the closest aspect ratio and size would have the maximal consistency. Considering the measure’s non-convexity, revising character bounding boxes by directly maximizing the consistency is time-consuming. In practice, the process begins with merging horizontally adjacent boxes. Then, irregular boxes are identified by examining the width and height of the text line. Finally, these irregular boxes are revised according to the average size of regular ones. 4.3 Recognition with Abductive Matching The recognition model takes character images as input and predicts the text. Although most historical document images only have text-line labels, the predicted character bounding boxes of the segmentation model can be used to generate training images. Nonetheless, a significant discrepancy remains when utilizing these images to train the recognition model, primarily due to the potential omission or misidentification of character bounding boxes. As shown in the upper half of Fig. 2, the input text-line image contains 6 characters, while the segmentation model only predicts 5 bounding boxes. Since we do not know the correspondence between the text-line label and these bounding boxes, annotating such 5 boxes with the 6 ground-truth characters becomes puzzling, especially when the recognition result is incorrect. Algorithm 1: Abductive Matching Input: Predicted string P = (p1, p2, . . . , pn); Groundtruth string G = (g1, g2, . . . , gm); Maximum length of matched substrings max len Output: Maximum set of groundings ∆ 1: Initialization: p0 ←‘[START]’; g0 ←‘[START]’; res ←[[0], . . . , [0]]; trace ←[[0], . . . , [0]] 2: for i = 1 to n do 3: for j = 1 to m do 4: if pi = gj then 5: res[i, j] ←res[i −1, j −1] + 1 6: trace[i, j] ←1 7: for k = 1 to min(i, j, max len) do 8: if pi−k = gj−k then 9: res[i, j] ←res[i −k, j −k] + k 10: trace[i, j] ←k 11: break 12: else 13: res[i, j] ←max(res[i −1, j], res[i, j −1]) 14: ∆←postprocess(res, trace) To address this challenge, we introduce the second stage learning of KESAR, namely, Recognition with Abductive Matching. By incorporating general human knowledge for matching relationship construction, this approach can generate pairs of matched character images and labels, thereby promoting the learning of the recognition model. The learning process also follows the framework of ABL. 
This time, the medium facilitating the interaction between learning and reasoning changes from character bounding boxes to character labels. The upper half of Fig. 2 shows the closed-loop learning process of the recognition model. During each cycle, the predicted characters are revised by the knowledge base via abductive matching, and these refined characters, paired with the input images, are then used to train the recognition model. The core strategy of abductive matching is to first align identical segments between the prediction and the text-line label and then propagate the matching relationships to equal-length intervals. The lower half of Fig. 2 illustrates three cases of abductive matching, each representing a length relationship between the prediction and the text-line label. The first case often occurs when segmentation is accurate but recognition may be erroneous. The latter two cases commonly result from the omission and incorrect identification of character bounding boxes, respectively.

Efficient Optimization. In abductive matching, different alignments lead to different propagation results and hence different numbers of matched character images and labels. To generate more training data for the recognition model, we construct the following optimization problem:

$$\max \; \left|\{(p^i_j, g^i_j) \mid \text{match\_char}(p^i_j, g^i_j)\}\right| \quad \text{s.t. } \text{match\_str}(P^i, G^i),$$

where $P^i = \{p^i_1, p^i_2, \ldots, p^i_l\}$, $G^i = \{g^i_1, g^i_2, \ldots, g^i_l\}$, $\text{substr}(P^i, P)$, $\text{substr}(G^i, G)$, and $l < \text{max\_len}$. Here $|\cdot|$ calculates the number of elements in a set; match_str(P^i, G^i) holds true when P^i and G^i have equal length and their respective first and last characters are also identical; match_char(p^i_j, g^i_j) holds true when p^i_j and g^i_j are in the same position of a pair of matched strings; P and G are the predicted string and the target string, respectively; substr(P^i, P) holds true when P^i is a substring of P; and max_len restricts the length of matched substrings, since the credibility of a matching established by propagation gradually decreases as the length of the substring increases.

Although this is a combinatorial optimization problem, it can be solved by a dynamic programming algorithm with polynomial time complexity $O(nm \cdot \text{max\_len})$, where $n$ and $m$ are the lengths of $P$ and $G$, respectively. Algorithm 1 presents the details of the proposed ABductive Matching (ABM) method.

Algorithm 1: Abductive Matching
Input: Predicted string P = (p1, p2, . . . , pn); ground-truth string G = (g1, g2, . . . , gm); maximum length of matched substrings max_len
Output: Maximum set of groundings ∆
1: Initialization: p0 ← '[START]'; g0 ← '[START]'; res ← [[0], . . . , [0]]; trace ← [[0], . . . , [0]]
2: for i = 1 to n do
3:   for j = 1 to m do
4:     if pi = gj then
5:       res[i, j] ← res[i−1, j−1] + 1
6:       trace[i, j] ← 1
7:       for k = 1 to min(i, j, max_len) do
8:         if p(i−k) = g(j−k) then
9:           res[i, j] ← res[i−k, j−k] + k
10:          trace[i, j] ← k
11:          break
12:    else
13:      res[i, j] ← max(res[i−1, j], res[i, j−1])
14: ∆ ← postprocess(res, trace)

The algorithm calculates the elements of the res array in ascending order. For res[i, j], if pi and gj are the same, they can be matched together and res[i, j] is initialized as res[i−1, j−1] + 1. The algorithm then looks back up to max_len steps to find another pair of matching characters, so that the characters in between can be matched through abduction and res[i, j] can be updated correspondingly (cf. Lines 7-11 in Algorithm 1). If pi and gj are different, then res[i, j] is set to the maximum of res[i−1, j] and res[i, j−1] (cf. Line 13 in Algorithm 1).
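The dynamic program translates directly into Python; the rendering below follows the printed pseudocode, with 1-based indexing emulated via helper accessors for the '[START]' sentinels. The postprocess step that extracts the matched pairs from trace is left out, as it is not fully specified here.

```python
def abductive_matching(P, G, max_len):
    """DP of Algorithm 1; returns the res and trace tables."""
    n, m = len(P), len(G)
    p = lambda i: P[i - 1] if i > 0 else "[START]"   # 1-based, p_0 = sentinel
    g = lambda j: G[j - 1] if j > 0 else "[START]"
    res = [[0] * (m + 1) for _ in range(n + 1)]
    trace = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if p(i) == g(j):
                res[i][j] = res[i - 1][j - 1] + 1    # direct match of p_i, g_j
                trace[i][j] = 1
                # Look back up to max_len steps for an earlier matching pair;
                # the characters in between are then matched by abduction.
                for k in range(1, min(i, j, max_len) + 1):
                    if p(i - k) == g(j - k):
                        res[i][j] = res[i - k][j - k] + k
                        trace[i][j] = k
                        break
            else:
                res[i][j] = max(res[i - 1][j], res[i][j - 1])
    return res, trace
```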
4.4 Joint Optimization

The joint optimization focuses on using glyph knowledge learned by the recognition model to further improve the predictive accuracy of the segmentation model, especially in complex scenarios where characters are closely packed or portions of a single character are distinctly separated. Besides improving the segmentation model, this augmented character segmentation capability also boosts the learning of the recognition model.

[Figure 3: Illustration of Joint Optimization of the segmentation and recognition models. Images segmented by the segmentation model are used in the recognition model learning, as described in Section 4.3. Furthermore, these images are used to train the segmentation model after refinement by the Over-Segmentation and Recombination algorithm. Over-segmentation divides the segmented images into finer segments, and recombination fuses these split images in various ways. After assessment by the recognition model, the recombination with the highest score is subsequently used to update the segmentation model.]

Figure 3 presents the overall pipeline of joint optimization. The red dotted arrow represents the recognition model learning process established in Section 4.3. To make the segmentation model learning benefit from the recognition model, we need to close the loop. Considering that the primary issue with the segmentation model is the incorrect merging or splitting of characters, we propose the Over-Segmentation and Recombination (OSR) algorithm.

Algorithm 2: Over-Segmentation and Recombination (OSR)
Input: Recognition model f; sequence of bounding boxes B = (B1, B2, . . . , Bn); sequence of character labels C = (c1, c2, . . . , cm); maximum recombination number r
Output: Sequence of recombined bounding boxes D
1: Initialization: D ← ∅; score ← [0]; match_len ← [0]; trace ← [0]
2: OB ← OverSegment(B)
3: for i = 1 to 2n do
4:   score.append(0)
5:   for j = max(1, i − r + 1) to i do
6:     tar_char ← C[match_len[j−1] + 1]
7:     comb_score ← f(Comb(OB[j : i]), tar_char)
8:     new_score ← score[j−1] + comb_score
9:     if new_score > score[i] then
10:      score[i] ← new_score
11:      match_len[i] ← match_len[j−1] + 1
12:      trace[i] ← j − 1
13: end_index ← argmax_{i, match_len[i]=m} score[i]
14: D ← postprocess(trace, end_index)

Algorithm 2 presents the details of the OSR approach. Initially, the algorithm segments each bounding box in B into two new vertically stacked bounding boxes, with the segmentation point approximately halfway up the height of the original character bounding box (cf. Line 2 in Algorithm 2). Following this, OSR iteratively processes the over-segmented boxes, merging those at the tail of the sequence and employing the recognition model to assess the effect of each combination (cf. Lines 3-12 in Algorithm 2). The objective is to find the recombined bounding box sequence with the highest score, which can be computed recursively from trace and end_index. The recombination process is inspired by the idea of Rg-ABBS (Xie et al. 2019). However, Rg-ABBS uses beam search to determine the combination path, and any rejection of the (partially) optimal solution removes the global optimum from subsequent searches. In contrast, our method restricts the number of combined boxes, as a single character is rarely excessively long, thereby ensuring both efficiency and accuracy.
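A Python sketch of the OSR dynamic program is given below; `recog_score(boxes, ch)` is an assumed callable returning the recognition model's confidence that the merged region of `boxes` depicts character `ch`, and the backtracking is our own completion of the unspecified postprocess step.

```python
def osr(recog_score, over_boxes, chars, r):
    """over_boxes: the 2n over-segmented boxes; chars: the m line characters."""
    N, m = len(over_boxes), len(chars)
    NEG = float("-inf")
    score = [NEG] * (N + 1); score[0] = 0.0
    match_len = [0] * (N + 1)
    trace = [0] * (N + 1)
    for i in range(1, N + 1):
        for j in range(max(1, i - r + 1), i + 1):  # combine over_boxes[j-1:i]
            if score[j - 1] == NEG or match_len[j - 1] >= m:
                continue
            tar = chars[match_len[j - 1]]          # next unmatched character
            s = score[j - 1] + recog_score(over_boxes[j - 1:i], tar)
            if s > score[i]:
                score[i], match_len[i], trace[i] = s, match_len[j - 1] + 1, j - 1
    ends = [i for i in range(1, N + 1) if match_len[i] == m and score[i] > NEG]
    if not ends:
        return None
    end = max(ends, key=lambda i: score[i])
    segs = []                                      # backtrack the best path
    while end > 0:
        segs.append(over_boxes[trace[end]:end])
        end = trace[end]
    return list(reversed(segs))
```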
5 Experiments

This section presents experimental results on three historical document datasets to demonstrate the effectiveness of each stage of KESAR and to compare it with state-of-the-art methods. All experiments are conducted on a server with 8 Nvidia V100 GPUs. The code is available at https://github.com/AbductiveLearning/ABL-HD.

| Method | MTH R(%) | MTH P(%) | MTH F(%) | GBACHD R(%) | GBACHD P(%) | GBACHD F(%) | TKH R(%) | TKH P(%) | TKH F(%) |
| PSENet | 88.4 | 87.3 | 87.8 | 73.2 | 82.2 | 77.5 | 97.5 | 90.7 | 94.0 |
| FCENet | 78.8 | 81.6 | 80.2 | 71.1 | 69.0 | 70.0 | 84.1 | 84.8 | 84.4 |
| CRAFT | 85.9 | 93.4 | 89.5 | 82.5 | 93.2 | 87.5 | 97.3 | 98.3 | 97.8 |
| KESAR (w/o JOPT) | 92.1 | 94.0 | 93.1 | 93.5 | 93.4 | 93.5 | - | - | - |
| KESAR | 93.1 | 93.4 | 93.2 | 94.4 | 94.0 | 94.2 | - | - | - |

Table 1: Segmentation model comparison results on MTH, GBACHD, and TKH. R, P, and F represent recall, precision, and F-measure, respectively. The performances of KESAR (w/o JOPT) and KESAR on TKH are omitted since the character-level annotations of TKH are used in pre-training, so there is no need to revise them by structural knowledge or joint optimization. The comparison on TKH is for reference only, since the training labels used by CRAFT differ from those of the others.

5.1 Datasets

TKH (Yang et al. 2018) is a collection of historical documents released by HCIILAB, containing 1,000 images sourced from the Tripitaka Koreana. The dataset provides both character and text-line annotations. Text lines are neatly arranged, characters are relatively uniform in size, and the variety of character types is somewhat limited. Because character annotations are included, we randomly select 600 images to serve as the pre-training data for KESAR.

MTH (Ma et al. 2020) is a more challenging historical document dataset, characterized by prevalent text-line distortions, intricate page layouts, and the occasional inclusion of drawings. Comprising 2,200 images, the dataset is randomly partitioned into training and testing subsets at a 7:3 ratio. Although MTH provides both character and text-line annotations, only the latter are employed, in order to validate the efficiency of our method.

GBACHD is the most challenging dataset in our experiments, released in the 2022 Greater Bay Area (Huangpu) International Algorithm Case Competition. Encompassing 2,000 images, the dataset features over 15,000 character categories, including numerous rare characters and variant forms. GBACHD provides only text-line annotations. The difficulty of the segmentation and recognition tasks derives not only from severe distortions and varying scales but also from multiple sources of noise, such as stains, blurred notes, and seals. The dataset is randomly partitioned into 1,400 images for training, with the remaining images set aside for testing.

5.2 Implementation Details

Our baseline segmentation model is CRAFT (Baek et al. 2019) with ResNet50 as its backbone. We first pre-train the segmentation model on the training data of TKH for 320 epochs and then fine-tune it on MTH and GBACHD for 180 and 80 epochs, respectively. Our recognition model is ResNet34. We first pre-train this network on the training data of TKH for 25 epochs and then fine-tune it for another 25 epochs on the character images generated by the segmentation model. The Joint Optimization stage requires only 10 epochs. The whole training process finishes within 15 hours. More implementation details are listed in the appendix.

5.3 Ablation Study

Influence of Structural Knowledge. We investigate the effect of structural knowledge on the segmentation model by comparing its performance with and without the knowledge base for rectifying pseudo-character bounding boxes.
We denote the structural knowledge-enhanced CRAFT model as KESAR (w/o JOPT), since it has not been fine-tuned by the joint optimization process. As shown in Table 1, KESAR (w/o JOPT) surpasses the vanilla CRAFT in terms of text-line segmentation recall, precision, and F-measure. Notably, the F-measure (93.1%) achieved by KESAR (w/o JOPT) surpasses that of CRAFT by an absolute 3.6% on the MTH dataset and by an absolute 6.0% on the GBACHD dataset. Furthermore, since KESAR primarily serves as a model training method, it maintains the same inference speed as its baseline model, CRAFT. The performance of KESAR (w/o JOPT) on the TKH dataset is not included, since the character-level annotations used in training obviate the need for revision through knowledge and joint optimization.

Influence of Abductive Matching. We study the effect of abductive matching on the recognition model's performance. Our evaluation metrics include 1-N.E.D. (normalized edit distance) (Zhang et al. 2019) and the successful abduction rate, defined as the proportion of correctly abduced labels. A higher value of 1-N.E.D. indicates better recognition performance, and a higher successful abduction rate means a larger portion of character images are matched with a character label. Figure 4 illustrates the performance trajectory throughout the training process, demonstrating rapid improvement in recognition accuracy that eventually reaches a relatively high level of performance. As shown in Table 2, KESAR achieves 0.924 1-N.E.D. on the MTH dataset, an absolute 0.093 higher than the initial performance. The improvement is even more significant on the GBACHD dataset, where KESAR ultimately achieves 0.875 1-N.E.D., compared to the initial performance of 0.737. As also shown in Table 3, the trend of the successful abduction rate mirrors that of 1-N.E.D. and converges even more rapidly. It finally achieves a near-optimal result, where almost all character labels are matched with a character image. Since the recognition model of KESAR is a small-scale ResNet, its inference is highly efficient.

[Figure 4 (a) MTH, (b) GBACHD: Learning curves of the recognition model (1-N.E.D. vs. training epoch). Blue curves indicate the 1-N.E.D. evaluation results on the training data and green curves indicate results on the test data. Considering that TKH is employed during the pre-training phase, we also display results from the TKH test data using yellow curves.]

Influence of Joint Optimization. We investigate the impact of joint optimization and focus on the performance improvement of the segmentation model, as the capability of recognition is highly dependent on the segmentation. As shown in Table 1, the performances of KESAR and KESAR (w/o JOPT) on the MTH dataset are comparable. This is primarily because the challenges posed by the MTH dataset stem from complex page layouts and variations in character scale across the image, whereas characters within the same text line are generally clearly separated. On the GBACHD dataset, there is a noticeable improvement in model performance, with KESAR surpassing KESAR (w/o JOPT) across all three metrics. This enhancement aligns with our expectations, given that many characters in the GBACHD images are densely clustered, and the recognition model aids in segmenting these difficult instances.
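For reference, the 1-N.E.D. metric used in the ablation above can be computed as in the following sketch, which normalizes the Levenshtein distance by the longer of the two strings (the convention of the ICDAR 2019 challenge metric cited above); this is a generic illustration, not code from the KESAR release.

```python
def one_minus_ned(pred: str, gt: str) -> float:
    """1-N.E.D.: 1 minus the Levenshtein distance normalized by the longer string."""
    m, n = len(pred), len(gt)
    if max(m, n) == 0:
        return 1.0  # two empty strings match perfectly
    d = list(range(n + 1))  # DP row: distances against prefixes of gt
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                           # deletion
                       d[j - 1] + 1,                       # insertion
                       prev + (pred[i - 1] != gt[j - 1]))  # substitution / match
            prev = cur
    return 1.0 - d[n] / max(m, n)
```

A dataset-level 1-N.E.D. score, as plotted in Figure 4, is then simply the mean of this quantity over all evaluated text lines.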
5.4 Comparisons with State-of-the-Art Methods

Text Line Segmentation. We employ PSENet (Wang et al. 2019b) and FCENet (Zhu et al. 2021) as comparison methods, both implemented with the mmocr codebase (Kuang et al. 2021). By incorporating a progressive scale expansion mechanism and multi-scale kernels, PSENet is able to gradually expand the predicted text-line region, which enables the model to effectively distinguish densely packed text lines. Benefiting from the expressive power of the Fourier transformation for representing closed contours, FCENet is especially good at detecting highly distorted text lines, which are prevalent in the GBACHD dataset. Table 1 summarizes our results, including text-line segmentation recall, precision, and F-measure on the MTH and GBACHD datasets. On MTH, KESAR surpasses all comparison models: its F-measure (93.2%) is 5.4% higher than PSENet's and 13.0% higher than FCENet's. The GBACHD dataset is significantly more challenging than the MTH dataset, featuring a higher level of noise and distortion. As a result, we observe a substantial decrease in the performance of the comparison methods. Nevertheless, KESAR still performs well, achieving the highest F-measure of 94.2%.

Table 2: Recognition model comparison results on MTH, GBACHD, and TKH. The performance metric is 1-N.E.D.

Method        | MTH   | GBACHD | TKH
RobustScanner | 0.905 | 0.722  | 0.991 / 0.989
ABINet        | 0.900 | 0.718  | 0.992 / 0.988
KESAR         | 0.924 | 0.875  | 0.970 / 0.988

Table 3: Rate of successful abduction w.r.t. training epoch on MTH and GBACHD.

Epoch  | 1     | 2     | 4     | 10    | 15    | 25
MTH    | 0.949 | 0.989 | 0.992 | 0.994 | 0.995 | 0.996
GBACHD | 0.919 | 0.977 | 0.983 | 0.988 | 0.990 | 0.992

Text Recognition. We utilize RobustScanner (Yue et al. 2020) and ABINet (Fang et al. 2021) as comparison methods, also implemented with the mmocr codebase. Since our method utilizes TKH for pre-training in the experiments on MTH and GBACHD, we include TKH in the training data of the other methods for a fair comparison. Therefore, results on TKH represent the performance of models trained on both MTH/GBACHD and TKH while tested solely on TKH. Our evaluation metric is 1-N.E.D. As shown in Table 2, KESAR achieves superior performance on MTH and GBACHD. Remarkably, on GBACHD, KESAR outperforms the other methods by at least 0.153 1-N.E.D. On TKH, the comparison methods exhibit excellent performance and our method is competitive. It is worth noting that our training data comprise predicted text lines generated by the segmentation model, whereas the comparative methods use ground-truth text lines as training data.

6 Conclusion

To exploit human knowledge in document segmentation and recognition, we propose a novel approach based on the abductive learning framework, aiming at using background knowledge to enhance character extraction and prediction performance. In detail, our method enables the model to refine segmentation results by utilizing structural knowledge, and the proposed abductive matching mechanism can generate character-level training data for the recognition model from text-line labels. Moreover, through joint optimization, the segmentation and recognition models can mutually benefit and enhance each other's performance.
Empirical evaluation validates that our learning approach can significantly improve the performance of both segmentation and recognition models, outperforming the state-of-the-art OCR methods. KESAR is a general-purposed approach with sufficient flexibility in implementation, e.g., the segmentation and recognition models can be replaced by other networks and the knowledge base can be modified to adapt to other application scenarios. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8415 Acknowledgments This research was supported by NSFC (62206124) and JiangsuSF (BK20232003). References Baek, Y.; Lee, B.; Han, D.; Yun, S.; and Lee, H. 2019. Character region awareness for text detection. In CVPR, 9365– 9374. Dai, W.-Z.; Xu, Q.; Yu, Y.; and Zhou, Z.-H. 2019. Bridging machine learning and logical reasoning by abductive learning. NeurIPS. Fang, S.; Xie, H.; Wang, Y.; Mao, Z.; and Zhang, Y. 2021. Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition. In CVPR, 7098–7107. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770–778. Huang, Y.-X.; Dai, W.-Z.; Cai, L.-W.; Muggleton, S. H.; and Jiang, Y. 2021. Fast Abductive Learning by Similarity-based Consistency Optimization. In NeurIPS, 26574–26584. Huang, Y.-X.; Dai, W.-Z.; Jiang, Y.; and Zhou, Z.-H. 2023a. Enabling Knowledge Refinement upon New Concepts in Abductive Learning. In AAAI, 7928–7935. Huang, Y.-X.; Dai, W.-Z.; Yang, J.; Cai, L.-W.; Cheng, S.; Huang, R.; Li, Y.-F.; and Zhou, Z.-H. 2020. SemiSupervised Abductive Learning and Its Application to Theft Judicial Sentencing. In ICDM, 1070–1075. Huang, Y.-X.; Sun, Z.; Li, G.; Tian, X.; Dai, W.-Z.; Hu, W.; Jiang, Y.; and Zhou, Z.-H. 2023b. Enabling Abductive Learning to Exploit Knowledge Graph. In IJCAI, 3839– 3847. Jaderberg, M.; Simonyan, K.; Zisserman, A.; and Kavukcuoglu, K. 2015. Spatial Transformer Networks. In NeurIPS, 2017–2025. Kuang, Z.; Sun, H.; Li, Z.; Yue, X.; Lin, T. H.; Chen, J.; Wei, H.; Zhu, Y.; Gao, T.; Zhang, W.; Chen, K.; Zhang, W.; and Lin, D. 2021. MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding. In ACM MM, 3791–3794. Liao, M.; Zhang, J.; Wan, Z.; Xie, F.; Liang, J.; Lyu, P.; Yao, C.; and Bai, X. 2019. Scene text recognition from twodimensional perspective. In AAAI, 8714–8721. Liu, Y.; Chen, H.; Shen, C.; He, T.; Jin, L.; and Wang, L. 2020. ABCNet: Real-Time Scene Text Spotting With Adaptive Bezier-Curve Network. In CVPR, 9806–9815. Long, S.; He, X.; and Yao, C. 2021. Scene Text Detection and Recognition: The Deep Learning Era. International Journal of Computer Vision, 129(1): 161–184. Ma, W.; Zhang, H.; Jin, L.; Wu, S.; Wang, J.; and Wang, Y. 2020. Joint Layout Analysis, Character Detection and Recognition for Historical Document Digitization. ICFHR, 31–36. Magnani, L. 2009. Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning. Springer-Verlag. Manhaeve, R.; Dumancic, S.; Kimmig, A.; Demeester, T.; and De Raedt, L. 2018. DeepProbLog: Neural Probabilistic Logic Programming. In NeurIPS, 3749–3759. Raedt, L. D.; Dumancic, S.; Manhaeve, R.; and Marra, G. 2020. From Statistical Relational to Neuro-Symbolic Artificial Intelligence. In IJCAI, 4943–4950. Tsamoura, E.; Hospedales, T.; and Michael, L. 2021. Neural-symbolic integration: A compositional perspective. In AAAI, 5051–5060. Wang, P.-W.; Donti, P.; Wilder, B.; and Kolter, Z. 2019a. 
Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In ICML, 6545–6554. Wang, T.; Zhu, Y.; Jin, L.; Luo, C.; Chen, X.; Wu, Y.; Wang, Q.; and Cai, M. 2020. Decoupled attention network for text recognition. In AAAI, 12216–12224. Wang, W.; Xie, E.; Li, X.; Hou, W.; Lu, T.; Yu, G.; and Shao, S. 2019b. Shape robust text detection with progressive scale expansion network. In CVPR, 9336–9345. Wang, W.; Xie, E.; Song, X.; Zang, Y.; Wang, W.; Lu, T.; Yu, G.; and Shen, C. 2019c. Efficient and accurate arbitraryshaped text detection with pixel aggregation network. In CVPR, 8440–8449. Xie, Z.; Huang, Y.; Jin, L.; Liu, Y.; Zhu, Y.; Gao, L.; and Zhang, X. 2019. Weakly supervised precise segmentation for historical document images. Neurocomputing, 350: 271– 281. Xing, L.; Tian, Z.; Huang, W.; and Scott, M. R. 2019. Convolutional character networks. In CVPR, 9126–9136. Xu, J.; Zhang, Z.; Friedman, T.; Liang, Y.; and Broeck, G. 2018. A semantic loss function for deep learning with symbolic knowledge. In ICML, 5502–5511. Yang, H.; Jin, L.; Huang, W.; Yang, Z.; Lai, S.; and Sun, J. 2018. Dense and Tight Detection of Chinese Characters in Historical Documents: Datasets and a Recognition Guided Detector. IEEE Access, 6: 30174–30183. Yang, Z.; Lee, J.; and Park, C. 2022. Injecting logical constraints into neural networks via straight-through estimators. In ICML, 25096–25122. Yue, X.; Kuang, Z.; Lin, C.; Sun, H.; and Zhang, W. 2020. Robustscanner: Dynamically enhancing positional clues for robust text recognition. In ECCV, 135–151. Zhang, R.; Zhou, Y.; Jiang, Q.; Song, Q.; Li, N.; Zhou, K.; Wang, L.; Wang, D.; Liao, M.; Yang, M.; Bai, X.; Shi, B.; Karatzas, D.; Lu, S.; and Jawahar, C. V. 2019. ICDAR 2019 Robust Reading Challenge on Reading Chinese Text on Signboard. In ICDAR, 1577–1581. Zhou, Z.-H. 2019. Abductive learning: towards bridging machine learning and logical reasoning. Science China Information Sciences, 62(7): 76101. Zhou, Z.-H.; and Huang, Y.-X. 2022. Abductive Learning. In Hitzler, P.; and Sarker, M. K., eds., Neuro-Symbolic Artificial Intelligence: The State of the Art, 353–369. Amsterdam: IOS Press. Zhu, Y.; Chen, J.; Liang, L.; Kuang, Z.; Jin, L.; and Zhang, W. 2021. Fourier contour embedding for arbitrary-shaped text detection. In CVPR, 3123–3131. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8416
2024
935
18,779
Zero-1-to-3: Domain-Level Zero-Shot Cognitive Diagnosis via One Batch of Early-Bird Students towards Three Diagnostic Objectives Weibo Gao1,2, Qi Liu1,2*, Hao Wang1,2, Linan Yue1,2, Haoyang Bi1,2, Yin Gu1,2, Fangzhou Yao1,2, Zheng Zhang1,2, Xin Li1,3, Yuanjing He4 1 University of Science and Technology of China 2 Anhui Province Key Laboratory of Big Data Analysis and Application & State Key Laboratory of Cognitive Intelligence 3 Artificial Intelligence Research Institute, iFLYTEK Co., Ltd 4 The Open University of China, China {weibogao,lnyue,bhy0521,gy128,fangzhouyao,zhangzheng}@mail.ustc.edu.cn, {qiliuql,wanghao3,leexin}@ustc.edu.cn, [email protected] Abstract Cognitive diagnosis seeks to estimate the cognitive states of students by exploring their logged practice quiz data. It plays a pivotal role in personalized learning guidance within intelligent education systems. In this paper, we focus on an important, practical, yet often underexplored task: domainlevel zero-shot cognitive diagnosis (DZCD), which arises due to the absence of student practice logs in newly launched domains. Recent cross-domain diagnostic models have been demonstrated to be a promising strategy for DZCD. These methods primarily focus on how to transfer student states across domains. However, they might inadvertently incorporate non-transferable information into student representations, thereby limiting the efficacy of knowledge transfer. To tackle this, we propose Zero-1-to-3, a domain-level zero-shot cognitive diagnosis framework via one batch of early-bird students towards three diagnostic objectives. Our approach initiates with pre-training a diagnosis model with dual regularizers, which decouples student states into domain-shared and domain-specific parts. The shared cognitive signals can be transferred to the target domain, enriching the cognitive priors for the new domain, which ensures the cognitive state propagation objective. Subsequently, we devise a strategy to generate simulated practice logs for cold-start students through analyzing the behavioral patterns from early-bird students, fulfilling the domain-adaption goal. Consequently, we refine the cognitive states of cold-start students as diagnostic outcomes via virtual data, aligning with the diagnosisoriented goal. Finally, extensive experiments on six realworld datasets highlight the efficacy of our model for DZCD and its practical application in question recommendation. The code is publicly available at https://github.com/bigdataustc/Zero-1-to-3. 1 Introduction Intelligent education systems offer access to learning resources and tailor-made services, contributing significantly to the burgeoning popularity of online learning. These platforms cover a broad range of learning topics. As shown in *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Topic: English Topic: Function Practice Diagnosis Applications Mature Domain Topic: Geometry Cold-start Domain Topic: Physics Early-birds Unseen students ? Figure 1: Illustrative procedure of personalized learning in intelligent education systems. Figure 1, each topic includes an extensive question bank, empowering students to independently select specific questions for targeted practice. By analyzing student practice logs (e.g., correct or incorrect responses), their cognitive states (i.e., the proficiency on specific knowledge concepts) are estimated, which is referred to as the procedure of cognitive diagnosis (Wang et al. 
2020). The diagnostic results can support further customized applications, such as question recommendation (Liu et al. 2023a) and adaptive testing (Zhuang et al. 2023). As a result, cognitive diagnosis has garnered significant attention from both the “AI for education” community and the general populace (Nguyen 2015). Previously, a number of cognitive diagnosis models (CDMs) (Embretson and Reise 2013; Reckase 2009; Tsutsumi, Kinoshita, and Ueno 2021; Wang et al. 2020; Gao et al. 2021; Yao et al. 2023) have been developed to enhance diagnostic precision. However, many of these models encounter challenges with the “cold-start” problem. This challenge arises when an online platform launches a novel learning topic with a fresh question bank (e.g., Physics in Figure 1). At the initial launch, there exists only a limited collection of practice records from the early-bird learners, who form the first batch of students in this domain. However, practice logs of unseen students remain unavailable for model training. Consequently, the diagnostic performance of traditional CDMs is often impaired as they only work in mature domains where student practice logs are available. We call this task domain-level zero-shot cognitive diagnosis (DZCD). DZCD is an important and practical task aiming to diagnose cognitive states of unseen students, for whom The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8417 practice logs are blank in the new domain. Recently, promising strategies have emerged to address the challenge of DZCD through cross-domain CDMs (Gao et al. 2023). They primarily focus on the problem of how to transfer student cognitive signals from well-established source domains to cold-start target domains. The key point is to profile cognitive states of students based on their historical behaviors from some mature source domains and represent questions in the target domain with available features. However, they tend to overlook a crucial but rarely touched concern: what to transfer. This issue involves identifying valuable transferable student states from different topical domains, as not all cognitive signals can be transferred. We argue that there exist two kinds of student cognitive preferences in each domain, namely domain-shared and domain-specific. Let us consider Geometry and Function domains as an illustration. Cognitive signals shared across domains (e.g., basic skills) usually provide valuable clues for cross-domain transfer. For example, students may perform similarly on simple questions in both domains because the knowledge examined in these straightforward questions is usually basic and general, e.g., Addition and Subtraction, which is suitable for cross-domain propagation. In contrast, domain-specific states (e.g., high-level concept Cube in Geometry and concept Polynomials in Function) offer deeper insights within their respective domain, while they could be irrelevant or detrimental for other domains, which is commonly non-transferable. For instance, understanding concept Cube might not contribute significantly to mastering Polynomials. Unfortunately, previous models inevitably encode domain-specific information into student states intended for cross-domain transfer, which hampers transfer performance and can even the negative transfer problem (Hu, Zhang, and Yang 2018). In this paper, we aim to delve deeper into the problem of “what to transfer” to unlock the full potential of CDMs under DZCD, which is significant but challenging. 
Ideally, a desirable solution should fulfill the following three objectives (Gao et al. 2023): (1) diagnosis-oriented: the model should proficiently diagnose student cognitive states in DZCD scenarios. (2) cognitive signal propagation: the model must effectively extract domain-shared states from source domains for the cross-domain propagation of student cognitive signals. (3) domain-adaption: for any new coldstart domain, the model is expected to be domain-adaptive, which needs to fully leverage available domain-specific cues in cold-start scenarios, e.g., the first batch of early-bird students in the domain. Motivated by the above considerations, we propose Zero1-to-3, a domain-level zero-shot cognitive diagnosis framework via one batch of early-bird students towards three diagnostic objectives. Specifically, our approach begins with the pre-training of a CDM across multiple source domains to establish an initial profile of students’ states. During this phase, we separate student profiles into domain-shared and domain-specific parts as input of the CDM. Meanwhile, two well-designed regularizers are placed to guide their optimizations. After pre-training, the shared cognitive signals can be refined and can be transferred to the new domain to provide useful and broad experiences, which ensures the cognitive signal propagation objective. Next, our focus shifts to effectively adapting cold-start students whose practice logs are unavailable. To achieve this, we first use domain-shared states as initial student embeddings in the new domain. Then, we devise a strategy to generate simulated practice logs for unseen students, to fine-tune the states for unseen students. Here, we leverage the cognitive similarity between early-bird and unseen students to transfer practice patterns from the former to the latter, resulting in synthesized data. Notably, as the practice behaviors of early-bird students originate directly from the new domain, they offer distinct insights crucial for achieving the domain-adaption goal. After attaining warm-up states for cold-start students through fine-tuning using simulation logs, we can proceed with diagnostic predictions in the new domain characterizing our approach as diagnosis-oriented. Notably, it is important to highlight that Zero-1-to-3 framework, as a general framework, is applicable across a wide range of CDMs. Finally, extensive experimental results on six real-world datasets not only prove that Zero-1-to-3 can effectively perform DZCD and outperform typical baselines, but also highlight a practical application in question recommendation. 2 Related Work Traditional Cognitive Diagnosis Models. Cognitive diagnosis has been well-researched for decades in educational psychology (Bi et al. 2023; Chen et al. 2023). It aims to profile the cognitive states of students by exploiting their response results (e.g., correct or wrong) (Embretson and Reise 2013; Tong et al. 2022). For instance, Item Response Theory (IRT) (Embretson and Reise 2013) and Multidimensional IRT (MIRT) (Reckase 2009) use unidimensional/multidimensional latent parameters indicating student ability and question difficulty, respectively, to predict student response on this question in a logistic way. Deterministic Inputs, Noisy-And gate (DINA) (De La Torre 2009), NeuralCD (Wang et al. 2020) and RCD (Gao et al. 2021) directly model student proficiency of specific knowledge concepts. Cross-domain Cognitive Diagnosis. 
Cross-domain cognitive diagnosis is proposed to address the DZCD issue, which arises when an online education platform introduces new domains, resulting in unavailable practice logs for most students. DZCD is a practical task, but research in this area is almost blank (Gao et al. 2023). Existing studies (Gao et al. 2023) on DZCD primarily concentrate on effectively transferring student cognitive signals from source domains to cold-start target domains through cross-domain modeling. The primary challenge is to construct student cognitive representations based on existing domains and utilize question attributes (e.g., textual contents (Liu et al. 2019; Schmucker and Mitchell 2022) and question relational attributes (Gao et al. 2023)) as intermediaries for the cross-domain transfer. However, these methods might inadvertently incorporate non-transferable information into student representations, limiting transfer performance.

3 Preliminaries

3.1 Cognitive Diagnosis Model

We first introduce the general form of cognitive diagnosis models (CDMs). We select the general form of CDMs proposed in TechCD (Gao et al. 2023) as our framework, which consists of three types of basic elements: students, questions and knowledge concepts. The diagnosis process can be abstracted as modeling student-question-concept interactions by predicting student practice performance as follows:

$$\hat{y}_{u,v} = \mathcal{M}_{CD}(\boldsymbol{u}, \boldsymbol{v}, \mathbf{C}),$$

where $\boldsymbol{u}$ and $\boldsymbol{v}$ are the traits of student $u$ (e.g., cognitive states) and question $v$ (e.g., difficulty), $\mathbf{C}$ is the embedding matrix of all knowledge concepts, and $\mathcal{M}_{CD}(\cdot)$ is the diagnostic function predicting the response result $\hat{y}_{u,v}$ of student $u$ on question $v$ (correct or wrong). For instance, $\mathcal{M}_{CD}(\cdot)$ is a logistic-like function for IRT/MIRT with uni-/multidimensional latent parameters of student ability and question difficulty, respectively, i.e., $\hat{y}_{u,v} = \mathrm{sigmoid}(\boldsymbol{u} - \boldsymbol{v})$, where $\boldsymbol{u}$ and $\boldsymbol{v}$ are enhanced by fusing the concept features $\mathbf{C}$. In NeuralCD, it is a multi-layer neural network $f(\cdot)$, i.e., $\hat{y}_{u,v} = f(\boldsymbol{q}_v \circ (\boldsymbol{p}_u - \boldsymbol{d}_v))$, where $\boldsymbol{p}_u \leftarrow f_u(\boldsymbol{u}, \mathbf{C})$ and $\boldsymbol{d}_v \leftarrow f_v(\boldsymbol{v}, \mathbf{C})$. Two fully connected layers $f_u(\cdot)$ and $f_v(\cdot)$ fuse the knowledge concepts into the student and question embeddings, respectively. Each element $p_{u,c}$ of $\boldsymbol{p}_u$ denotes the mastery level of student $u$ on concept $c$. $\boldsymbol{q}_v$ masks unrelated concepts for question $v$ via the element-wise product $\circ$, where $q_{v,c} = 1$ if question $v$ associates concept $c$ and $q_{v,c} = 0$ otherwise. To ensure the psychometric interpretability of predictions, CDMs should strictly follow the monotonicity assumption: the probability of correctly answering a question monotonically increases with student cognitive proficiency, i.e., $\partial \mathcal{M}_{CD} / \partial \boldsymbol{u} > 0$ (Tong et al. 2022).
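As a concrete reference for the interaction function above, here is a minimal PyTorch sketch of a NeuralCD-style diagnostic model. The layer sizes, the omission of the $f_u$/$f_v$ concept-fusion layers, and the weight-clamping trick for monotonicity are illustrative choices on our part, not a transcription of the authors' implementation.

```python
import torch
import torch.nn as nn

class MiniNeuralCD(nn.Module):
    """Sketch of y_hat = f(q_v ∘ (p_u − d_v)) from Section 3.1."""
    def __init__(self, n_students, n_questions, n_concepts, hidden=(512, 256)):
        super().__init__()
        self.theta = nn.Embedding(n_students, n_concepts)   # raw student trait u
        self.diff = nn.Embedding(n_questions, n_concepts)   # raw question trait v
        layers, dim = [], n_concepts
        for h in hidden:
            layers += [nn.Linear(dim, h), nn.Sigmoid()]
            dim = h
        layers += [nn.Linear(dim, 1), nn.Sigmoid()]
        self.f = nn.Sequential(*layers)                      # interaction function

    def forward(self, u_idx, v_idx, q_mask):
        p_u = torch.sigmoid(self.theta(u_idx))  # mastery level on every concept
        d_v = torch.sigmoid(self.diff(v_idx))   # difficulty on every concept
        x = q_mask * (p_u - d_v)                # zero out concepts v does not test
        return self.f(x).squeeze(-1)            # P(student answers v correctly)

    @torch.no_grad()
    def clamp_monotonic(self):
        # One common way to honor the monotonicity assumption: keep the
        # interaction network's weights non-negative after each update.
        for m in self.f:
            if isinstance(m, nn.Linear):
                m.weight.clamp_(min=0.0)
```

Calling clamp_monotonic() after every optimizer step keeps the prediction non-decreasing in $\boldsymbol{p}_u$, since a composition of increasing activations and non-negative linear maps is monotone in its input.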
3.2 Problem Setup

In the domain-level zero-shot cognitive diagnosis (DZCD) scenario, we consider $M$ source domains $\mathcal{S}_1, \mathcal{S}_2, \dots, \mathcal{S}_M$ and one target domain $\mathcal{T}$. In the $k$-th source domain $\mathcal{S}_k$, let $U_{\mathcal{S}_k}$, $V_{\mathcal{S}_k}$ and $C_{\mathcal{S}_k}$ be the sets of students, questions and concepts. The practice logs in this domain are depicted as $L_{\mathcal{S}_k} = \{(u^k, v^k, y^k_{u,v}) \mid y^k_{u,v} \in \{0,1\},\ u^k \in U_{\mathcal{S}_k},\ v^k \in V_{\mathcal{S}_k}\}$, where $y^k_{u,v} = 1$ represents that student $u$ answers question $v$ correctly in source domain $\mathcal{S}_k$, and $y^k_{u,v} = 0$ otherwise. In the target domain $\mathcal{T}$, the student set $U_{\mathcal{T}}$ is split into two subsets: $U_{\mathcal{T}}^{(0)}$ and $U_{\mathcal{T}}^{?}$ denote the first batch of early-bird students and the remaining unseen students, respectively. $V_{\mathcal{T}}$ and $C_{\mathcal{T}}$ are the sets of questions and concepts, respectively. The practice records of $U_{\mathcal{T}}^{(0)}$ are available, denoted as $L_{\mathcal{T}}^{(0)} = \{(u, v, y_{u,v}) \mid y_{u,v} \in \{0,1\},\ u \in U_{\mathcal{T}}^{(0)},\ v \in V_{\mathcal{T}}\}$. The student set of the target domain is a subset of the student sets of the source domains, while the question sets of the individual domains (both source and target) are entirely disjoint. Besides, let $|U|$ denote the cardinality of a set $U$, and $\|\cdot\|$ the $\ell_2$ norm. Based on the above setup, we aim to diagnose the cognitive states of the unseen cold-start students $U_{\mathcal{T}}^{?}$ by fully exploiting the available practice records (i.e., $L_{\mathcal{S}_k} \cup L_{\mathcal{T}}^{(0)}$) through student performance prediction.

[Figure 2: The main framework of Zero-1-to-3. (a) shows the pre-training stage in source domains with cognitive state decoupling. (b) is the adaptive diagnosis stage in the new domain, where steps 1-4 denote the execution sequence: step 1 is the initialization step, step 2 refines early-bird student states, step 3 simulates virtual logs for unseen students, and the simulated logs are used to fine-tune cold-start students in step 4.]

4 Methodology

Our framework contains a pre-training process with cognitive state decoupling (Figure 2 (a)) and an adaptive diagnosis stage with fine-tuning via simulated logs (Figure 2 (b)). Next, we introduce them, starting from the embedding layer.

4.1 Embedding Layer

This layer offers initialized representations for students, questions and knowledge concepts in each domain. Specifically, it contains several parameter matrices as student embeddings, i.e., $\mathbf{U}_{U_{\mathcal{S}_k}} \in \mathbb{R}^{|U_{\mathcal{S}_k}| \times F}$ and $\mathbf{U}_{U_{\mathcal{T}}} \in \mathbb{R}^{|U_{\mathcal{T}}| \times F}$ for students in source domain $\mathcal{S}_k$ and target domain $\mathcal{T}$, respectively, where $F$ is the dimensional size. For questions, we adopt a pre-trained BERT (Devlin et al. 2018) to encode each question's textual content as its initial representation by averaging its word-level embeddings. The content-based question embeddings in each domain are $\mathbf{V}_{V_{\mathcal{S}_k}} \in \mathbb{R}^{|V_{\mathcal{S}_k}| \times F}$ and $\mathbf{V}_{V_{\mathcal{T}}} \in \mathbb{R}^{|V_{\mathcal{T}}| \times F}$. Each concept embedding is obtained by averaging the embeddings of all related questions; we denote the concept embeddings in the source and target domains as $\mathbf{C}_{C_{\mathcal{S}_k}} \in \mathbb{R}^{|C_{\mathcal{S}_k}| \times F}$ and $\mathbf{C}_{C_{\mathcal{T}}} \in \mathbb{R}^{|C_{\mathcal{T}}| \times F}$, respectively. Note that the content-based encoder copes well with cold-start questions/concepts by encoding their semantics (Liu et al. 2019). Our focus falls on student-side cross-domain transfer.

4.2 Cognitive State Decoupling

During this stage, a CDM is pre-trained across multiple source domains to establish an initial profile of students' states. To ensure effective cross-domain transfer, the model should extract domain-shared cognitive representations from the input student embeddings within the source domains. However, prior cross-domain CDMs struggle to differentiate between general and specific signals, potentially hindering the effectiveness of cross-domain transfer. Inspired by the success of decoupling learning in various fields (Wang et al. 2023; Liu et al. 2023b), we propose to decouple student states into domain-shared and domain-specific parts. The initial decoupled embeddings for students in each source domain are obtained as follows:

$$\boldsymbol{u}^k_{sha} = f_{sha}(\boldsymbol{u}^k), \quad \boldsymbol{u}^k_{spe} = f_{spe}(\boldsymbol{u}^k),$$

where $\boldsymbol{u}^k$ is the $u$-th row of $\mathbf{U}_{U_{\mathcal{S}_k}}$, denoting the input trait of student $u$ in source domain $\mathcal{S}_k$ from the embedding layer. The input is divided into $\boldsymbol{u}^k_{sha}$, containing domain-shared states, and $\boldsymbol{u}^k_{spe}$, emphasizing domain-specific states. $f_{sha}(\cdot)$ and $f_{spe}(\cdot)$ are cross-domain shared fully connected layers with the same input and output dimensions.
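The decoupling heads themselves are tiny. A minimal PyTorch sketch, with the layer type inferred from the description of $f_{sha}$ and $f_{spe}$ as shared, dimension-preserving fully connected layers:

```python
import torch.nn as nn

class Decoupler(nn.Module):
    """Split a student embedding into domain-shared and domain-specific parts."""
    def __init__(self, dim):
        super().__init__()
        # Both heads are shared across all source domains and preserve F.
        self.f_sha = nn.Linear(dim, dim)
        self.f_spe = nn.Linear(dim, dim)

    def forward(self, u):
        return self.f_sha(u), self.f_spe(u)  # (u_sha, u_spe)
```

Because the two heads are shared across all source domains, the same student's embeddings from different domains are projected into a common shared/specific split, which is what allows the regularizers below to compare them.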
Then, we devise two mathematical regularizers to optimize these two types of signals separately, inspired by (Zhao et al. 2017; Chen et al. 2018, 2022). Unlike prior decoupling methods, our insight is rooted in analysis specific to the educational field, thoughtfully tailored to the DZCD task. Domain-shared states reveal general cognitive preferences across domains, providing a panoramic perspective for inferring student preferences. Leveraging these shared states can notably enhance the cross-domain transfer capability of CDMs. However, these broad clues cannot offer a fine-grained view of student preference in each individual domain, as they exclude domain-specific information. Hence, domain-shared states should fulfill the following requirement:

Requirement 1 Suppose a CDM has obtained the domain-shared student representations from a domain $\mathcal{S}_k$. It may exhibit strong overall predictions across source domains. Meanwhile, its performance within $\mathcal{S}_k$ is hindered by the absence of domain-specific information.

The above requirement can be approximately expressed through the following regularization function:

$$\mathcal{L}_{sha} = \sum_{k=1}^{M} \Big( \mathbb{E}_{L_{\mathcal{S}_i},\, i \in \{1,\dots,M\}} \big| y^i_{u,v} - \mathcal{M}_{CD}(\boldsymbol{u}^k_{sha}, \boldsymbol{v}^i, \mathbf{C}^i) \big| - \sum_{(u^k, v^k, y^k_{u,v}) \in L_{\mathcal{S}_k}} \big| y^k_{u,v} - \mathcal{M}_{CD}(\boldsymbol{u}^k_{sha}, \boldsymbol{v}^k, \mathbf{C}^k) \big| \Big),$$

where the first term minimizes the expectation of the global prediction errors, while the second term deliberately undermines prediction performance within the specific domain.

Domain-specific states contain student preferences for unique and specialized knowledge within the relevant learning topic, often supplying richer in-domain insights. Logically, by harnessing these specific cues within a particular domain, CDMs can achieve enhanced diagnostic performance, as these clues hold significant in-domain value. However, this unique information might compromise diagnostic accuracy across other domains, as it is typically irrelevant to unrelated domains. Hence, the domain-specific cognitive states should satisfy the following requirement:

Requirement 2 Assume a CDM has obtained the domain-specific student states from $\mathcal{S}_k$. It probably leads to impressive predictive performance within domain $\mathcal{S}_k$. Conversely, prediction accuracy is expected to be suboptimal when the model is employed in other domains.

The above requirement can be further expressed through the minimization of the following regularization objective:

$$\mathcal{L}_{spe} = \sum_{k=1}^{M} \sum_{(u^k, v^k, y^k_{u,v}) \in L_{\mathcal{S}_k}} \Big( \big| y^k_{u,v} - \mathcal{M}_{CD}(\boldsymbol{u}^k_{spe}, \boldsymbol{v}^k, \mathbf{C}^k) \big| - \sum_{i \in \{1,\dots,M\} \setminus \{k\}} \big| y^k_{u,v} - \mathcal{M}_{CD}(\boldsymbol{u}^i_{spe}, \boldsymbol{v}^k, \mathbf{C}^k) \big| \Big),$$

where the first term narrows the gap between the actual performance $y^k_{u,v}$ and the prediction using the specific state of student $u^k$ on question $v^k$ from domain $\mathcal{S}_k$, and the second term encourages misleading predictions when domain-specific student states from other domains are used.

To summarize, the pre-training stage is directed by the two regularizers for cognitive state decoupling, $\mathcal{L}_{dec} = \mathcal{L}_{spe} + \mathcal{L}_{sha}$. The shared cognitive signals from different domains offer broad experience that is useful when students encounter new areas, making it easier to transfer knowledge between domains. Consequently, this process preserves the cognitive signal propagation objective.
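A heavily simplified single-batch sketch of $\mathcal{L}_{dec}$ is given below. It assumes the data loader has already aligned students across domains, i.e., U_sha[k] and U_spe[k] hold the decoupled domain-k states of the same batch of students whose logs[i] = (v, C, y) tensors come from domain i. The raw subtraction of error terms follows the equations; a practical implementation would likely weight or clip the negative terms to keep the loss bounded.

```python
import torch

def pred_err(mcd, u, v, C, y):
    # Mean absolute gap between observed responses and M_CD predictions.
    return (y - mcd(u, v, C)).abs().mean()

def decoupling_loss(mcd, U_sha, U_spe, logs):
    """L_dec = L_sha + L_spe over one student-aligned mini-batch of M domains."""
    M = len(logs)
    l_sha = torch.tensor(0.0)
    l_spe = torch.tensor(0.0)
    for k in range(M):
        # L_sha: domain-k shared states should fit all domains on average
        # (first term) but deliberately not fit domain k too closely (second).
        global_err = sum(pred_err(mcd, U_sha[k], *logs[i]) for i in range(M)) / M
        l_sha = l_sha + global_err - pred_err(mcd, U_sha[k], *logs[k])
        # L_spe: domain-k specific states should fit domain k (first term),
        # while other domains' specific states should mispredict on k (second).
        l_spe = l_spe + pred_err(mcd, U_spe[k], *logs[k])
        l_spe = l_spe - sum(pred_err(mcd, U_spe[i], *logs[k])
                            for i in range(M) if i != k)
    return l_sha + l_spe
```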
4.3 Domain-adaptive Cognitive Diagnosis

After completing pre-training, our focus shifts towards performing domain-adaptive cognitive diagnosis for unseen students. A simple yet effective solution is to directly use the domain-shared states as initial student representations for DZCD, as has been done in previous studies (Gao et al. 2023). However, we contend that this is not domain-adaptive, as it ignores in-domain considerations. To this end, we devise a strategy to generate simulated practice logs for unseen students by fully utilizing the available early-bird students. Based on the simulated practice logs, we can warm up student states by fine-tuning within the new domain.

Specifically, given a pre-trained CDM $\mathcal{M}_{CD}$ and the decoupled embeddings of each student $u$, i.e., $\{\boldsymbol{u}^k_{sha}, \boldsymbol{u}^k_{spe}\}_{k=1}^{M}$, we freeze the model parameters and initialize the state of each student $u \in U_{\mathcal{T}}$ using his/her domain-shared representations from the source domains, inspired by related techniques (Yang et al. 2017; Yue et al. 2023), as follows:

$$\boldsymbol{u}^{\mathcal{T}} = \frac{1}{M} \sum_{k=1}^{M} \boldsymbol{u}^k_{sha}, \quad (1)$$

where average pooling merges the shared signals into the initial state $\boldsymbol{u}^{\mathcal{T}}$ of each student $u$ across the $M$ source domains; this operation augments the representations compared to those from any single domain and smooths biases across domains (Wang et al. 2021; Zhu et al. 2023). The domain-shared states are transferable and thus provide valuable priors.

Based on this setup, we utilize the practice logs of the early-bird students $u \in U_{\mathcal{T}}^{(0)}$ to refine their cognitive states, aiming to establish an initial understanding of the new domain. This process is optimized by minimizing the difference between the prediction of $\mathcal{M}_{CD}(\cdot)$ and the actual response $y_{u,v}$:

$$\mathcal{L}_{eb\_stu} = \sum_{(u,v,y_{u,v}) \in L_{\mathcal{T}}^{(0)}} \big| y_{u,v} - \mathcal{M}_{CD}(\boldsymbol{u}^{\mathcal{T}}, \boldsymbol{v}^{\mathcal{T}}, \mathbf{C}^{\mathcal{T}}) \big|.$$

Next, we aim to generate simulated practice logs for the remaining unseen students by exploiting in-domain clues from the early-bird students. Our basic assumption is that students with similar cognitive preferences are likely to achieve similar practice performance in a given domain (Long et al. 2022; Yin et al. 2023). Thus, we leverage the cognitive similarity between early-bird and unseen students to transfer practice patterns from the former to the latter, resulting in synthesized data. With this in mind, we introduce a strategy to extract student similarities from the source domains, since the available clues in the target domain are very few. In detail, for each early-bird student $u$, we first find a reference source domain $\mathcal{S}_k$ by aligning the student's refined embedding in the target domain with the domain-specific states originating from each source domain:

$$\mathcal{S}_k = \arg\max_{k \in \{1,\dots,M\}} \mathrm{sim}\big(\boldsymbol{u}^{\mathcal{T}}, \boldsymbol{u}^k_{spe}\big), \quad (2)$$

where $\mathrm{sim}(\cdot,\cdot)$ denotes the similarity function, for which we use cosine similarity. $\mathcal{S}_k$ is selected as the reference domain because student $u$ has the most similar states between $\mathcal{S}_k$ and the target domain. Within $\mathcal{S}_k$, we compute the similarity score between student $u$ and each unseen student $i$ using their domain-specific representations:

$$s_{u,i} = \mathrm{sim}\big(\boldsymbol{u}^k_{spe}, \boldsymbol{i}^k_{spe}\big), \quad i \in U_{\mathcal{T}}^{?}, \quad (3)$$

where $\boldsymbol{u}^k_{spe}$ and $\boldsymbol{i}^k_{spe}$ are the domain-specific states pre-trained in $\mathcal{S}_k$ of early-bird student $u$ and unseen student $i$, respectively. Then, we rank the unseen students by similarity score and select the top $p$ individuals to form a similar-peer set for student $u$, denoted as $\mathcal{P}_u$. Subsequently, for each selected unseen student $\hat{i} \in \mathcal{P}_u$, virtual data can be approximately generated by duplicating the practice records of $u$ in the target domain, since their performances are similar:

$$L_{\mathcal{P}_{\hat{i}}} \leftarrow L_{\mathcal{T}_u} \subset L_{\mathcal{T}}^{(0)}. \quad (4)$$

By repeating the above steps (Eqs. (2)-(4)) for each early-bird student, we can generate practice logs for part of the unseen students in the target domain; a code sketch of this simulation procedure is given below. Note that the simulation process might not cover every cold-start student, a concern linked to the parameter $p$.
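A minimal Python sketch of the simulation procedure (Eqs. (2)-(4)), assuming pre-computed state tensors and per-student log lists; the tensor layouts and helper names are ours:

```python
import torch
import torch.nn.functional as F

def simulate_logs(u_T, u_spe, unseen_spe, logs_T, p=50):
    """Generate virtual target-domain logs for unseen students.

    u_T:        refined target states of the E early-bird students, [E, F]
    u_spe:      list of M tensors [E, F] — early birds' domain-specific states
    unseen_spe: list of M tensors [N, F] — unseen students' domain-specific states
    logs_T:     logs_T[e] = list of (question, response) pairs of early bird e
    """
    virtual = {}
    M, E = len(u_spe), u_T.size(0)
    for e in range(E):
        # Eq. (2): reference domain = argmax_k cos(u_T[e], u_spe[k][e])
        ref = max(range(M), key=lambda k: F.cosine_similarity(
            u_T[e], u_spe[k][e], dim=0).item())
        # Eq. (3): similarity of e to every unseen student within the reference domain
        sims = F.cosine_similarity(unseen_spe[ref],
                                   u_spe[ref][e].unsqueeze(0), dim=1)
        peers = torch.topk(sims, min(p, sims.numel())).indices.tolist()
        # Eq. (4): the top-p peers inherit e's real target-domain records
        for i in peers:
            virtual.setdefault(i, []).extend(logs_T[e])
    return virtual
```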
While a larger 𝑝 could involve more students, such an approach is not recommended due to the potential introduction of noise. Based on simulation data, denoted as 𝐿P, we fine-tune the cognitive states of unseen students in the new domain as: L𝑐𝑜𝑙𝑑𝑠𝑡𝑢= ∑︁ (𝑢,𝑣,𝑦𝑢,𝑣)∈𝐿P 𝑦𝑢,𝑣−M𝐶𝐷(𝒖T, 𝒗T, 𝑪T) . (5) Through optimization with the above loss, the cognitive embeddings of cold-start students can be augmented, ensuring our framework is diagnosis-oriented. Concurrently, this stage maximizes the utilization of domain-specific cues from the perspective of early-bird students, aligning with the domain-adaptation objective. Datasets #students #questions #concepts #logs Geometry 15,283 2,299 251 127,570 Function 15,404 2,172 201 121,512 Probability 4,076 246 32 10,237 Physics 13,369 2,699 552 146,326 Arithmetic 14,073 1,828 200 86,699 English 5,906 409 135 24,739 Total 21,068 9,653 1,371 517,083 Table 1: Some basic statistics of the datasets. 5 Experiments We conduct comprehensive experiments to address the following research questions: • RQ1 How powerful is Zero-1-to-3 for the DZCD task? • RQ2 How effective are the key components of the Zero1-to-3 model? • RQ3 Can the Zero-1-to-3 perform cognitive diagnosis well beyond the cold-start stage? • RQ4 How to apply our framework to provide personalized learning guidance? 5.1 Datasets We conduct experiments on six real-world datasets i.e., Geometry, Function, Probability, Physics, Arithmetic and English, which are collected from the iFLYTEK Learning Machine1. All the datasets provide student practice records, question textual contents and question-concept correlations, where each question associates one knowledge concept. For each dataset, we reserve only the first attempt of each question to ensure that student states are static following the (Wang et al. 2020). Each dataset is treated as a domain. We switch their roles, each acting as the cold-start target domain and leaving the other five as the source domains. We split each source domain by randomly selecting two historical interactions from each student’s logs for validation, with the remaining data serving as the training set, similar to the widely used leave-one-out evaluation (Rendle et al. 2009). We randomly select some students (reported in section 5.2) as early birds (the order in which student commence new domains does not influence one another) to introduce data diversity for each domain when acting as the target domain. Besides, to train the Oracle models (section 5.2), we also split the target domain’s dataset into training (70%), validation (10%), and test sets (20%). The basic statistics of datasets are listed in Table 1. 5.2 Experimental Setup Baselines To verify the effectiveness of our model, we apply our framework to three well-adopted CDMs, i.e., IRT (Embretson and Reise 2013), MIRT (Reckase 2009) and NeuralCD (Wang et al. 2020) (introduced in section 3.1). We call them Zero-CDM, e.g., Zero-IRT. We select several baselines for comparison. 
Among them, the Random and 1https://xxj.xunfei.cn/ The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8421 Geometry as Target Function as Target Probability as Target Methods ACC ↑AUC ↑RMSE ↓ ACC ↑AUC ↑RMSE ↓ ACC ↑AUC ↑RMSE ↓ Random 49.91 49.83 57.80 49.97 50.04 57.76 50.17 50.96 57.60 IRT Oracle 73.57∗ 79.86∗ 44.54∗ 74.16∗ 80.44∗ 42.71∗ 70.00∗ 72.41∗ 45.81∗ NLP 62.27 64.93 48.62 55.38 61.92 49.15 58.51 65.28 48.73 GCN 60.49 62.92 51.62 54.23 56.67 52.12 58.16 57.02 51.36 Tech 59.96 62.50 49.86 52.45 59.58 55.13 65.22 54.24 54.58 Ours 57.01 64.96 49.51 55.73 60.96 51.95 65.37 58.30 50.47 MIRT Oracle 73.53∗ 80.57∗ 42.35∗ 74.03∗ 80.92∗ 41.82∗ 70.32∗ 72.76∗ 44.78∗ NLP 56.74 68.27 52.15 55.32 68.00 55.22 65.67 64.62 47.77 GCN 56.50 59.20 51.95 59.39 63.80 50.95 65.22 58.19 50.48 Tech 56.66 63.02 49.33 58.82 61.32 53.98 62.22 60.26 51.50 Ours 51.48 65.98 50.56 60.91 68.01 49.36 68.27 70.46 45.60 NeuralCD Oracle 73.62∗ 81.10∗ 41.95∗ 74.27∗ 80.97∗ 41.99∗ 72.26∗ 75.25∗ 41.81∗ NLP 58.66 58.16 49.05 58.90 57.85 49.31 64.36 50.28 48.73 GCN 56.84 50.10 49.74 55.00 51.12 54.30 64.92 52.89 49.40 Tech 60.20 60.23 53.44 58.33 52.83 52.63 65.22 53.53 57.23 Ours 61.48 60.30 52.58 59.78 55.11 52.04 66.67 67.43 48.68 Physics as Target Arithmetic as Target English as Target Methods ACC ↑AUC ↑RMSE ↓ ACC ↑AUC ↑RMSE ↓ ACC ↑AUC ↑RMSE ↓ Random 49.94 49.99 57.75 49.99 50.03 57.69 50.30 50.23 57.58 IRT Oracle 70.46∗ 77.49∗ 43.91∗ 69.57∗ 69.14∗ 48.48∗ 74.37∗ 81.78∗ 40.96∗ NLP 56.35 59.63 49.51 60.66 60.78 50.83 64.03 66.75 48.37 GCN 54.32 52.08 52.03 54.04 51.12 51.03 54.34 50.90 55.13 Tech 56.17 57.53 56.93 55.44 52.01 51.20 56.73 55.76 51.03 Ours 56.43 52.28 56.52 56.23 52.08 50.72 54.46 58.28 50.07 MIRT Oracle 70.43∗ 77.45∗ 44.50∗ 74.60∗ 81.10∗ 42.68∗ 72.80∗ 81.01∗ 42.54∗ NLP 54.32 61.18 54.96 60.01 66.12 50.77 50.69 58.34 56.75 GCN 54.35 58.17 57.52 60.53 60.02 50.60 50.47 51.15 54.41 Tech 53.33 60.31 50.24 59.93 54.35 55.59 54.86 55.08 53.40 Ours 56.71 72.06 48.71 61.88 63.71 48.52 61.52 66.17 48.24 NeuralCD Oracle 70.22∗ 77.72∗ 43.94∗ 74.45∗ 81.80∗ 41.22∗ 72.64∗ 81.53∗ 41.98∗ NLP 56.32 56.45 51.42 60.63 57.95 49.43 50.47 50.48 51.12 GCN 54.21 54.54 50.26 60.44 58.40 52.68 50.47 52.15 50.51 Tech 54.23 52.18 50.81 56.99 52.40 49.57 57.93 56.12 50.10 Ours 61.59 68.08 50.22 60.89 58.92 51.99 60.87 60.23 48.96 Table 2: Performance comparison (%). The best zero-shot performance is highlighted in bold, and the runner-up is underlined. ↑(↓) means the higher (lower) score the better (worse) performance, the same as below. * indicates the oracle result. Oracle methods indicate the lower and upper bounds of performance. For each baseline (excluding Random), we also select IRT, MIRT and NeuralCD as diagnostic functions. • Random: The Random method predicts the students’ scores randomly from 𝑈𝑛𝑖𝑓𝑜𝑟𝑚(0, 1). • Oracle: The Oracle baseline is trained with logs from both source and target domains using the traditional CDM. • NLP-based (Liu et al. 2019): The NLP-based method uses learnable embeddings as student states in source domains and represents questions by encoding their texts. To implement it, we adopt Bert (Devlin et al. 2018) as the textual encoder following the setup in (Gao et al. 2023). • GCN-based and Tech-based (Gao et al. 2023): Both these methods use a knowledge graph (Yang et al. 2023) linking each domain for transfer. The graph is constructed using a statistical method proposed in RCD (Gao et al. 2021). 
Evaluation As cognitive states cannot be directly observed in practice, it is common to assess CDMs indirectly by predicting student performance on validation datasets. To evaluate prediction performance, we adopt ACC, AUC, and RMSE as metrics, covering the classification and regression perspectives, following previous works (Gao et al. 2021; Zhang et al. 2023).

[Figure 3: Performance (ACC and RMSE) under different peer student numbers p, with curves for Geo., Func., Prob., Phy., Arith., and Eng.]

Implementation Details We set the dimensions of the student and question vectors, i.e., the parameter $F$, to the total number of knowledge concepts in each domain. The dimensions of the neural network layers are 512 and 256 for all NeuralCD-based models. The number of early-bird students ($|U_{\mathcal{T}}^{(0)}|$) for each domain is set to 0.01 of the number of students in the domain, and the number of unseen peer students ($p$) for the overall student performance prediction results (RQ1) is 50. We set the mini-batch size to 256 and select the learning rate from {0.001, 0.002, 0.02, 0.05} for each model. Each experiment is repeated five times under consistent conditions and the best score is reported. Each model is implemented in PyTorch and optimized with the Adam optimizer (Kingma and Ba 2014). All experiments are run on a Linux server with two 3.00GHz Intel Xeon Gold 5317 CPUs and one Tesla A100 GPU. The code is publicly available at https://github.com/bigdata-ustc/Zero-1-to-3.

5.3 Experimental Results

Student Performance Prediction (RQ1) We compare our model with several baselines on student performance prediction under DZCD, switching the role of each dataset as the target domain. The overall performance is reported in Table 2. We observe that: (1) Across the different diagnostic implementations (i.e., IRT, MIRT and NeuralCD as diagnostic models), our Zero-1-to-3 framework outperforms almost all baselines on each target domain, which indicates the diagnostic effectiveness of our solution under DZCD scenarios. (2) The most significant distinction between Zero-1-to-3 and the NLP-based methods lies in our decoupling of student states and transfer of only the domain-shared states, addressing the "what to transfer" issue. Our method outperforms the NLP-based methods in most cases, underscoring the significance of adeptly capturing the domain-shared cognitive signals among students. (3) Both the GCN-based and Tech-based models employ a knowledge graph linking the source and target domains for domain-adaption via joint training. However, they cannot fully utilize domain-specific student logs. In contrast, Zero-1-to-3 outperforms these methods, which confirms its effectiveness.

[Figure 4: T-SNE visualization of student states (domain-specific vs. domain-shared).]

[Figure 5: Performance in normal diagnosis scenarios (AUC and RMSE of Zero-NeuralCD vs. Zero-NeuralCD-Normal on the six datasets).]

In the following parts, we primarily present the experimental results of Zero-NeuralCD as representative, since the other diagnosis models can be abstracted as special cases of NeuralCD (Wang et al. 2020).

Detailed Analysis (RQ2) This section provides an in-depth analysis of how the key components of Zero-1-to-3 contribute to solving the challenges of DZCD.
Exploration of Peer Student Number. The number $p$ of unseen peer students sampled for each early-bird student plays a crucial role in the transfer. To study the impact of different values, we train Zero-NeuralCD under several settings and perform zero-shot student performance prediction on each dataset, sampling the number of peer students from {10, 30, 50, 300, 500}. Figure 3 presents the results under the different settings. From the figure, we can see that the diagnostic effect does not consistently improve as the number of peer students grows during the simulation process, indicating that indiscriminately generated simulation data introduce noise. This motivates us to explore more suitable methods for matching peer students in future research.

Visualization. This part visualizes student cognitive states to observe the decoupling effects during pre-training. Under the setting of "English as Target", for the pre-trained Zero-NeuralCD (without fine-tuning), we randomly select 200 students and visualize their domain-specific/-shared states in each source domain using T-SNE (Van der Maaten and Hinton 2008) in Figure 4. We observe that domain-specific student states are more distant from each other than domain-shared student states, as they encompass more unique insights. Furthermore, domain-shared student states are not entirely blended; we infer that this is because they still retain some personalized cognitive features that contribute to diagnostic performance. These findings substantiate the efficacy of the decoupling regularizers.

5.4 Normal Diagnosis (RQ3)

The above results show that Zero-1-to-3 can be successfully applied in DZCD scenarios. A subsequent question is whether it can effectively perform cognitive diagnosis after the cold-start stage. Thus, we conduct experiments by fine-tuning a Zero-NeuralCD (named Zero-NeuralCD-Normal) under the oracle setting, where 70% of the data in the target domain can be used for training. The results under "Geometry as Target" in Figure 5 demonstrate a significant improvement of the model fine-tuned in the normal scenario over the model in the cold-start environment. This observation reveals that our model not only performs well in DZCD scenarios but also achieves satisfactory results in the subsequent stage, compared to the Oracle models.

5.5 Question Recommendation (RQ4)

The above experiments have shown that Zero-1-to-3 can complete the DZCD task effectively. This part demonstrates one of the most popular diagnostic applications, question recommendation, which is in demand in industrial practice. We implement a simple yet effective recommendation strategy as an example to recommend $x$ questions for a student under DZCD. Generally, a proper recommendation should be neither too hard nor too easy, so as to maintain students' enthusiasm when practicing (Huang et al. 2019). Thus, with a refined CDM, we first predict each student's performance on the questions in the new domain via $\mathcal{M}_{CD}(\cdot)$ in Eq. (5). All questions are divided into two sets according to whether the predicted answer is correct or not (i.e., positive or negative samples). Then, we sample $x/2$ questions from each set to yield the recommendation list (sketched below).
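A minimal sketch of this sampler, assuming the refined model exposes a per-question probability of a correct answer and using 0.5 as the correct/incorrect threshold (the threshold value is our assumption):

```python
import random
import torch

def recommend(mcd, u_state, V, C, x=6, thresh=0.5):
    """Return x questions: x/2 the student will likely solve, x/2 they won't."""
    with torch.no_grad():
        # P(correct) for every question in the new domain's bank
        probs = mcd(u_state.unsqueeze(0).expand(V.size(0), -1), V, C).tolist()
    pos = [q for q, s in enumerate(probs) if s >= thresh]  # predicted correct
    neg = [q for q, s in enumerate(probs) if s < thresh]   # predicted incorrect
    k = x // 2
    return random.sample(pos, min(k, len(pos))) + random.sample(neg, min(k, len(neg)))
```

Mixing the two pools is what keeps the list balanced: the "negative" half challenges the student, while the "positive" half serves as the engagement-boosting "gifts" described next.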
Table 3 lists 6 recommended questions on target domain Geometry for a randomly selected student using a refined Zero-NeuralCD, the diagnosed student mastery levels and question difficulties of the associated concepts, and the student’s true performance on the questions as recorded in the Geometry dataset. We can see the recommended questions are tailored to the student’s proficiency, neither too easy nor Geometry as Target 1⃝ 2⃝ 3⃝ 4⃝ 5⃝ 6⃝ Question id 3,213 200 1,032 2,122 3,013 32 Mastery (%) 40.07 32.00 48.63 26.30 54.33 41.02 Difficulty (%) 41.23 40.04 44.34 30.10 52.01 39.99 True performance × × ✓ × ✓ ✓ Table 3: Question recommendation via Zero-NeuralCD. too difficult. Some of them will challenge the student, while others will serve as “gifts” that can help increase his/her engagement. It confirms the application potential of Zero-1-to3 in cold-start scenarios. 6 Conclusion In this paper, we have proposed a general Zero-1-to-3 framework to tackle the real-world challenge of domain-level zero-shot cognitive diagnosis (DZCD). The central obstacle in DZCD involves extracting shared information to facilitate cross-domain transfer, while simultaneously adapting to new domains. To address this, we first pre-train a diagnosis model with dual regularizers that disentangle student states into domain-shared and domain-specific parts. These shared cognitive signals can be transferred to the target domain, enriching cognitive priors for the new domain. Subsequently, we develop a strategy to generate simulated practice logs for cold-start students in the target domain by using the behavioral patterns of early-bird students. Consequently, the cognitive states of cold-start students can be adaptively refined using virtual data from the target domain, enabling the execution of DZCD. Finally, extensive experiments highlight the efficacy and potential applicability of our framework. Acknowledgments This research was partially supported by grants from the National Natural Science Foundation of China (Grant No. 62337001 and No. 62202443), the Anhui Provincial Natural Science Foundation (No. 2308085MG226), and the Fundamental Research Funds for the Central Universities. References Bi, H.; Chen, E.; He, W.; Wu, H.; Zhao, W.; Wang, S.; and Wu, J. 2023. BETA-CD: A Bayesian Meta-Learned Cognitive Diagnosis Framework for Personalized Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 5018–5026. Chen, P.; Lu, Y.; Zheng, V. W.; and Pian, Y. 2018. Prerequisite-driven deep knowledge tracing. In 2018 IEEE International Conference on Data Mining (ICDM), 39–48. IEEE. Chen, X.; Liu, T.; Zhao, H.; Zhou, G.; and Zhang, Y.-Q. 2022. Cerberus transformer: Joint semantic, affordance and attribute parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19649– 19658. Chen, X.; Wu, L.; Liu, F.; Chen, L.; Zhang, K.; Hong, R.; and Wang, M. 2023. Disentangling Cognitive Diagnosis The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8424 with Limited Exercise Labels. In Thirty-seventh Conference on Neural Information Processing Systems. De La Torre, J. 2009. DINA model and parameter estimation: A didactic. Journal of educational and behavioral statistics, 34(1): 115–130. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Embretson, S. E.; and Reise, S. P. 2013. Item response theory. Psychology Press. 
Gao, W.; Liu, Q.; Huang, Z.; Yin, Y.; Bi, H.; Wang, M.C.; Ma, J.; Wang, S.; and Su, Y. 2021. Rcd: Relation map driven cognitive diagnosis for intelligent education systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 501–510. Gao, W.; Wang, H.; Liu, Q.; Wang, F.; Lin, X.; Yue, L.; Zhang, Z.; Lv, R.; and Wang, S. 2023. Leveraging Transferable Knowledge Concept Graph Embedding for Cold-Start Cognitive Diagnosis. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 983–992. Hu, G.; Zhang, Y.; and Yang, Q. 2018. Conet: Collaborative cross networks for cross-domain recommendation. In Proceedings of the 27th ACM international conference on information and knowledge management, 667–676. Huang, Z.; Liu, Q.; Zhai, C.; Yin, Y.; Chen, E.; Gao, W.; and Hu, G. 2019. Exploring multi-objective exercise recommendations in online education systems. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1261–1270. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Liu, F.; Hu, X.; Liu, S.; Bu, C.; and Wu, L. 2023a. Meta Multi-Agent Exercise Recommendation: A Game Application Perspective. Liu, Q.; Huang, Z.; Yin, Y.; Chen, E.; Xiong, H.; Su, Y.; and Hu, G. 2019. Ekt: Exercise-aware knowledge tracing for student performance prediction. IEEE Transactions on Knowledge and Data Engineering, 33(1): 100–115. Liu, S.; Yu, X.; Ma, H.; Wang, Z.; Qin, C.; and Zhang, X. 2023b. Homogeneous Cohort-Aware Group Cognitive Diagnosis: A Multi-grained Modeling Perspective. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 4094–4098. Long, T.; Qin, J.; Shen, J.; Zhang, W.; Xia, W.; Tang, R.; He, X.; and Yu, Y. 2022. Improving knowledge tracing with collaborative information. In Proceedings of the fifteenth ACM international conference on web search and data mining, 599–607. Nguyen, T. 2015. The effectiveness of online learning: Beyond no significant difference and future horizons. MERLOT Journal of online learning and teaching, 11(2): 309– 319. Reckase, M. D. 2009. Multidimensional item response theory models. In Multidimensional item response theory, 79– 112. Springer. Rendle, S.; Freudenthaler, C.; Gantner, Z.; and SchmidtThieme, L. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 452–461. Schmucker, R.; and Mitchell, T. M. 2022. Transferable Student Performance Modeling for Intelligent Tutoring Systems. arXiv preprint arXiv:2202.03980. Tong, S.; Liu, J.; Hong, Y.; Huang, Z.; Wu, L.; Liu, Q.; Huang, W.; Chen, E.; and Zhang, D. 2022. Incremental Cognitive Diagnosis for Intelligent Education. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1760–1770. Tsutsumi, E.; Kinoshita, R.; and Ueno, M. 2021. Deep-IRT with Independent Student and Item Networks. International Educational Data Mining Society. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11). Wang, C.; Zhu, Y.; Sun, A.; Wang, Z.; and Wang, K. 2023. A Preference Learning Decoupling Framework for User ColdStart Recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1168–1177. 
Your Career Path Matters in Person-Job Fit
Zhuocheng Gong1, Yang Song2, Tao Zhang2, Ji-Rong Wen3, Dongyan Zhao1,4,5†, Rui Yan3†
1 Wangxuan Institute of Computer Technology, Peking University
2 BOSS Zhipin
3 Gaoling School of Artificial Intelligence, Renmin University of China
4 National Key Laboratory of General Artificial Intelligence
5 Beijing Institute for General Artificial Intelligence
{gzhch,zhaody}@pku.edu.cn, {ruiyan,jrwen}@ruc.edu.cn, {songyang,kylen.zhang}@kanzhun.com
†Corresponding authors: Dongyan Zhao ([email protected]) and Rui Yan ([email protected]).

Abstract
We are again confronted with one of the most vexing aspects of the advancement of technology: automation and AI technology cause the devaluation of human labor, resulting in unemployment. With this background, automatic person-job fit systems are promising solutions to promote the employment rate. The purpose of person-job fit is to calculate a matching score between the job seeker's resume and the job posting, determining whether the job seeker is suitable for the position. In this paper, we propose a new approach to person-job fit that characterizes the hidden preference derived from the job seeker's career path. We categorize and utilize three types of preferences in the career path: consistency, likeness, and continuity. We show that understanding the career path enables us to provide more appropriate career suggestions to job seekers. To demonstrate the practical value of our proposed model, we conduct extensive experiments on real-world data extracted from an online recruitment platform and then present detailed cases to show how the career path matters in person-job fit.

Introduction
The job market is experiencing a rapid transformation in hiring practices due to the emergence of online recruitment services. The ongoing AI revolution has further accelerated the urgent need for an effective online recruitment system. Recent breakthroughs in NLP (e.g., ChatGPT*) and CV (e.g., diffusion models) have given us a glimpse into a future where human labor may no longer be necessary for many industries, potentially leading to a decline in employment. As we stand on the brink of another technological revolution that will reshape human society, we believe that employment is now more crucial than ever as a social issue. Therefore, the task of person-job fit, which seeks to automatically match job seekers with suitable employers, has garnered significant attention.
*https://openai.com/blog/chatgpt/

Numerous researchers have delved into the concept of person-job fit using vast amounts of online recruitment data. A common approach among these studies is to frame the task as a supervised text-matching problem, where resumes and job postings are treated as text pieces and various text mining and deep learning techniques are applied to the task. Based on this approach, different perspectives have been explored to improve performance, including job-oriented ability modeling (Qin et al. 2018), user preference mining (Yan et al. 2019), psychological motivation modeling (Le et al. 2019), and two-way relationship modeling (Yang et al. 2022). These methods have demonstrated significant success and achieved promising milestones in real-world applications with commercial value.

Figure 1: An illustrative example for person-job fit in our scenario. The upper part indicates a candidate's resume and the lower part denotes a job posting.

Although significant progress has been made in automatic online recruitment, there is an important aspect that has been overlooked: career path information. Specifically, a resume typically includes a brief self statement section and a career path section that outlines the job seeker's work experiences from their first employment to their current job situation. We believe that the career path is a critical factor in job seeking, and analyzing the career path of each job seeker can reveal hidden preferences that can enhance person-job fit. We have identified three types of preferences inferred from the career path: consistency, likeness, and continuity. Consistency measures whether the job seeker's work experiences are consistent with each other and with their overall career path. For instance, in Figure 1, the IT practitioner's work experiences are all IT-related and therefore consistent with each other. Likeness measures how closely a job posting aligns with a candidate's work experiences, allowing us to identify their job preferences. By analyzing work experiences, we can determine a candidate's job preferences and understand whether they are interested in a particular type of job or not. Continuity involves the evolution of work experiences over time, including the accumulation of job skills and new responsibilities. By tracking the evolving nature of work experiences along the career path, we can filter out positions for which a candidate is overqualified, even though they would otherwise appear to be a good fit. For example, in Figure 1, a job seeker who started as a software development engineer (SDE) and then became a senior software development engineer (Senior SDE) may be a good fit for a software architect position.

Our aim is to enhance the person-job matching task by incorporating the previous work experiences of job seekers within their resumes. We recognize the benefits of characterizing talent career paths and propose to use them to better understand job seekers' preferences, working levels, skill status, and future career development. The career path is considered a sequence of work experiences in chronological order. To utilize this information, we propose two self-supervised auxiliary objectives and a contrastive objective to model the consistency, continuity, and likeness of the career path, respectively. We then integrate the captured preferences into our proposed person-job matching framework, which we call the Work Experience enhanced Person-Job Matching model (WEPJM).
To sum up, our contributions are as follows:
• We investigate the resume information at a fine granularity and capture and utilize different aspects of career path preference. We incorporate techniques of self-supervised learning and contrastive learning to model the career path.
• We conduct experiments on real-world data. Experimental validation confirms that our framework matches job seekers with jobs that better align with their preferences and experience, indicating that AI technology helps to address employment issues.

Related Work
There is an increasing number of studies on recruitment-oriented talent science covering many topics, including person-job matching (Zhu et al. 2018; Qin et al. 2018; Bian et al. 2019; Le et al. 2019; Yan et al. 2019; Luo et al. 2019; Bian et al. 2020; Jiang et al. 2020), job mobility prediction (Meng et al. 2019; Zhang et al. 2019; Xu et al. 2015; Li et al. 2017), person-organization fit (Sun et al. 2019), job skill mining (Qin et al. 2019; Wu et al. 2019; Xu et al. 2018), and organization analysis (Lin et al. 2017). Our study is closely related to the topic of person-job fit. The pioneering work of Malinowski et al. (2006) learns the representations of jobs and talents with latent-factor approaches. Along this line of research, (Zhu et al. 2018) proposes to encode the job and the resume with two convolutional neural networks (CNNs) and calculate matching scores by cosine similarity. (Qin et al. 2018) leverages hierarchical recurrent neural networks (RNNs) to encode the documents and incorporates an attention mechanism to model job abilities and skills. More recently, (Yan et al. 2019) incorporates interview history from both the job seeker and the recruiter, and (Luo et al. 2019) introduces an adversarial training method for representation learning of job postings, which constrains the representation to be indistinguishable from a prior Laplace distribution. (Bian et al. 2020) proposes a co-teaching network to handle both text-based matching information and relation-based information in a single framework. Recently, (Yang et al. 2022) proposes to model two-way selection preference for person-job fit, and (Wang et al. 2022) uses co-attention and GNNs to model related recruitment history, achieving promising performance. In this work, we study the effect of modeling career path preference. Recently, contrastive learning has attracted much attention for its strong performance on sentence representation learning (Fang et al. 2020; Giorgi et al. 2021). In this work, we propose a contrastive learning strategy to align the representations of job postings and their related work experiences.

Notations and Task Formulation
In this paper, we aim to tackle the person-job matching task, which measures whether a particular job position suits the background of a candidate job seeker. To formulate the task, we use j = {j_1, j_2, ..., j_{|j|}} to denote a job description, which contains |j| sentences, where each sentence j_i describes the job requirements and/or responsibilities of the position. Similarly, we use r to represent the resume of a candidate. As mentioned in the Introduction, we decompose the resume r into two parts, i.e., r = s ∪ c, where s stands for the statement part and c indicates the career path of work experiences. We denote s = {s_1, s_2, ..., s_{|s|}}. A career path c consists of a sequence of work experiences {w_1, w_2, ..., w_{|c|}}, where w_i indicates a particular work experience consisting of sentences w_i = {w_{i1}, w_{i2}, ..., w_{i|w_i|}}.
Unlike the job part and the statement part, the career path is organized in a hierarchical structure from the sentence level to the career level. The task of person-job fit is formally defined as a classification problem that predicts the matching degree given the resume and the job description. For each pair (j, r), we have the corresponding recruitment label y ∈ {0, 1}, which indicates whether the selected candidate and the job position form a good matching pair (i.e., y = 1) or not (i.e., y = 0). Our objective is to learn a matching function M(r, j) that maximizes the probability of predicting the right matching decision given the candidate's resume and the job description. Formally,

\hat{M} = \arg\min_{M} \mathrm{loss}(M(r, j), y)    (1)

Proposed Method
In this section, we present the details of our proposed WEPJM, as shown in Figure 2. We first introduce the overall architecture design and then elaborate on the auxiliary objectives introduced to model the career path preference.

Figure 2: Model overview. WEPJM learns representations for input resumes and job postings with hierarchical encoders. The overall objective can be decomposed into the main task of person-job fit and three auxiliary tasks: (b) Career Path Identification, (c) Career Path Reconstruction, and (d) Job-Career Contrastive Learning.

Work Experience Enhanced Person-Job Matching Framework
A person-job fit system deals with inputs from two sides, namely the resume of the job seeker and the job posting. As described before, we decompose the resume into the statement part and the career path part, processing them with separate text encoders. The text encoder is a Bi-LSTM on top of the pre-trained BERT sentence encoder, where BERT captures sentence-level semantics while the LSTM captures document-level semantics. After obtaining the statement representation s and a series of work representations {w_i}_{i=1}^{|c|}, we apply the attention mechanism to inject the statement semantics into the representations of the work experiences. We then take the weighted sum of the work experience representations as the career representation, denoted as c. Formally,

e_i = \tanh(W_1 s + W_2 w_i), \quad \alpha_i = \frac{\exp(v^\top e_i)}{\sum_{j=1}^{|c|} \exp(v^\top e_j)}, \quad c = \sum_{i=1}^{|c|} \alpha_i w_i    (2)

The encoding process for the job posting side is the same. After extracting the statement representation s, the career representation c, and the job representation j, we concatenate them together and feed them to a multi-layer perceptron to calculate the matching score. The main objective for person-job fit is defined as:

L_{main} = -y \log(M(r, j)) - (1 - y) \log(1 - M(r, j))    (3)
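For illustration, the statement-guided attention pooling of Eq. (2) can be written as follows. This is a minimal PyTorch sketch under our own assumptions (batched inputs, separate linear projections); it is not the released WEPJM code.

```python
import torch
import torch.nn as nn

class CareerPooling(nn.Module):
    """Statement-guided attention pooling over work experiences (Eq. 2)."""
    def __init__(self, dim):
        super().__init__()
        self.W1 = nn.Linear(dim, dim, bias=False)  # projects the statement s
        self.W2 = nn.Linear(dim, dim, bias=False)  # projects each experience w_i
        self.v = nn.Linear(dim, 1, bias=False)     # scoring vector v

    def forward(self, s, w):
        # s: (batch, dim) statement embedding; w: (batch, n_exp, dim) experiences
        e = torch.tanh(self.W1(s).unsqueeze(1) + self.W2(w))  # (batch, n_exp, dim)
        alpha = torch.softmax(self.v(e).squeeze(-1), dim=-1)  # attention weights
        c = (alpha.unsqueeze(-1) * w).sum(dim=1)              # career representation
        return c, alpha

# Example: pool 4 work experiences with hidden size 200 (the paper's setting).
pool = CareerPooling(200)
c, alpha = pool(torch.randn(2, 200), torch.randn(2, 4, 200))
```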
Mining Career Path Preference
As described above, we categorize the preferences inferred from the career path into three types, namely consistency, likeness, and continuity. In this section, we elaborate on how we model such preferences. For the consistency of the career path, we introduce the task of career path identification; for the continuity of the career path, we propose the task of career path reconstruction; and for the likeness aspect, we propose a job-career contrastive learning objective.

Career Path Identification. In the auxiliary task of Career Path Identification (CPI), we replace some of the work experiences in the career path with random work experiences sampled from a pool of work experiences. We train the model to identify whether each experience is original or replaced. Concretely, we assign each work experience w_i a label y^{CPI}_i ∈ {0, 1}, where y^{CPI}_i = 1 indicates that the i-th work experience remains unchanged and y^{CPI}_i = 0 indicates that it has been randomly replaced. We design the module for career path identification with joint supervision from the statement part of the resume. This auxiliary task is built upon an MLP classifier that outputs a score indicating whether the work experience belongs to the same job seeker without any changes. The module is formulated as M_{CPI}(s, c, w_i), where w_i is the work experience to be examined. For each job seeker, we replace at most one work experience, ensuring the replacement has minimal impact on the main task. The CPI task characterizes the latent preference of the job seeker, indicating whether a particular work experience belongs to the talent or not. The task is optimized with the cross-entropy loss:

L_{CPI} = -y^{CPI}_i \log(M_{CPI}(s, c, w_i)) - (1 - y^{CPI}_i) \log(1 - M_{CPI}(s, c, w_i))    (4)

Career Path Reconstruction. As mentioned, the career path is not just a collection of work experiences but a sequence evolving over time, so we model the "path" information with this auxiliary task. The development of a career path is (to some extent) order-preserving: having completed junior-level positions, the job seeker becomes eligible for senior-level choices. To characterize this intuition, we shuffle the order of the job seeker's work experiences and regard the disordered career path as a negative training instance, in contrast to the original one as a positive instance. Again, we apply an MLP classifier, M_{CPR}(s, c), to distinguish positive from negative samples. For this task, half of the training samples are shuffled; those with the wrong order are masked from the main task to avoid negative transfer. The CPR task is also optimized with the cross-entropy loss:

L_{CPR} = -y^{CPR} \log(M_{CPR}(s, c)) - (1 - y^{CPR}) \log(1 - M_{CPR}(s, c))    (5)

Job-Career Contrastive Learning. There are similarities between work experiences and job descriptions, as both describe work-related content. To enhance the relationship modeling between work experiences and the job descriptions of job postings, we propose a job-career contrastive learning strategy to close the representation gap between work experiences w and job postings j. Specifically, for a given job posting, we treat the last work experience from the positive person-job pair as the positive example; work experiences from other job seekers and other job postings in the same batch serve as negative examples. The model is therefore optimized by minimizing the objective:

L_{CL} = -\log \frac{e^{\cos(w^k_{|c_k|}, j_k)/\tau}}{\sum_{i \neq k} \left( e^{\cos(w^i_{|c_i|}, j_i)/\tau} + e^{\cos(j_i, j_k)/\tau} \right)}    (6)

where τ is the temperature hyperparameter and w^k_{|c_k|} denotes the last work experience of the k-th job seeker.
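The following is a minimal PyTorch sketch of this objective with in-batch negatives. It follows the description in the text (last experiences and postings of other pairs act as negatives) and uses a standard InfoNCE-style normalization; the exact denominator of Eq. (6) may differ slightly, so treat this as an illustration rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def job_career_contrastive(w_last, j, tau=1.0):
    # w_last, j: (batch, dim) embeddings of last work experiences / job postings
    w_last, j = F.normalize(w_last, dim=-1), F.normalize(j, dim=-1)
    b = j.size(0)
    sim_wj = (w_last @ j.t()) / tau  # cosine similarities: experiences vs. postings
    sim_jj = (j @ j.t()) / tau       # cosine similarities among postings
    losses = []
    for k in range(b):
        mask = torch.ones(b, dtype=torch.bool)
        mask[k] = False                                          # exclude the anchor pair
        pos = sim_wj[k, k].exp()                                 # matched pair (w_k, j_k)
        neg = sim_wj[mask, k].exp().sum() + sim_jj[mask, k].exp().sum()
        losses.append(-torch.log(pos / (pos + neg)))
    return torch.stack(losses).mean()

loss = job_career_contrastive(torch.randn(8, 200), torch.randn(8, 200))
```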
Training WEPJM
We train WEPJM in a multi-task learning manner. The general objective function can be formulated as follows:

L_{overall} = L_{main} + L_{auxiliary}, \quad L_{auxiliary} = \lambda_{CPI} L_{CPI} + \lambda_{CPR} L_{CPR} + \lambda_{CL} L_{CL}    (7)

where λ_{CPI}, λ_{CPR}, and λ_{CL} are hyperparameters that adjust the weights of the corresponding auxiliary objectives.
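In code, the weighted combination of Eq. (7) amounts to a one-liner; the sketch below is illustrative and assumes the four loss terms are computed by the modules described above.

```python
# Minimal sketch of the multi-task objective in Eq. (7).
def overall_loss(l_main, l_cpi, l_cpr, l_cl, lam_cpi=1.0, lam_cpr=1.0, lam_cl=1.0):
    # The lambda weights are searched in {0.1, 1} in the experiments below.
    return l_main + lam_cpi * l_cpi + lam_cpr * l_cpr + lam_cl * l_cl
```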
Experiments
In this section, we conduct extensive experiments on a real-world dataset to evaluate our proposed model. We first describe the dataset, experimental setup, and baseline methods. Then we compare our proposed model with state-of-the-art neural network based person-job matching baselines in terms of accuracy, precision, recall, F1, and AUC. Moreover, we explore the impact of career path modeling and of the auxiliary tasks through ablation studies.

Experiment Setup
Dataset. We build a dataset by collecting data from a real-world online recruiting platform.† We anonymize all identity information to protect the privacy of job seekers and recruiters. We collect both positive and negative person-job pairs annotated from the system log: if a candidate chats with a job recruiter on the platform, the person-job pair is labeled as positive; if the candidate views the job profile without taking further action, it is labeled as negative. We summarize the statistics of the dataset in Table 1.
†anonymous website

Statistics                                    Values
# of job postings                             82,362
# of resumes                                  33,285
# of work experiences in resumes              117,780
avg # of work experiences per resume          3.57
avg # of sentences per job posting/resume     11.83/19.75
avg # of words per job posting/resume         114.68/258.50
# of positive person-job pairs                119,031
# of negative person-job pairs                359,721

Table 1: The statistics of the dataset.

Comparison Methods. We compare our model with both classic classification methods and recent neural approaches. We include Logistic Regression (LR) (Galton 1886), Decision Tree (DT) (Quinlan 1987), Naive Bayes (NB) (Rish et al. 2001), Random Forest (RF) (Pal 2005), and Gradient Boosting Decision Tree (GBDT) (Friedman 2001) as traditional classification methods. For these methods, we use BERT as the feature extractor that outputs semantic representations; the extracted representations are concatenated and fed to the classical baselines. We include the following neural baselines:
• PJFNN (Zhu et al. 2018) leverages CNNs to extract text representations and then calculates cosine similarity to model resume-job relations.
• APJFNN (Qin et al. 2018) extracts ability-aware representations for resumes by incorporating an attention mechanism that aligns key information in job postings to the resume documents.
• JRMPM (Yan et al. 2019) considers the preference of both the job seeker and the recruiter by leveraging two memory modules to "remember" records in the application history.
• ResumeGAN (Luo et al. 2019) integrates different types of information and introduces adversarial learning to learn more expressive representations.
• PJFCANN (Wang et al. 2022) uses co-attention and GNN to model the related recruitment history.
• BERT (Devlin et al. 2018). For this approach, we use BERT-base as the backbone model and fine-tune the pre-trained model.
• BERT+TAPT (Gururangan et al. 2020). This approach performs task-adaptive pre-training (TAPT) on the task dataset to adapt pre-trained BERT to the person-job matching task.

Implementation Details. We use BERT-base to learn sentence-level representations.‡ After encoding with BERT, the dimension of hidden states is set to 200. The batch size is set to 16. λ_{CPI}, λ_{CPR}, and λ_{CL} are searched in {0.1, 1}. The temperature hyperparameter is set to 1. The model is trained with the Adam optimizer (Kingma and Ba 2014) with the learning rate initialized to 5e-4. Both the validation set and the test set contain 3,840 pairs each, with 1,920 positive and 1,920 negative samples. Training is stopped early if the evaluation results do not improve for 3 successive epochs.
‡https://github.com/huggingface/transformers

Experiment Results
Overall Performance. We discuss the overall performance of WEPJM and the comparison methods. The evaluation results are reported in Table 2.

Methods      Acc.    Prec.   Recall  F1      AUC
LR           0.525   0.580   0.523   0.550   0.545
DT           0.624   0.602   0.629   0.615   0.627
NB           0.522   0.472   0.524   0.497   0.530
RF           0.648   0.722   0.629   0.673   0.696
GBDT         0.557   0.612   0.552   0.580   0.594
PJFNN        0.743   0.780   0.693   0.737   0.836
APJFNN       0.760   0.790   0.722   0.750   0.847
JRMPM        0.780   0.799   0.743   0.773   0.870
ResumeGAN    0.774   0.794   0.743   0.767   0.863
PJFCANN      0.813   0.824   0.807   0.816   0.894
BERT         0.768   0.794   0.728   0.760   0.855
BERT+TAPT    0.804   0.819   0.780   0.798   0.887
WEPJM        0.850⋆  0.844⋆  0.829⋆  0.837⋆  0.929⋆

Table 2: Overall performance of all methods. '⋆' indicates that we accept the improvement hypothesis of our model over the best baseline at a significance level of 0.01.

We observe that neural network based methods generally outperform traditional classification algorithms by a large margin, showing the advantage of neural networks in capturing deep semantic information from text, which is consistent with previous studies (Yan et al. 2019; Qin et al. 2018; Bian et al. 2020). By comparison, we can see that APJFNN outperforms PJFNN, indicating that the quality of the learned representations strongly affects the final matching performance. Besides, JRMPM incorporates historical information to enhance the learning of representations for both resumes and jobs, resulting in better performance. Different from the previously mentioned methods, ResumeGAN integrates different types of information to learn more expressive representations. Its performance is comparable with JRMPM and higher than the remaining non-pre-trained baselines by a large margin, which demonstrates that explicitly modeling different information flows, instead of merging everything at a coarse grain, can improve performance. PJFCANN performs best among the baselines. Without sophisticated architecture design, directly fine-tuning a pre-trained BERT achieves decent performance compared with the other baselines; after adding task-adaptive pre-training, the model better captures domain knowledge, resulting in a further boost. The results of our proposed method show overall advantages over all the baselines on all metrics, and the improvements pass the significance test (t-test with p-value < 0.01). This verifies our hypothesis that explicitly modeling the career path improves person-job matching performance.
Next, we proceed to investigate how the different model components contribute to the overall performance by conducting ablation studies.

         Acc.    Prec.   Recall  F1      AUC
All      0.850   0.844   0.829   0.837   0.929
No       0.813   0.818   0.794   0.803   0.889
+CPI     0.827   0.832   0.823   0.828   0.917
+CPR     0.819   0.821   0.815   0.818   0.898
+CL      0.821   0.825   0.821   0.824   0.904
-CPI     0.831   0.833   0.814   0.823   0.913
-CPR     0.842   0.833   0.822   0.827   0.917
-CL      0.834   0.833   0.816   0.825   0.912

Table 3: Ablation studies of auxiliary tasks. We run experiments to test different combinations of the 3 auxiliary tasks, i.e., CPI, CPR, and CL. '+' indicates using only that single auxiliary task, while '-' denotes exempting that auxiliary task from all three. We enumerate all possible combinations of the auxiliary tasks.

Do the three auxiliary tasks really work? As we include three auxiliary tasks to model the career path of job seekers, we now study the contribution of each. We consider all possible combinations of the three auxiliary tasks, where each task can either be included or exempted, and analyze the effects of the resulting model variants on person-job matching performance. The results are reported in Table 3. Firstly, removing all three tasks (denoted as 'No Auxiliary') causes a significant decrease in performance: all the other variants perform better than 'No Auxiliary', indicating that every auxiliary task benefits the person-job matching task. We then consider cases with only one auxiliary task, i.e., '+CPI', '+CPR', and '+CL'. Comparing the results, we can roughly see the different contributions of the auxiliary tasks. Specifically, career path identification (CPI) performs best of the three, leading to a 2.5% improvement, followed by job-career contrastive learning (CL). The model learns the talent's job preference from CPI by identifying whether a work experience belongs to a particular talent, and learns to align the representations of work experiences and jobs with the contrastive objective. Similar observations arise from the 'leave-one-out' examinations, i.e., '-CPI', '-CPR', and '-CL', which concur with our conclusion that all auxiliary tasks help.

Figure 4: Performance of fresh job seekers (job seekers with less than 3 work experiences).

What if the job seeker has no/few work experiences? A basic fact about the employment market is that every job seeker has a first job. Therefore, we are curious about how well WEPJM performs for fresh job seekers with few or zero previous work experiences. To test this, we single out the talents with fewer than 3 work experiences and evaluate the model's performance on them. As shown in Figure 4, the model's performance degrades with fewer work experiences, as it becomes more challenging to infer career path preferences accurately. However, there is still a significant performance gain over the other text-matching baselines. We attribute this advantage to the sophisticated modeling of the career path, from which the model gains a "virtual" career path preference by aligning the resume of the fresh job seeker with those of experienced ones.

Figure 3: Two cases to show the advantage of our model over other methods by capturing the evolving career development over time. The blue parts are the job seekers' resumes while the red parts are the job postings selected by different methods. Due to space limits, we remove unrelated content and simplify some detailed content in both resumes and job postings.

Case Study
Finally, we use a case study to demonstrate how career path information matters in person-job fit. The first case is a software developer whose career path clearly reveals a progressive development.
The job seeker grows from a junior programmer to an experienced developer, accumulating practical development experience and improving job skills along the way. Looking into his career path, we can see that his third and fourth work experiences already involve responsibilities more complex than programming alone. He is clearly overqualified for the job positions proposed by PJFNN and APJFNN, which require no more than software development experience. The second case in Figure 3 shows the resume of an art designer, from which we can see an evolving path across the previous works. The candidate starts from a junior position, only taking orders from editors. Afterward, this talent starts to design advertising images and logos and becomes responsible for poster designs. The latest work experience indicates that the candidate has become a group leader in charge of new IP initiation. Our proposed model identifies this career development and matches the candidate with a senior position that requires leadership and teamwork. For a talent like this, recommending job positions as an ordinary art designer may not be the best option. It is worth noting that the purpose of the case study is not to depreciate other person-job fit methods in order to elevate ours. Actually, the job positions presented in Figure 3 are all reasonable matches in terms of qualification. However, there are still differences among these "good" matches: whether they conform to the preference revealed in the career path. We believe that the ideal next job follows the direction of career advancement for talents who are self-motivated and seeking advancement and perfection in their career development.

Conclusions
In this paper, we propose a new perspective on person-job fit that emphasizes the preference of the career path. We introduce three auxiliary tasks, career path identification, career path reconstruction, and job-career contrastive learning, to investigate and extract career path information, and we propose an effective architecture, WEPJM, that successfully fuses the extracted career path preference into the objective of person-job fit. Through experiments and case studies, we demonstrate that WEPJM is a competitive method and that the career path really matters in person-job fit. Given that employment is under pressure from a new wave of automation and AI technology, more and more people will rely on online recruitment platforms for the foreseeable future. How to provide job seekers with better job-matching services is becoming an increasingly urgent problem. In this paper, we alleviate this problem by utilizing career path information.

Acknowledgments
This work is supported by National Key R&D Program of China (No. 2022YFC3301900) and National Natural Science Foundation of China (NSFC Grant No. 62122089). We sincerely thank all reviewers for their valuable comments and suggestions, which are crucial for improving our work.

References
Bian, S.; Chen, X.; Zhao, W. X.; Zhou, K.; Hou, Y.; Song, Y.; Zhang, T.; and Wen, J.-R. 2020. Learning to Match Jobs with Resumes from Sparse Interaction Data using Multi-View Co-Teaching Network. In CIKM, 65–74.
Bian, S.; Zhao, W. X.; Song, Y.; Zhang, T.; and Wen, J.-R. 2019. Domain Adaptation for Person-Job Fit with Transferable Deep Global Match Network.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4812–4822. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Fang, H.; Wang, S.; Zhou, M.; Ding, J.; and Xie, P. 2020. Cert: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766. Friedman, J. H. 2001. Greedy function approximation: a gradient boosting machine. Annals of statistics, 1189–1232. Galton, F. 1886. Regression towards mediocrity in hereditary stature. The Journal of the Anthropological Institute of Great Britain and Ireland, 15: 246–263. Giorgi, J.; Nitski, O.; Wang, B.; and Bader, G. 2021. DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations. In ACL-IJCNLP, 879–895. Gururangan, S.; Marasovi´c, A.; Swayamdipta, S.; Lo, K.; Beltagy, I.; Downey, D.; and Smith, N. A. 2020. Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks. In ACL, 8342–8360. Jiang, J.; Ye, S.; Wang, W.; Xu, J.; and Luo, X. 2020. Learning Effective Representations for Person-Job Fit by Feature Fusion. In CIKM, 2549–2556. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Le, R.; Hu, W.; Song, Y.; Zhang, T.; Zhao, D.; and Yan, R. 2019. Towards Effective and Interpretable Person-Job Fitting. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1883– 1892. Li, H.; Ge, Y.; Zhu, H.; Xiong, H.; and Zhao, H. 2017. Prospecting the career development of talents: A survival analysis perspective. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 917–925. Lin, H.; Zhu, H.; Zuo, Y.; Zhu, C.; Wu, J.; and Xiong, H. 2017. Collaborative company profiling: Insights from an employee’s perspective. In Thirty-First AAAI Conference on Artificial Intelligence. Luo, Y.; Zhang, H.; Wen, Y.; and Zhang, X. 2019. ResumeGAN: An Optimized Deep Representation Learning Framework for Talent-Job Fit via Adversarial Learning. In CIKM, 1101–1110. Malinowski, J.; Keim, T.; Wendt, O.; and Weitzel, T. 2006. Matching people and jobs: A bilateral recommendation approach. In HICSS’06, volume 6, 137c–137c. IEEE. Meng, Q.; Zhu, H.; Xiao, K.; Zhang, L.; and Xiong, H. 2019. A Hierarchical Career-Path-Aware Neural Network for Job Mobility Prediction. In SIGKDD, 14–24. Pal, M. 2005. Random forest classifier for remote sensing classification. International journal of remote sensing, 26(1): 217–222. Qin, C.; Zhu, H.; Xu, T.; Zhu, C.; Jiang, L.; Chen, E.; and Xiong, H. 2018. Enhancing person-job fit for talent recruitment: An ability-aware neural network approach. In SIGIR, 25–34. Qin, C.; Zhu, H.; Zhu, C.; Xu, T.; Zhuang, F.; Ma, C.; Zhang, J.; and Xiong, H. 2019. DuerQuiz: A Personalized Question Recommender System for Intelligent Job Interview. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2165–2173. Quinlan, J. R. 1987. Simplifying decision trees. International journal of man-machine studies, 27(3): 221–234. Rish, I.; et al. 2001. An empirical study of the naive Bayes classifier. In IJCAI 2001 workshop on empirical methods in artificial intelligence, volume 3, 41–46. Sun, Y.; Zhuang, F.; Zhu, H.; Song, X.; He, Q.; and Xiong, H. 2019. 
The impact of person-organization fit on talent management: A structure-aware convolutional neural network approach. In SIGKDD, 1625–1633.
Wang, Z.; Wei, W.; Xu, C.; Xu, J.; and Mao, X.-L. 2022. Person-job fit estimation from candidate profile and related recruitment history with co-attention neural networks. Neurocomputing, 501: 14–24.
Wu, X.; Xu, T.; Zhu, H.; Zhang, L.; Chen, E.; and Xiong, H. 2019. Trend-aware tensor factorization for job skill demand analysis. In IJCAI, 3891–3897. AAAI Press.
Xu, H.; Yu, Z.; Xiong, H.; Guo, B.; and Zhu, H. 2015. Learning career mobility and human activity patterns for job change analysis. In 2015 IEEE International Conference on Data Mining, 1057–1062. IEEE.
Xu, T.; Zhu, H.; Zhu, C.; Li, P.; and Xiong, H. 2018. Measuring the popularity of job skills in recruitment market: A multi-criteria approach. In Thirty-Second AAAI Conference on Artificial Intelligence.
Yan, R.; Le, R.; Song, Y.; Zhang, T.; Zhang, X.; and Zhao, D. 2019. Interview Choice Reveals Your Preference on the Market: To Improve Job-Resume Matching through Profiling Memories. In SIGKDD, 914–922.
Yang, C.; Hou, Y.; Song, Y.; Zhang, T.; Wen, J.-R.; and Zhao, W. X. 2022. Modeling Two-Way Selection Preference for Person-Job Fit. In Proceedings of the 16th ACM Conference on Recommender Systems, 102–112.
Zhang, L.; Zhu, H.; Xu, T.; Zhu, C.; Qin, C.; Xiong, H.; and Chen, E. 2019. Large-Scale Talent Flow Forecast with Dynamic Latent Factor Model? In The World Wide Web Conference, 2312–2322.
Zhu, C.; Zhu, H.; Xiong, H.; Ma, C.; Xie, F.; Ding, P.; and Li, P. 2018. Person-job fit: Adapting the right talent for the right job with joint representation learning. ACM Transactions on Management Information Systems (TMIS), 9(3): 1–17.
Efficient Representation Learning of Satellite Image Time Series and their Fusion for Spatiotemporal Applications
Poonam Goyal, Arshveer Kaur, Arvind Ram, Navneet Goyal
ADAPT Lab, Birla Institute of Technology and Science, Pilani
[poonam,p20170432,f20201210,goel]@pilani.bits-pilani.ac.in

Abstract
Satellite data, bolstered by their increasing accessibility, are leading to many endeavors of automated monitoring of the earth's surface for various applications. Such applications demand high spatial resolution images at a temporal resolution of a few days, which entails the challenge of processing a huge volume of image time series data. To overcome this computing bottleneck, we present PatchNet, a bespoke adaptation of beam search and the attention mechanism. PatchNet is an automated patch selection neural network that requires only a partial spatial traversal of an image time series and yet achieves impressive results. Satellite systems face a trade-off between spatial and temporal resolutions due to budget/technical constraints; e.g., Landsat-8/9 and Sentinel-2 have high spatial resolution, whereas MODIS has high temporal resolution. To deal with the limitation of coarse temporal resolution, we propose FuSITSNet, a twofold feature-based generic fusion model with multimodal learning in a contrastive setting. It produces a learned representation after fusion of two satellite image time series, leveraging the finer spatial resolution of Landsat and the finer temporal resolution of MODIS. The patch alignment module of FuSITSNet aligns the PatchNet-processed patches of Landsat-8 with the corresponding MODIS regions to incorporate its finer-resolution temporal features. The untraversed patches are handled by cross-modality attention, which highlights additional hot spot features from the two modalities. We conduct extensive experiments on more than 2000 counties of the US for crop yield, snow cover, and solar energy prediction and show that even one-fourth spatial processing of an image time series produces state-of-the-art results. FuSITSNet outperforms the predictions of single modalities and of data obtained using existing generative fusion models, and it allows for the monitoring of dynamic phenomena using freely accessible images, thereby unlocking new opportunities.

Introduction
Satellite technology is extensively used to monitor the Earth's surface for different applications. Popular satellite systems that make their data publicly available include Landsat-8/9 (NASA 2016), Sentinel-2 (European Space Agency Signature 2017), and MODIS (NASA 2015). The last decade has witnessed a significant improvement in sensor technology, leading to the availability of satellite images with higher spatial and temporal resolution. However, due to budgetary and technological constraints, it is not possible to capture satellite images with the required high spatial and temporal resolutions using a single satellite system. This necessitates the development of efficient fusion algorithms that combine satellite image time series (SITS) from two satellites. Many applications predicting crop yield, forest cover, forest fire (Gupta et al. 2023), etc. require SITS at high resolution along both the spatial and temporal dimensions. Publicly available data from satellites like Landsat-8/9, Sentinel-2, and MODIS
have high resolution along only one dimension; e.g., Landsat-8 has a spatial resolution of 30m with a 16-day revisit cycle, whereas MODIS has a spatial resolution of 250-500m and a revisit time of 8 days. For high spatial resolution SITS, the amount of data we need to process increases manifold, leading to a computing bottleneck. The amount of 7 years' data processed for the 2000 counties considered in this paper is approximately 2.1 TB for MODIS and 10.0 TB for Landsat-8 after applying the bits compression technique (Hubara et al. 2016). The huge amount of data processing required seriously impedes the democratization of the use of satellite images for various applications. In this paper, we address two major problems: 1) the impractical computational requirements for processing high spatial resolution SITS, and 2) the coarse temporal resolution of high spatial resolution SITS. For 1), we propose PatchNet, which learns prominent patterns in a SITS by performing a spatial patch-based partial traversal, e.g., (1/p)-th spatial processing of the SITS, using the idea of beam search and an attention mechanism for learnable patch selection. The learnable patch selection mechanism eliminates the need for full spatial processing of SITS, thereby reducing the amount of processing by a factor of p with some additional overhead, and still achieves SOTA results for the end tasks. Existing methods deal with the processing challenges by transforming the images into histograms (You et al. 2017; Sun et al. 2020; Kaur et al. 2022). A few researchers have also tried to transform images into single-value numeric vegetation indices (Sakamoto 2020; Skakun et al. 2021; Ji et al. 2022; Choudhary et al. 2019). Both these approaches suffer from information loss. For 2), we propose FuSITSNet, a twofold feature-based fusion model which can be used to fuse any two SITS; we apply it to Landsat-8 and MODIS SITS.
It can be applied to any two modalities having varied spatial, temporal, or spectral resolutions. The key contributions of the paper are as follows: • To the best of our knowledge, this is the first attempt to efficiently process time series of high spatial resolution satellite images. We propose PatchNet which only needs to partially process the image time series using the concept of patches. The patch selection mechanism recommends most informative patches and achieves SOTA results for the end tasks considered. • We also propose FuSITSNet, a twofold feature-based fusion model for fusing two image time series having different resolutions. We complementarily use a patch alignment module and cross-modality attention to learn high spatial resolution features of Landsat-8 and high temporal features of MODIS. • We conduct extensive experiments to validate our models, PatchNet and FuSITSNet, for three applications – Crop yield prediction (CYP), snow cover prediction (SCP), and Solar energy prediction (SEP). The results of PatchNet are compared with those of existing models which use histogram time series and a significant improvement is observed. The direct feature-based learning from two SITS using FuSITSNet outperforms the enhanced SITS obtained from existing generative fusion models and all the baselines on single modality. Related Work Satellite data: Satellite systems like PlanetScope (planet 2019), CartoSat-1 (ISRO 2019), MODIS (NASA 2015), Landsat-8/9 (NASA 2016), Sentinel-2 (European Space Agency Signature 2017), and others are orbiting around the earth and collecting data at varying spatial, temporal and spectral resolutions. AVHRR has a coarse spatial resolution of 1km while PlanetScope, CartoSat-1 have a high resolution of 2-3m but their data is not freely available. Popular satellite systems are MODIS, Landsat-8/9, and Sentinel-2 due to their publicly available data which can be used in different real-world applications like disaster management, urban planning, agriculture, climate studies, etc. MODIS launched in 1999 provides data at a spatial resolution of 250500m with a revisit time of daily or 8 days depending on the product. Landsat-8/9, launched in 2013/2021, has a spatial resolution of 30m with a revisit time of 16 days, and Sentinel-2 launched in 2015 has a spatial resolution of 20m with a revisit time of 10/5 days. Spatiotemporal Applications: We consider applications viz. CYP, SCP, and SEP. These applications abide by the permutation invariant property where value of a pixel contributes to the end task irrespective of its position in the image (You et al. 2017). Accurate CYP is crucial for ensuring food security around the globe. Researchers have tried to predict crop yield with climate data (Fan et al. 2022; Verma et al. 2016; de Wit, Duveiller, and Defourny 2012; Guruprasad, Saurav, and Randhawa 2019) using traditional machine learning models. These models lack in capturing complex relationships between meteorological attributes and yield. A few researchers applied deep learning models and incorporated genotype (Khaki and Wang 2019; M˚aløy et al. 2021) and/or soil (Sun et al. 2020; Kaur et al. 2023) information. Recent studies attempt to include physics-guided patterns (He et al. 2023), and topological features (Jiang et al. 2022) along with climate data. Research shifted from meteorological data to the use of satellite image data after getting easy access to it. However, it is difficult to process image data due to its high volume. 
Therefore, vegetation indices are directly computed from MODIS product MOD13Q1 for a location. (Sun et al., 2020) and (You et al. 2017) converted MODIS images into histograms and used histogram time series to predict crop yield. Authors (Kaur et al. 2022) presented a deep learning model for MODIS, Landsat, and Sentinel histogram time series and fusion model (Kaur, Goyal, and Goyal 2023) to predict crop yield and highlighted the importance of high spatial and high temporal resolution of data required for the application. However, researchers faced data scarcity for training models using high-resolution satellites. The other two applications have gained interest only recently, and very little work is available in the literature. (Xiao et al. 2022) applied a support vector machine on atmospheric-oceanic dynamics data SCP. However, satellite data has great importance for inaccessible and hazardous regions where gathering data physically is not possible (Xiao et al. 2022). SEP is done to find a suitable location for the installation of solar plants and reduce the dependence on fossil fuels for economic development. (Jebli et al. 2021) predicted solar energy using random forest on meteorological data. The existing studies for the listed applications do not use satellite data. However, they need analysis of SITS with high spatial & high temporal resolutions but data from a single satellite system has a trade-off between them. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8437 Spatiotemporal Fusion: To handle the trade-off, spatiotemporal fusion is a possible solution. STARFM (Gao et al. 2006) creates synthetic Landsat-like image at timestamp t + 1 by fusing MODIS and Landsat images at time t. It is a linear model which calculates the reflectance value of a pixel by a weighted sum of the neighboring pixels. It is a pixel-based method that needs at least one pair of images captured on the same day. Developed variants of the method also suffer from similar challenges. Another study uses a linear regression on pixels to generate images (Ping, Meng, and Su 2018). The pixel-based methods blindly use noisy pixels in the fusion process, thus propagating the noise in the neighboring pixels of the predicted image (Tan et al. 2022). Given the limitations of pixel-based methods, learningbased approaches are gaining interest due to their ability to capture complex relationships in data without relying on predefined assumptions. (Wang et al. 2017) and (Wei et al. 2017) used downscaling & upscaling to generate an image having a spatial resolution of Landsat-8 with the help of a MODIS image. A few attempts have been made to use advanced Generative adversarial networks (GAN) for image generation. (Bouabid et al. 2020) generate a Landsat image at time t using MODIS and Landsat images at t and t −1, respectively. Similarly, (Tan et al. 2022) used GAN to handle noise while generating a Landsat-like image using a MODIS image at timestamp t and Landsat images at t −1 and t + 1. Due to the complex image generation process, generative models have been applied to small-scale datasets having only a few locations (Bouabid et al. 2020). The applications under consideration require time series at a high temporal resolution and interpolating images between two consecutive images is inefficient and computationally expensive. Moreover, this approach increases data volume twofold. Also, the generated images can increase the already existing noise in the original images. 
Our proposed method overcomes these problems.

Problem Formulation
We consider three spatiotemporal forecasting problems: CYP, SCP, and SEP. The goal is to predict \hat{y}_{c,z} ∈ {crop yield, percentage of area under snow, solar energy produced} for a county c at prediction time granularity z, which is a year, a month, and a fortnight for CYP, SCP, and SEP, respectively. Let the input time series dataset be

X_z = \{[x_1^1, x_1^2, \dots, x_1^t], [x_2^1, x_2^2, \dots, x_2^t], \dots, [x_{z-1}^1, x_{z-1}^2, \dots, x_{z-1}^t]\}

where t represents the number of timestamps, which depends on the application and the satellite, and x_1^t is the image of the first series at timestamp t. For example, for the soybean crop, t = 15 for Landsat-8 and t = 30 for MODIS.

Proposed Framework
We propose two models: PatchNet and FuSITSNet. PatchNet extracts prominent features from a high spatial resolution SITS by a partial spatial traversal that covers the hotspot areas using a patch selection mechanism. FuSITSNet is a twofold fusion model that fuses two SITS at the feature level. It uses two encoders: 1) an image time series encoder (TSE), and 2) PatchNet (shown as a black box in Figure 1) for image time series of high spatial resolution satellites. The representations of the two time series are then passed to a twofold fusion module (also shown as a black box in Figure 1). It learns complementary high spatial and temporal features to give a joint representation of the two time series. A regression module is applied to the joint features for the final prediction. The overview of FuSITSNet is shown in Figure 1, and all the modules are described in the subsequent subsections.

Figure 1: FuSITSNet.

PatchNet
PatchNet is designed to encode high spatial resolution SITS, which are otherwise impractical to process. It works on the image time series iteratively over multiple patch time series (patchTS) and uses the idea of a beam search to optimize the patch selection process. A patch is selected in the spatial dimension, and a patchTS consists of the entire time series for that patch. The architecture of PatchNet is given in Figure 2. We divide the image time series into a spatial virtual grid, resulting in multiple patchTS, one for each cell. From now on, we refer to a patchTS as a patch. The patches are processed using TSE, and their representations are passed to the patch selection module (PSM). PSM uses attention scores to identify the top 'k' patches, which are then forwarded to the neighbor selector (NS). NS determines the unprocessed neighboring patches of the top 'k' patches and also creates a list of patches to be processed in the next iteration. The process continues until a fraction (1/p) of the SITS has been processed. The enhanced patch representations obtained from PSM are passed to the embedding generation module, which outputs the embedding of the entire SITS learned by the network over multiple iterations. The pseudo-code is given in Algorithm 1.

Time Series Encoder (TSE) gives a linear representation of the input patch. It consists of two submodules, a 3DCNN network and a Spatial Attention Mask (SAM), followed by a linear layer.

3DCNN Module (Gavahi, Abbaszadeh, and Moradkhani 2021) consists of three convolution layers having 10, 15, and 20 filters with zero padding. Each convolution layer is followed by a 3D max-pool layer. The 3DCNN leverages spatial and temporal features simultaneously and learns more informative representations of the volume.
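A minimal PyTorch sketch of this submodule is given below. Only the filter counts (10, 15, 20), the zero padding, and the convolution + 3D max-pool pattern come from the text; the kernel sizes, the ReLU activations, and the input layout are our assumptions.

```python
import torch
import torch.nn as nn

class CNN3D(nn.Module):
    """Three Conv3d layers (10/15/20 filters), each followed by 3D max pooling."""
    def __init__(self, in_bands=5):
        super().__init__()
        layers, channels = [], [in_bands, 10, 15, 20]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),  # zero padding
                       nn.ReLU(),
                       nn.MaxPool3d(kernel_size=2, ceil_mode=True)]       # 3D max pool
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, bands, time, height, width) patch time series volume
        return self.net(x)

# Example: a 15-timestamp, 32x32 patch with 5 surface-reflectance bands.
feats = CNN3D()(torch.randn(1, 5, 15, 32, 32))
```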
Spatial Attention Mask (SAM): We followed (Mohla et al. 2020) and modified it for our problem. It has six 2D convolution layers, each followed by a batch normalization layer to reduce internal covariate shift and model overfitting. Skip connections are used after every two convolution layers to improve the information flow within the network and mitigate the vanishing gradient problem. Global pooling is performed by two operations, average pooling and max pooling, applied along the channel axis; their outputs are concatenated to create an efficient feature descriptor. A convolution layer is applied over the feature descriptor to obtain the highlighted regions.

[Figure 2: PatchNet]

Patch Selection Module (PSM): We utilize the self-attention mechanism (Eqs. (1) and (2)) to focus on the most important k patches out of the n input patches. The PSM learns enhanced representations of all the patches across iterations and gives a score for each patch based on its contribution to the end task. The input to the PSM is $R = \{r_1, r_2, \dots, r_n\}$, where n is the total number of patches and $r_i$ is the linear representation of each patch after being processed by the TSE. The query (Q), key (K), and value (V) for self-attention are

$Q = R \times w_q, \quad K = R \times w_k, \quad V = R \times w_v$  (1)

where $w_q$, $w_k$, and $w_v$ are the weight matrices for Q, K, and V, respectively, and

$A = \mathrm{softmax}(Q K^T)$  (2)

where $A = \{a_1, a_2, \dots, a_n\}$ is the attention score matrix for the n patches and each $a_s$ is of size b, the size of the patch embedding. The collective score, i.e., the contribution of patch s to the end task, is calculated as

$S_s = \sum_{i=0}^{b} a_s^i, \quad s = 1, \dots, n$  (3)

$l, \bar{l} = \mathrm{top\_k}(S), \quad \text{where } |l| + |\bar{l}| = n$  (4)

where top_k is the function that returns a list l of the indices of the top k patches and a list $\bar{l}$ of the remaining patches to be used in the next iteration of the selection process. The PSM also enhances the patch representations R as

$\tilde{R} = S \times V$  (5)

(a code sketch of this selection step is given at the end of this subsection).

Algorithm 1: PatchNet
Input: SITS
Output: Embedding of SITS
Initialize: m = 0 and |P| = total number of patches in SITS
while m ≠ |P|/p do
  select n random patches
  R = TSE(patchTS) for all n patches  // linear representations of the n patches
  l, l̄, R̃ = PSM(R)  // top-k patch list and enhanced patch representations
  n′ = NS(l)  // NS gives the neighbors of each patch in l
  select n − n′ random patches
  m = m + n
  E_L = EG(R̃)
end while
return E_L

Neighbor Selector (NS) finds the untraversed neighboring patches of all k patches. For a patch $p_{ij}$, the set of neighbors is $\{p_{ef}\} - \{p_{ij}\}$, with $e = i-1, i, i+1$ and $f = j-1, j, j+1$. Selecting the neighboring patches ensures that focus is maintained near the hotspots, which leverages the geospatial information to boost the prediction. We select a few untraversed random patches for the next iteration to bring the number of patches back to n.

Embedding Generation (EG) is a two-level process consisting of two linear layers. The first layer is used to get the representation of each patch $p_{ij}$ in an iteration. The embeddings of all selected patches across iterations are then concatenated and passed to the second linear layer, which gives a representation of the entire SITS.
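The following is a minimal sketch of the selection step in the PSM (Eqs. (1)–(5)). The text leaves some shapes ambiguous (each $a_s$ is stated to have size b, while softmax(QKᵀ) is n×n), so this is one consistent reading: each patch is scored by the total attention it receives, and the representations are enhanced by the attention-weighted values. All names are illustrative.

import torch
import torch.nn as nn

class PatchSelectionModule(nn.Module):
    def __init__(self, b: int):
        super().__init__()
        self.wq = nn.Linear(b, b, bias=False)   # w_q in Eq. (1)
        self.wk = nn.Linear(b, b, bias=False)   # w_k
        self.wv = nn.Linear(b, b, bias=False)   # w_v

    def forward(self, R: torch.Tensor, k: int):
        # R: (n, b) patch representations produced by the TSE
        Q, K, V = self.wq(R), self.wk(R), self.wv(R)      # Eq. (1)
        A = torch.softmax(Q @ K.T, dim=-1)                # Eq. (2): (n, n)
        S = A.sum(dim=0)                                  # Eq. (3): per-patch score
        l = torch.topk(S, k).indices                      # Eq. (4): top-k indices
        kept = set(l.tolist())
        l_bar = torch.tensor([i for i in range(R.size(0)) if i not in kept])
        R_enh = A @ V                                     # Eq. (5): enhanced reps
        return l, l_bar, R_enh

psm = PatchSelectionModule(b=128)
l, l_bar, R_tilde = psm(torch.randn(32, 128), k=4)  # 32 candidate patches, keep 4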
FuSITSNet
FuSITSNet (Figure 1) consists of two encoders, the TSE and PatchNet, and a fusion module. We use FuSITSNet to fuse the two SITS from Landsat-8 and MODIS. We process the Landsat SITS using PatchNet. MODIS has a coarser spatial resolution and can be processed as a whole, so we use the TSE for its time series. However, the TSE can be replaced with PatchNet to generate embeddings if the second SITS also has a high spatial resolution.

[Figure 3: Fusion Module]

Fusion Module: The fusion module, given in Figure 3, is a twofold module that takes the embeddings E_M and E_L from the two encoders for MODIS and Landsat-8, respectively. It learns the features from the two modalities using two sub-modules, a patch alignment module and cross-modality attention.

Patch Alignment Module (PAM): We use a PAM inspired by the text-video correlation-aware module (Chen et al. 2022) to align the patches of the fine spatial resolution modality (Landsat-8) with the corresponding regions in the MODIS time series and learn fine temporal patterns for the aligned patches. This also suppresses the noise present in the two SITS and mitigates its effect on the end task (see the sketch at the end of this subsection). In the alignment process we calculate the similarity $Sim_L$ of the Landsat-8 patches with MODIS as

$Sim_L = E_L (E_M)^T$  (6)

Softmax is applied over $Sim_L$, and we use an average patch-wise aggregator over the MODIS embeddings:

$P_{ML}^i = \mathrm{softmax}(Sim_{L_i}) E_M, \quad 1 \le i < n$  (7)

$P_{ML} = [P_{ML}^1; P_{ML}^2; \cdots; P_{ML}^n]$  (8)

where n is the number of patches traversed and $P_{ML}^i$ is the similarity-aware aggregated MODIS representation of the i-th Landsat patch.

Cross-Modal Attention (CMA) learns the inter-modality relationships from the two embeddings E_M and E_L by applying bi-directional cross-modality attention, taking queries from both modalities to leverage their profound features. The scaled dot-product attention between the hotspot spatial features of Landsat and the highlighted temporal features of MODIS gives joint features of high quality in both aspects. This helps the model capture the complementary aspects of the two modalities, utilizing the information from one modality to compensate for the lower quality of the other. The module also covers the untraversed Landsat-8 regions with the help of the MODIS time series. The outputs of the module are denoted F_M and F_L. We concatenate $P_{ML}$, F_M, and F_L and apply multi-head self-attention to highlight the combined hotspot features. It is followed by a feed-forward network comprising three linear layers with the Gaussian Error Linear Unit (GELU) activation function, followed by layer normalization. Lastly, a regression layer is applied to get the prediction.
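A minimal sketch of the patch alignment in Eqs. (6)–(8), assuming E_L holds one d-dimensional embedding per traversed Landsat-8 patch and E_M one per MODIS timestamp (the exact layout of E_M is our assumption):

import torch

def patch_alignment(E_L: torch.Tensor, E_M: torch.Tensor) -> torch.Tensor:
    # E_L: (n, d) embeddings of the n traversed Landsat-8 patches
    # E_M: (m, d) MODIS embeddings, e.g. one per timestamp (assumed layout)
    sim_L = E_L @ E_M.T                      # Eq. (6): (n, m) similarities
    weights = torch.softmax(sim_L, dim=-1)   # softmax over the MODIS entries
    P_ML = weights @ E_M                     # Eq. (7): (n, d), one aggregate per patch
    return P_ML.reshape(-1)                  # Eq. (8): concatenate [P1; ...; Pn]

P_ML = patch_alignment(torch.randn(16, 64), torch.randn(30, 64))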
Dataset Details
We considered the top producers of corn and soybean in the United States for CYP. The crop yield labels are collected from Quick Stats (USDA 2010), compiled by the United States Department of Agriculture (USDA). For SCP, we considered the counties that experience an average snowfall of more than 250 inches per year. The percentage of the area covered under snow is obtained from the MODIS product MOD10A1 (NASA 2000). For SEP, we considered 5 states. Details are given in Appendix A.

Data Preparation
Satellite images are by default stored as float values and require more bits for storage. We applied bit-precision compression, replacing float values with unsigned integers (uint) (Hubara et al. 2016), which reduced the storage requirement by a factor of four. The difference in RMSE for CYP is less than 1% when our model is applied to 100 counties on float SITS versus uint SITS. We used uint SITS with the five surface reflectance bands common to both satellites (MODIS and Landsat-8) for all the experiments conducted. For data preparation details, see Appendix B.

Learning Objectives
We use a margin contrastive loss for contrastive learning and the mean squared error (MSE) for the prediction task.

Margin Contrastive Loss: In our twofold fusion model, we innovatively applied the margin contrastive loss (Shah et al. 2022). Utilizing a contrastive loss in regression problems is challenging since there are no explicit class categories to directly determine positive and negative pairs for training; the number of "classes" is roughly equal to the size of the dataset, rendering a traditional contrastive loss difficult to implement. To overcome this challenge, we used a batch-wise margin contrastive loss (Kaur, Goyal, and Goyal 2023). We selected a margin of 0.5, which minimized RMSE after experimenting with the values [0.1, 0.5, 1.0]. Contrastive learning gives us two losses, $loss_{pos}$ and $loss_{neg}$, for positive and negative pairs, respectively.

Mean Squared Error (MSE): We used the mean squared error for the regression task. It gives the mean squared error between the actual target $y_{c,z}$ and the predicted output $\hat{y}_{c,z}$ over all the considered location-year pairs.

The total loss (L) for the model is:

$L = loss_{pos} + loss_{neg} + MSE$
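The following is a minimal sketch of this combined objective. The batch-wise pairing rule, where samples whose targets differ by less than a threshold tau form positive pairs, is our assumption for illustration; the text specifies only that a margin of 0.5 is used.

import torch
import torch.nn.functional as F

def total_loss(z, y_hat, y, margin: float = 0.5, tau: float = 0.1):
    # z: (B, d) fused embeddings; y_hat, y: (B,) predicted and true targets
    d = torch.cdist(z, z)                                   # pairwise embedding distances
    same = (torch.cdist(y[:, None], y[:, None]) < tau).float()  # assumed pairing rule
    eye = torch.eye(len(y))
    loss_pos = (same * (1 - eye) * d.pow(2)).mean()         # pull positive pairs together
    loss_neg = ((1 - same) * F.relu(margin - d).pow(2)).mean()  # push negatives beyond margin
    return loss_pos + loss_neg + F.mse_loss(y_hat, y)       # L = loss_pos + loss_neg + MSE

loss = total_loss(torch.randn(8, 32), torch.randn(8), torch.randn(8))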
Model                         Corn     Soybean
CNN (You et al. 2017)         24.617   8.346
CNN+GP (You et al. 2017)      23.881   8.343
CNN+LSTM* (Sun et al. 2020)   23.632   8.370
CYN** (Kaur et al. 2022)      21.819   7.370
PatchNet                      21.469   7.290
PatchNet+M                    20.631   6.963
*uses additional soil data; **uses both meteorological & soil data

Table 1: Comparison: PatchNet vs histogram models

Baselines for Comparison
We considered three types of models for comparison: (i) histogram models, which work on histogram TS of satellite data; (ii) our baselines for single-modality SITS; and (iii) our baselines on high temporal SITS generated using generative fusion models.

Histogram models: We considered four existing CYP models working on histogram TS to compare with the proposed PatchNet: CNN (You et al. 2017), CNN+GP (You et al. 2017), CNN+LSTM (Sun et al. 2020), and CYN (Kaur et al. 2022). CNN and CNN+GP use only surface reflectance data and do not exploit the temporal dependency in the data. CNN+LSTM models CYP as a temporal problem using soil data and surface reflectance TS; the authors processed raw features using a 2DCNN and used an LSTM to model the sequence embeddings. CYN modeled CYP as a spatiotemporal problem and used soil and meteorological data along with surface reflectance histograms. These models were originally applied to different locations and time durations; we used data for the same locations and time duration in all models for a fair comparison. To the best of our knowledge, there are no existing models working on histograms for the other two applications.

Baseline models: To the best of our knowledge, there is no method that works with SITS for spatiotemporal problems. We applied the proposed PatchNet and TSE models to single-modality image time series of Landsat-8 and MODIS, respectively, and compared them with FuSITSNet to assess the significance of fusing two time series over a single modality.

Generative fusion models: We applied three existing generative fusion models, viz. STARFM (Gao et al. 2006), RSFN (Tan et al. 2022), and GAN (Bouabid et al. 2020), to enhance the temporal resolution of the Landsat time series. We generated images at every mid-timestamp to obtain a time series of 8-day frequency. We then applied PatchNet for prediction on the enhanced SITS and compared the results with FuSITSNet. Details of the models used for comparison are in Appendix C.

Experiments
We performed experiments using PyTorch 1.11.0 and CUDA 11.7 on an A100 GPU server with 80 GB RAM. A model is trained for 50 epochs with a batch size of 8 using the Adam optimizer with learning rate η. We trained the model on 5 years of data (2014-2018) and tested on 2 years (2019 and 2020). To predict the output for the z-th year, training is conducted until the (z−1)-th year. For CYP, η = 0.0005 for a single modality (TSE and PatchNet) and η = 0.000005 for FuSITSNet. For SCP and SEP, η = 0.00001 for all three models. We performed each experiment 5 times and observed a standard deviation of less than 0.2 for all proposed models. The evaluation metric used is Root Mean Squared Error (RMSE). Details are given in Appendix D.

Model                  CYP Corn   CYP Soy   SCP      SEP
TSE (MODIS)            23.335     7.545     17.167   8.863
PatchNet (Landsat-8)   21.469     7.290     12.813   8.543
PatchNet (STARFM)      20.289     6.308     —        7.227
PatchNet (RSFN)        22.839     6.432     12.329   8.012
PatchNet (GAN)         18.102     6.296     11.951   7.043
FuSITSNet              16.1925    5.0389    9.2308   2.0447

Table 2: FuSITSNet vs single-modality baselines

Results and Analysis
Significance of using SITS over histogram time series: The first set of experiments compares the proposed PatchNet with existing CYP models using histogram TS. Table 1 presents the RMSE (in bu/ac) achieved for corn and soybean yield prediction using the various models. For corn yield prediction, RMSE is reduced by ≈12% and 10% relative to CNN and CNN+GP, respectively; these two models use only surface reflectance histograms. The reduction in RMSE is 9% relative to the CNN+LSTM model, which also incorporates meteorological data. CYN uses both meteorological and soil data along with surface reflectance histograms; PatchNet outperforms CYN even without using any additional data. The error is reduced by a further 6% when meteorological data is incorporated into PatchNet.

Comparison of FuSITSNet with single-modality baselines: Table 2 presents the RMSE obtained by FuSITSNet and the single-modality baselines TSE and PatchNet using MODIS and Landsat-8 time series, respectively. It is evident from the results that PatchNet (Landsat-8) performed better than TSE (MODIS), with ≈8% and 3.5% lower RMSE for corn and soybean yield prediction, respectively. RMSE is reduced from 17.17 to 12.81 for SCP and from 8.86 to 8.54 for SEP, improvements of 25% and 3.6%, respectively. This shows the importance of using high spatial resolution data for these applications. The results improve further with FuSITSNet for all three applications: RMSE is reduced by ≈24% and 30% for corn and soybean yield prediction, respectively, compared with PatchNet (Landsat-8). A similar pattern is observed for SCP, with an improvement of 46% over TSE and ≈28% over PatchNet (Landsat-8). The maximum improvement is observed for SEP, at almost 76%. This large reduction in error signifies that FuSITSNet exploits both high temporal and high spatial features and is thus suitable for spatiotemporal applications. The performance of FuSITSNet improves further when meteorological data is used; details are given in Appendix E.

Model                 Corn     Soy     SCP      SEP
FuSITSNet             16.192   5.038   9.230    2.044
FuSITSNet (no PAM)    17.198   5.286   12.448   2.501
FuSITSNet (no CMA)    17.242   5.847   11.523   2.749

Table 3: Ablation Study
We also compared the models in terms of the number of parameters and the running time required. For the generative fusion models, the number of parameters and the training time are the sums of those needed for the generation and prediction processes. FuSITSNet has more parameters, but its running time is approximately one quarter of that of the other fusion models, as it does not need a generation process.

Our baselines on enhanced SITS: We generated Landsat-8 images at every mid-timestamp. It can be observed from Table 2 that RMSE is reduced when PatchNet is applied over the enhanced time series compared to PatchNet (Landsat-8), with the exception of corn yield prediction. Of the three generative models, the GAN-generated image time series performed best, with improvements of ≈16%, 14%, 7%, and 18% for corn, soybean, snow cover, and solar energy prediction, respectively.

Comparison of FuSITSNet with generative fusion models: Table 2 shows that FuSITSNet outperforms all the scenarios where PatchNet is applied to SITS generated by the existing generative fusion models. RMSE is reduced by ≈20% for CYP in comparison to the pixel-based model STARFM. The reduction in RMSE for FuSITSNet is 29% and 10% in comparison to the learning-based models RSFN and GAN, respectively, for corn yield prediction, and the corresponding reductions are 21% and 19% for soybean. The best results are obtained for solar energy prediction, where RMSE is reduced by ≈70% using FuSITSNet relative to PatchNet (GAN).

Ablation Study: We carried out an ablation study to show the importance of the modules in FuSITSNet. Considering the variations in results in Table 3, we observe that RMSE increases significantly without PAM, with a maximum increase of 25% for SCP followed by 18% for solar energy. Similarly, the performance of the model degrades without cross-modality attention. This shows that both modules are important to effectively exploit the high spatial and high temporal features in the two SITS.

Significance of the patch selection mechanism: We conducted experiments without using the PSM and NS in PatchNet, replacing them with random patch selection. The RMSE achieved by random selection is 24.29 bu/ac and 9.98 bu/ac for corn and soybean yield prediction, compared to 21.47 bu/ac and 7.29 bu/ac, respectively, using PatchNet. There is thus a significant improvement in model performance: the PSM and NS collectively work to exploit the required hotspot features in the SITS and eliminate the need to fully process it. They also suppress the noise in the two modalities (a minimal sketch of the NS neighbor computation follows at the end of this section).

Deciding the (1/p)-th traversal of SITS: The next set of experiments is performed for PatchNet to find the optimal fraction of the Landsat-8 image time series to traverse.

[Figure 4: Deciding p for (1/p)th traversal of SITS; x-axis: p, left y-axis: RMSE (bu/ac), right y-axis: total time (hrs)]

Figure 4 shows the computation time required and the RMSE curve for corn yield prediction when varying p over 5, 4, 3, and 2 with a patch size of H×H, H = 64. There is an improvement of ≈6% in the performance of the model when the traversed fraction changes from p = 5 to p = 4, and RMSE does not change much after that. The computation time from p = 5 to p = 4 is almost constant, but it increases linearly afterwards. We also experimented with a patch size of H = 128 for p = 4 and found that RMSE is reduced to 21.42 bu/ac, which is only 0.2% lower than with patch size 64, while the computation time required to process a patch of size 128 increased 1.5 times. To maintain the trade-off between computation time and RMSE, we report results with p = 4 and patch size H = 64.
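As referenced above, a minimal sketch of the untraversed-neighbor computation in the NS; the grid-bounds handling is our assumption:

def neighbors(i, j, rows, cols, traversed):
    # 8-neighbourhood of patch (i, j): e in {i-1, i, i+1}, f in {j-1, j, j+1},
    # excluding the patch itself, out-of-grid cells, and already traversed patches.
    out = []
    for e in (i - 1, i, i + 1):
        for f in (j - 1, j, j + 1):
            if (e, f) != (i, j) and 0 <= e < rows and 0 <= f < cols \
                    and (e, f) not in traversed:
                out.append((e, f))
    return out

print(neighbors(0, 0, rows=8, cols=8, traversed={(0, 1)}))  # [(1, 0), (1, 1)]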
Conclusion
Satellite image technology is increasingly being adopted by researchers worldwide for Earth observation to solve problems related to the environment and climate change. The democratization of this technology is still hampered by the need to process huge volumes of data and by the unavailability of high spatial and high temporal resolution images from a single publicly available satellite system. We proposed two models, PatchNet and FuSITSNet, to overcome these problems. PatchNet makes it feasible to efficiently process high spatial resolution SITS, whereas FuSITSNet fuses two image time series to obtain a joint representation that captures the high spatial resolution features of one satellite system and the high temporal resolution features of another. We fused Landsat-8 and MODIS SITS to predict crop yield, snow cover, and solar energy for 2000 US counties and obtained state-of-the-art results. One salient feature of the models is that high spatial and temporal features are learned without image generation, so the voluminous data is not increased further, as is the case with generative approaches. Another salient feature is that the fusion module of FuSITSNet suppresses noise. The performance of the proposed approach improves further when meteorological data is incorporated. For now, the proposed approach has been applied only to permutation-invariant applications; in the future, we plan to extend it to other applications such as land use and land cover classification.

Acknowledgments
This work was carried out in the Disruptive Technologies Lab, which is supported by the Department of Science and Technology (DST), Govt. of India, in the form of a FIST Level-1 grant to the Department of CSIS, BITS Pilani.

References
Bouabid, S.; Chernetskiy, M.; Rischard, M.; and Gamper, J. 2020. Predicting Landsat reflectance with deep generative fusion. arXiv preprint arXiv:2011.04762.
Chen, X.; Zhang, N.; Li, L.; Deng, S.; Tan, C.; Xu, C.; Huang, F.; Si, L.; and Chen, H. 2022. Hybrid transformer with multi-level fusion for multimodal knowledge graph completion. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 904–915.
Choudhary, K.; Pandey, V.; Murthy, C.; and Poddar, M. 2019. Synergetic use of optical, microwave and thermal satellite data for non-parametric estimation of wheat grain yield. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42: 195–199.
de Wit, A.; Duveiller, G.; and Defourny, P. 2012. Estimating regional winter wheat yield with WOFOST through the assimilation of green area index retrieved from MODIS observations. Agricultural and Forest Meteorology, 164: 39–52.
European Space Agency Signature. 2017. Sentinel webpage. https://www.netiq.com/documentation/sentinel-82/user/data/bookinfo.html. Accessed: 2023-2-25.
Fan, H.; Zhang, F.; and Gao, Y. 2020. Self-supervised time series representation learning by inter-intra relational reasoning. arXiv preprint arXiv:2011.13548.
Fan, J.; Bai, J.; Li, Z.; Ortiz-Bobea, A.; and Gomes, C. P. 2022. A GNN-RNN approach for harnessing geospatial and temporal information: application to crop yield prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 11873–11881.
Gao, F.; Masek, J.; Schwaller, M.; and Hall, F.
2006. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Transactions on Geoscience and Remote Sensing, 44(8): 2207–2218.
Gavahi, K.; Abbaszadeh, P.; and Moradkhani, H. 2021. DeepYield: A combined convolutional neural network with long short-term memory for crop yield forecasting. Expert Systems with Applications, 184: 115511.
Gupta, Y.; Goyal, N.; Varghese, V. J.; and Goyal, P. 2023. Utilizing MODIS Fire Mask for Predicting Forest Fires Using Landsat-9/8 and Meteorological Data. In 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), 1–10. IEEE.
Guruprasad, R. B.; Saurav, K.; and Randhawa, S. 2019. Machine learning methodologies for paddy yield estimation in India: a case study. In IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, 7254–7257. IEEE.
He, E.; Xie, Y.; Liu, L.; Chen, W.; Jin, Z.; and Jia, X. 2023. Physics Guided Neural Networks for Time-Aware Fairness: An Application in Crop Yield Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 14223–14231.
Hubara, I.; Courbariaux, M.; Soudry, D.; El-Yaniv, R.; and Bengio, Y. 2016. Binarized neural networks. Advances in Neural Information Processing Systems, 29.
ISRO. 2019. Cartosat. https://www.isro.gov.in/CARTOSAT_1.html. Accessed: 2023-5-18.
Jebli, I.; Belouadha, F.-Z.; Kabbaj, M. I.; and Tilioua, A. 2021. Prediction of solar energy guided by Pearson correlation using machine learning. Energy, 224: 120109.
Ji, Z.; Pan, Y.; Zhu, X.; Zhang, D.; and Wang, J. 2022. A generalized model to predict large-scale crop yields integrating satellite-based vegetation index time series and phenology metrics. Ecological Indicators.
Jiang, T.; Huang, M.; Segovia-Dominguez, I.; Newlands, N.; and Gel, Y. R. 2022. Learning space-time crop yield patterns with zigzag persistence-based LSTM: Toward more reliable digital agriculture insurance. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 12538–12544.
Kaur, A.; Goyal, P.; and Goyal, N. 2023. LSFuseNet: Dual-Fusion of Landsat-8 and Sentinel-2 Multispectral Time Series for Permutation Invariant Applications. In 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), 1–10. IEEE.
Kaur, A.; Goyal, P.; Rajhans, R.; Agarwal, L.; and Goyal, N. 2023. Fusion of multivariate time series meteorological and static soil data for multistage crop yield prediction using multi-head self attention network. Expert Systems with Applications, 226: 120098.
Kaur, A.; Goyal, P.; Sharma, K.; Sharma, L.; and Goyal, N. 2022. A Generalized Multimodal Deep Learning Model for Early Crop Yield Prediction. In International Conference on Big Data, 1272–1279. IEEE.
Khaki, S.; and Wang, L. 2019. Crop yield prediction using deep neural networks. Frontiers in Plant Science, 10: 621.
Måløy, H.; Windju, S.; Bergersen, S.; Alsheikh, M.; and Downing, K. L. 2021. Multimodal performers for genomic selection and crop yield prediction. Smart Agricultural Technology, 1: 100017.
Mohla, S.; Pande, S.; Banerjee, B.; and Chaudhuri, S. 2020. FusAtNet: Dual attention based spectrospatial multimodal fusion network for hyperspectral and LiDAR classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 92–93.
NASA. 2000. MODIS Product. https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD10A1. Accessed: 2022-4-13.
NASA. 2015. MODIS. https://lpdaac.usgs.gov/data/get-started-data/collection-overview/. Accessed: 2022-12-06.
NASA. 2016.
Landsat webpage. https://www.usgs.gov/faqs/what-are-band-designations-landsat-satellites?qt-news_science_products=0#qt-news_science_products. Accessed: 2022-2-16.
Ping, B.; Meng, Y.; and Su, F. 2018. An enhanced linear spatio-temporal fusion method for blending Landsat and MODIS data to synthesize Landsat-like imagery. Remote Sensing, 10(6): 881.
Planet. 2019. PlanetScope. https://www.planet.com/. Accessed: 2023-5-18.
Poonam Goyal. 2022. GitHub. https://github.com/DrPoonamGoyal/FuSITSNet-at-AAAI2024. Accessed: 2023-7-15.
Sakamoto, T. 2020. Incorporating environmental variables into a MODIS-based crop yield estimation method for United States corn and soybeans through the use of a random forest regression algorithm. ISPRS Journal of Photogrammetry and Remote Sensing, 160: 208–228.
Shah, A.; Sra, S.; Chellappa, R.; and Cherian, A. 2022. Max-Margin Contrastive Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 8220–8230.
Skakun, S.; Kalecinski, N. I.; Brown, M. G.; Johnson, D. M.; Vermote, E. F.; Roger, J.-C.; and Franch, B. 2021. Assessing within-field corn and soybean yield variability from WorldView-3, Planet, Sentinel-2, and Landsat 8 satellite imagery. Remote Sensing, 13(5): 872.
Sun, J.; Di, L.; Sun, Z.; Shen, Y.; and Lai, Z. 2019. County-level soybean yield prediction using deep CNN-LSTM model. Sensors, 19(20).
Sun, J.; Lai, Z.; Di, L.; Sun, Z.; Tao, J.; and Shen, Y. 2020. Multilevel deep learning network for county-level corn yield estimation in the US corn belt. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13: 5048–5060.
Tan, Z.; Gao, M.; Yuan, J.; Jiang, L.; and Duan, H. 2022. A Robust Model for MODIS and Landsat Image Fusion Considering Input Noise. IEEE Transactions on Geoscience and Remote Sensing, 60: 1–17.
USDA. 2010. USDA/NASS QuickStats Ad-hoc Query Tool. https://quickstats.nass.usda.gov/. Accessed: 2022-7-15.
Verma, U.; Piepho, H.; Goyal, A.; Ogutu, J.; Kalubarme, M.; et al. 2016. Role of climatic variables and crop condition term for mustard yield prediction in Haryana. International Journal of Agricultural and Statistical Sciences, 12: 45–51.
Wang, Q.; Zhang, Y.; Onojeghuo, A. O.; Zhu, X.; and Atkinson, P. M. 2017. Enhancing spatio-temporal fusion of MODIS and Landsat data by incorporating 250 m MODIS data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(9): 4116–4123.
Wei, J.; Wang, L.; Liu, P.; Chen, X.; Li, W.; and Zomaya, A. Y. 2017. Spatiotemporal fusion of MODIS and Landsat-7 reflectance images via compressed sensing. IEEE Transactions on Geoscience and Remote Sensing, 55(12): 7126–7139.
Xiao, X.; He, T.; Liang, S.; Liu, X.; Ma, Y.; Liang, S.; and Chen, X. 2022. Estimating fractional snow cover in vegetated environments using MODIS surface reflectance data. International Journal of Applied Earth Observation and Geoinformation, 114: 103030.
You, J.; Li, X.; Low, M.; Lobell, D.; and Ermon, S. 2017. Deep Gaussian process for crop yield prediction based on remote sensing data. In Thirty-First AAAI Conference on Artificial Intelligence.
Rethinking Reverse Distillation for Multi-Modal Anomaly Detection
Zhihao Gu1*, Jiangning Zhang2, Liang Liu2, Xu Chen2, Jinlong Peng2, Zhenye Gan2, Guannan Jiang3, Annan Shu3, Yabiao Wang2, Lizhuang Ma1†
1School of Electronic and Electrical Engineering, Shanghai Jiao Tong University
2YouTu Lab, Tencent
3Contemporary Amperex Technology Co. Limited (CATL)
[email protected], {vtzhang, cxxuchen, jeromepeng, wingzygan}@tencent.com, [email protected]
*Work done when Zhihao Gu was an intern at CATL. †Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
In recent years, there has been significant progress in employing color images for anomaly detection in industrial scenarios, but color alone is insufficient for identifying anomalies that are invisible in RGB images. As a supplement, introducing extra modalities such as depth and surface normal maps can help detect these anomalies. To this end, we present a novel Multi-Modal Reverse Distillation (MMRD) paradigm that consists of a frozen multi-modal teacher encoder, which generates distillation targets, and a learnable student decoder, which aims to restore the multi-modal representations of the teacher. Specifically, the teacher extracts complementary visual features from the different modalities via a siamese architecture and then parameter-freely fuses this information at multiple levels to form the targets of distillation. The student learns modality-related priors from the teacher representations of normal training data and performs interaction between them to form multi-modal representations for target reconstruction. Extensive experiments show that MMRD outperforms recent state-of-the-art methods on both anomaly detection and localization on the MVTec 3D-AD and Eyecandies benchmarks. Code will be available upon acceptance.

Introduction
Anomaly detection (AD) has received continuous attention for several decades due to its wide range of applications such as defect detection, autonomous driving, video surveillance, and medical diagnosis. It is usually formulated as an unsupervised problem because of the scarcity of anomalous data. In recent years, vast efforts have been dedicated to developing unsupervised anomaly detectors for images, and tremendous progress has been made (Rudolph, Wandt, and Rosenhahn 2021; Roth et al. 2022; Li et al. 2021; Zavrtanik, Kristan, and Skočaj 2021; Hou et al. 2021; Deng and Li 2022), where embedding-based methods, synthesis, and reconstruction are the dominant trends for this task. Embedding-based methods (Rudolph, Wandt, and Rosenhahn 2021; Roth et al. 2022) characterize the distribution of the extracted features, and anomalies are detected by measuring the distance between the features of test images and the estimated distribution.

[Figure 1: Illustration of different multi-modal anomaly detectors and corresponding anomaly maps (last row). Left: Reverse distillation. Middle: Two-stream structure with late fusion. Right: Our proposed paradigm.]
The synthesis-based methods (Li et al. 2021; Zavrtanik, Kristan, and Skočaj 2021) estimate the decision boundary between anomaly-free samples and synthetically generated anomalous data for detection. Contrarily, reconstruction-based methods (Hou et al. 2021; Deng and Li 2022) either recover the input (Hou et al. 2021) or restore middle-level features (Deng and Li 2022), as shown in Fig. 1-Left, where the pixel-wise similarity indicates the anomalies.

However, extensive investigations in Invest3D (Horwitz and Hoshen 2022) show that some anomalies are hard to detect in RGB images. This has motivated a few 3D-based methods that deal directly with 3D data for anomaly detection. For instance, Invest3D extracts orientation-invariant 3D features via the FPFH (Rusu, Blodow, and Beetz 2009) operator and adopts PatchCore (Roth et al. 2022) for detection, and 3D-ST (Bergmann and Sattlegger 2023) extends the 2D teacher-student network to anomaly-free point clouds. Nevertheless, these methods usually produce inferior results to their RGB-based counterparts due to the complexity of 3D data. To improve effectiveness, recent works (Rudolph et al. 2023; Bonfiglioli et al. 2022; Wang et al. 2023) tend to utilize multiple modalities for AD, the necessity of which is illustrated in Fig. 2: the hole in the "cookies" and the protrusion on the "lollipop" are imperceptible in RGB images but can be detected using depth and surface normals as the auxiliary modality. Besides, the extra modalities also provide supplementary visual information that reduces the misidentification of anomaly-free areas.

[Figure 2: First row: normal samples. Second row: defective samples (MVTec 3D-AD: RGB image and depth; Eyecandies: RGB image and normals). Depth and normals provide supplementary visual information to RGB images for revealing anomalies and reducing misidentification of anomaly-free areas.]

Among those methods, an autoencoder (Bonfiglioli et al. 2022) is used to reconstruct the concatenation of RGB and depth images, and M3DM (Wang et al. 2023) extends the 2D PatchCore with complicated networks to deal with different modalities and lately fuses them for multi-modal AD. However, knowledge distillation (KD), one of the mainstream approaches in 2D AD, has not been explored in this setting. A natural question is: how can we develop an efficient KD paradigm from a multi-modal perspective?

This paper answers it in the context of Reverse Distillation (RD) (Deng and Li 2022) and presents a novel Multi-Modal Reverse Distillation (MMRD) paradigm for multi-modal anomaly detection. The main idea is to integrate information from the auxiliary modality into the frozen teacher encoder and the learnable student decoder at multiple feature levels (Fig. 1-Right). The resulting multi-modal teacher encodes supplementary information from the auxiliary modality via a siamese structure and parameter-freely fuses the RGB features with it to form the multi-modal targets of distillation. The multi-modal student instead learns modality-related priors from the normal data during training and interactively produces multi-modal representations to restore those targets. Consequently, the proposed MMRD achieves state-of-the-art results on two multi-modal AD benchmarks. What's more, it is not only flexible, handling images, depth, and surface normals, but also generalizable to another distillation paradigm, i.e., forward distillation (Bergmann et al. 2020).
To sum up, our main contributions are fourfold:
• We develop a novel reverse distillation paradigm, named MMRD, for multi-modal anomaly detection.
• We devise a frozen multi-modal teacher encoder to generate multi-modal distillation targets through a siamese structure and a parameter-free modulation module.
• We design a learnable multi-modal student decoder that restores the representations of the multi-modal teacher by generating multi-modal priors.
• The proposed MMRD achieves state-of-the-art results on two multi-modal anomaly detection benchmarks.

Related Work
Unsupervised Anomaly Detection. Most existing works detect anomalies in RGB images and can be classified into three categories (Xie et al. 2023a): synthesis-based (Li et al. 2021; Zavrtanik, Kristan, and Skočaj 2021), embedding-based (Rudolph, Wandt, and Rosenhahn 2021; Roth et al. 2022; Gu et al. 2023; Xie et al. 2023b), and reconstruction-based (Hou et al. 2021; Deng and Li 2022; Liang et al. 2023) methods. In contrast, only a limited number of methods perform unsupervised 3D anomaly detection (Liu et al. 2023; Chen et al. 2023a). Grid-VAE (Bengs et al. 2021) adopts a variational AutoEncoder (AE) to reconstruct 3D voxel grids and produces anomaly scores by comparing each voxel element of the input to its reconstruction. 3D-ST (Bergmann and Sattlegger 2023) adapts the 2D student-teacher framework to detect geometric anomalies in high-resolution 3D point clouds. However, these methods do not perform well on the challenging MVTec 3D-AD (Bergmann et al. 2022) benchmark due to the complexity of 3D data, and new methods are needed. Recent efforts tend to combine different modalities for better anomaly detection. AST (Rudolph et al. 2023) proposes an asymmetric student-teacher network to deal with the concatenation of image features and depth maps. Eyecandy (Bonfiglioli et al. 2022) directly concatenates different modalities along the channel dimension as the input and reconstructs it via an AE. M3DM (Wang et al. 2023) uses a two-stream structure to extract features from different modalities and lately fuses them for AD. Two aspects distinguish our method from the above ones: 1) we develop a novel multi-modal reverse distillation paradigm, and 2) we integrate features of different modalities at multiple feature levels.

Knowledge Distillation. Knowledge distillation (Hinton, Vinyals, and Dean 2015; Gou et al. 2021, 2022) was originally used to transfer knowledge from a heavy teacher model to a lightweight student network and has achieved prominent progress in many fields. In AD, the student tends to fail to reconstruct the features of the teacher for anomalous samples, an insight that is used to localize anomalies. US (Bergmann et al. 2020) first introduced KD for the task. Later, forward distillation (Salehi et al. 2021; Wang et al. 2021) formed the Student-Teacher (S-T) feature pyramid and performed multi-scale feature distillation, exploiting the differences between multi-scale features for localization. However, RD (Deng and Li 2022) argues that similar structures between the S-T pair harm the feature diversity, and thus its student is built on top of the teacher. All these methods have difficulty handling anomalies that are invisible in RGB images. Instead, for the first time, we explore KD to deal with these anomalies from a multi-modal perspective.

Multi-Modal Fusion. Different modalities contain supplementary information, and fusing them is beneficial for better understanding visual scenes compared to methods with one modality as input
(Liu et al. 2022; Zhang et al. 2023b; Chen, Han, and Zhang 2023; Chen et al. 2023b; Zhang et al. 2023a). CEN (Wang et al. 2020a) dynamically exchanges channels between sub-networks for fusion based on the scaling factor of batch normalization. AsymFusion (Wang et al. 2020b) performs asymmetric shuffle and shift operations to exchange information between multi-modal features. MGAF (Kim, Jones, and Hager 2021) fuses motion features with detection features via cross-attention (Wang et al. 2018). In KD for multi-modal AD, we not only perform parameter-free modality modulation to form distillation targets in the teacher but also generate multi-modal representations to help the student better restore these targets.

[Figure 3: Overview of the proposed multi-modal reverse distillation (MMRD). It comprises a frozen multi-modal teacher encoder and a learnable multi-modal student decoder, each containing two important components. At the i-th stage, the teacher adopts a siamese encoder $E_i$ with frozen convolutions and individual BNs to extract supplementary visual information, i.e., $F^R_i$ and $F^A_i$, from the RGB image and the auxiliary modality. A parameter-free modality modulation module then fuses them and produces the distillation target $F^T_i$. The student instead generates modality-related priors, i.e., $\hat{F}^R_i$ and $\hat{F}^A_i$, by learning prototypes, i.e., $P^R_i$ and $P^A_i$, from the teacher representations of normal data, i.e., $F^R_i$ and $F^A_i$, and then performs interaction between $\hat{F}^R_i$ and $\hat{F}^A_i$ to generate the multi-modal representation $\bar{F}^R_i$. Finally, $\bar{F}^R_i$ is concatenated with the student representation $F^S_i$ to restore the target $F^T_i$. In inference, the pixel-wise similarity between $\{F^T_i, F^S_i\}_{i=1}^K$ is computed for anomaly detection.]

Proposed Method
This section first revisits knowledge distillation for anomaly detection as preliminaries. Then, the proposed frozen multi-modal teacher encoder and learnable multi-modal student decoder are presented one by one. The overall paradigm is shown in Fig. 3, and an algorithm table summarizing the proposed method is included in the supplementary material.

Preliminaries: Knowledge Distillation for AD
In AD, knowledge distillation (KD) detects anomalies based on RGB images and contains a pre-trained teacher network and a learnable student network. It comes in two types: 1) Forward Distillation (FD) (Bergmann et al. 2020; Wang et al. 2021) and 2) Reverse Distillation (RD) (Deng and Li 2022).
Formally, given an RGB image $I^R \in \mathbb{R}^{C \times H \times W}$ (C, H, and W are the channel, height, and width), the frozen teacher extracts features $\{F^R_i\}_{i=1}^K \in \mathbb{R}^{C_i \times H_i \times W_i}$ (the distillation targets) from its K stages, and the student is trained to restore them, resulting in $\{F^S_i\}_{i=1}^K \in \mathbb{R}^{C_i \times H_i \times W_i}$. The difference is that the student in FD encodes $I^R$, while the student in RD decodes the one-class embedding of the teacher. Finally, a KD loss is used to supervise the reconstruction process:

$\mathcal{L}^{KD}_i = 1 - \frac{\mathrm{flat}(F^R_i)}{\|\mathrm{flat}(F^R_i)\|_2} \cdot \frac{\mathrm{flat}(F^S_i)^T}{\|\mathrm{flat}(F^S_i)\|_2},$  (1)

where $\mathrm{flat}(\cdot)$ is the flatten function. In inference, the pixel-wise cosine similarity between $\{F^R_i, F^S_i\}_{i=1}^K$ is computed to detect and localize anomalies, as shown in Fig. 1-Left. However, it is difficult for KD to detect anomalies invisible in RGB images. To handle this, based on RD, we develop a novel multi-modal reverse distillation paradigm, which contains a frozen multi-modal teacher encoder and a learnable multi-modal student decoder.

Multi-Modal Teacher (MMT) Encoder
For the teacher encoder, we generate multi-modal distillation targets by integrating supplementary information from an auxiliary modality with the RGB image. As illustrated in Fig. 3-Left, we adopt a cross-statistics siamese teacher network to extract this information and a modality modulation module to parameter-freely produce the targets.

Cross-Statistics Siamese Teacher Network. Fig. 2 shows that auxiliary modalities provide supplementary visual information to RGB images for revealing anomalies and reducing the misidentification of anomaly-free areas. To model such supplementarity, we adopt a shared encoder, known as a siamese network, to extract features from the RGB image and the corresponding auxiliary modality, denoted $\{F^R_i, F^A_i\}_{i=1}^K$. Nevertheless, the teacher network in KD is pre-trained on RGB images, so the statistics stored in its Batch Normalization layers (BNs) are shifted for the auxiliary modality. To mitigate this issue, we share the frozen convolutions between both modalities but maintain individual BNs for the auxiliary modality. The relevant statistics in these BNs are updated over several epochs with the parameters of the affine transformation unchanged; the impact is explored in Tab. 2 (b) and visualized in the supplementary material. In practice, we also adopt this strategy for RGB images. As a result, the extracted features are more modality-specific.

Parameter-Free Modality Modulation. Note that since the frozen teacher in KD provides deterministic distillation targets for a given input, the modality fusion should contain no learnable parameters. Besides, as discussed before, the auxiliary modality carries supplementary visual information with respect to the RGB image and is integrated for an auxiliary purpose; therefore, not all information in $F^A_i$ is equally needed. To this end, we propose to estimate a fusion weight for $F^A_i$ that decides how much information needs to be fused and then compensate $F^R_i$ with the selected information in a residual form. Concretely, we first exploit a normalization operation to generate the fusion weight $\alpha^A_i \in \mathbb{R}^{C_i \times H_i \times W_i}$:

$\alpha(F^A_i) = \mathrm{Sigmoid}\left(\frac{(F^A_i - \mu^A_i)^2}{(\sigma^A_i)^2 + 10^{-4}}\right),$  (2)

where $\mu^A_i = \frac{1}{H_i W_i}\sum F^A_i$ and $(\sigma^A_i)^2 = \frac{1}{H_i W_i}\sum (F^A_i - \mu^A_i)^2$. Intuitively, the normalization operation helps reduce the disturbance from modality-specific information and better reflects the position-wise intensity. In practice, we find that $\alpha^A_i$ calculated from the sum of $F^R_i$ and $F^A_i$, denoted $F_i$, performs better than $\alpha(F^A_i)$. This may be because $F_i$ contains more comprehensive information than either modality alone and is thus a better indicator for the fusion weight. We give the visual effects in Fig. 4. Finally, the multi-modal teacher representation (distillation target) $F^T_i$ is formulated as:

$F^T_i = F^R_i + \alpha(F_i) \cdot F^A_i.$  (3)

$\alpha(F_i) \in [0, 1]$ flexibly controls the multi-modal information. Compared to $F^R_i$, the multi-modal $F^T_i$ pays more attention to objects and suppresses the effects of the background, which is investigated in the supplementary material.
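A minimal sketch of the parameter-free modulation in Eqs. (2)–(3) for a single (C, H, W) feature map; taking the statistics over the spatial dimensions per channel is our reading of Eq. (2):

import torch

def modulate(F_R: torch.Tensor, F_A: torch.Tensor) -> torch.Tensor:
    # F_R, F_A: (C, H, W) teacher features of the RGB and auxiliary modality
    F = F_R + F_A                                   # weight computed from the sum F_i
    mu = F.mean(dim=(1, 2), keepdim=True)           # per-channel spatial mean
    var = F.var(dim=(1, 2), unbiased=False, keepdim=True)
    alpha = torch.sigmoid((F - mu).pow(2) / (var + 1e-4))   # Eq. (2)
    return F_R + alpha * F_A                        # Eq. (3): distillation target F_T

F_T = modulate(torch.randn(256, 64, 64), torch.randn(256, 64, 64))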
Analysis. The devised siamese teacher encoder differs from AsymFusion (Wang et al. 2020b) in two respects. First, ours extracts modality-specific features with a frozen architecture, whereas their fully learnable structure encodes multi-modal features in each branch. Second, we parameter-freely fuse the features of each modality to generate multi-modal distillation targets, while they fuse features for further encoding.

Multi-Modal Student (MMS) Decoder
For the student, we incorporate multi-modal prior information to help restore the distillation targets. To this end, we first generate priors for each modality via a modality-related priors generation module and then perform interaction on them to produce multi-modal priors via a multi-modal priors generation module, as shown in Fig. 3-Right.

Modality-Related Priors Generation. In KD, the student is expected to restore the representations of the teacher encoder; introducing information from the teacher to the student is therefore helpful for the reconstruction. We propose to learn a set of representative features (named "prototypes") from the teacher representations of normal training data and to generate modality-related priors that provide finer modal information. The prototypes are learned for both modalities and integrated via feature retrieval to generate priors for each modality. Formally, given the teacher representation of an RGB image $F^R_i \in \mathbb{R}^{C_i \times H_i \times W_i}$ and N prototypes $P^R_i = \{(P^R_i)_j \in \mathbb{R}^{C_i}\}_{j=1}^N$, the position-wise retrieval weight $W^R_i \in \mathbb{R}^{N \times H_i \times W_i}$ is measured as:

$(W^R_i)_{j,h,w} = \frac{\exp(d((F^R_i)_{h,w}, (P^R_i)_j))}{\sum_{j=1}^{N} \exp(d((F^R_i)_{h,w}, (P^R_i)_j))},$  (4)

where $(w, h)$ denotes the spatial index and $d(\cdot, \cdot)$ is the cosine similarity. Aggregating $P^R_i$ with the weights at each location of $W^R_i$ gives the reconstruction result $\hat{F}^R_i$:

$(\hat{F}^R_i)_{w,h} = \sum_j (W^R_i)_{j,h,w} \cdot (P^R_i)_j.$  (5)

To ensure that $P^R_i$ learns representative information, we enforce the similarity between the teacher representation $F^R_i$ and the reconstruction $\hat{F}^R_i$ during training:

$\mathcal{L}^R_i = \frac{1}{HWC} \sum_{h,w,c} \|F^R_i - \hat{F}^R_i\|^2_2.$  (6)

Note that Eq. (6) is applied to all normal training samples; the learned $P^R_i$ therefore contains normal information and is representative enough, which is why we call these features "prototypes". In inference, the teacher representation $F^R_i$ is used to generate the modality-specific priors $\hat{F}^R_i$ via Eq. (5). For the auxiliary modality, we likewise learn a set of N prototypes $P^A_i = \{(P^A_i)_j \in \mathbb{R}^{C_i}\}_{j=1}^N$ via a similar process, producing the loss $\mathcal{L}^A_i$ and the priors $\hat{F}^A_i$.
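A minimal sketch of the prototype retrieval in Eqs. (4)–(6) for one modality; batching and the prototype initialization are our assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorGenerator(nn.Module):
    def __init__(self, C: int, N: int = 50):
        super().__init__()
        self.P = nn.Parameter(torch.randn(N, C))    # N learnable prototypes

    def forward(self, feat: torch.Tensor):
        # feat: (C, H, W) teacher representation of one modality
        C, H, W = feat.shape
        f = feat.permute(1, 2, 0).reshape(-1, C)    # (H*W, C) position-wise features
        sim = F.normalize(f, dim=1) @ F.normalize(self.P, dim=1).T  # cosine d(.,.)
        W_r = torch.softmax(sim, dim=1)             # Eq. (4): (H*W, N) retrieval weights
        rec = W_r @ self.P                          # Eq. (5): weighted prototype sum
        prior = rec.reshape(H, W, C).permute(2, 0, 1)
        loss = F.mse_loss(prior, feat)              # Eq. (6), used during training only
        return prior, loss

prior, loss = PriorGenerator(C=256)(torch.randn(256, 32, 32))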
Multi-Modal Priors Generation. Next, we aim to provide multi-modal prior information for the student to reconstruct the distillation target $F^T_i$. To achieve this, we perform multi-modality interaction between the modality-related $\hat{F}^R_i$ and $\hat{F}^A_i$ to obtain a finer representation. Since the auxiliary modality provides supplementary visual cues and the student is learnable, we use $\hat{F}^A_i$ to enhance $\hat{F}^R_i$ through intra- and inter-modal interaction, as demonstrated in Fig. 3-Right. Specifically, we first apply Channel Attention (CA) (Hu, Shen, and Sun 2018) to $\hat{F}^R_i$ for intra-modal enhancement. Then a Spatial Attention (SA) map of size $\mathbb{R}^{1 \times H_i \times W_i}$ is generated from $\hat{F}^A_i$ via the MaxPooling − Conv3×3 − Sigmoid procedure. Finally, we perform inter-modal interaction by multiplying the enhanced $\hat{F}^R_i$ with the SA map to highlight the locations of interest, resulting in a finer multi-modal representation $\bar{F}^R_i$. The whole multi-modal interaction process can be formulated as:

$\bar{F}^R_i = SA(\hat{F}^A_i) \cdot (CA(\hat{F}^R_i) \cdot \hat{F}^R_i + \hat{F}^R_i) + \hat{F}^R_i.$  (7)
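A minimal sketch of Eq. (7); CA is an SE-style channel attention and SA the MaxPooling−Conv3×3−Sigmoid branch described above, with the reduction ratio being our assumption:

import torch
import torch.nn as nn

class Interaction(nn.Module):
    def __init__(self, C: int, r: int = 16):
        super().__init__()
        self.ca = nn.Sequential(                       # CA: (B, C, 1, 1) channel gates
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(C, C // r, 1),
            nn.ReLU(inplace=True), nn.Conv2d(C // r, C, 1), nn.Sigmoid())
        self.sa = nn.Sequential(                       # SA: Conv3x3 + Sigmoid
            nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, f_r: torch.Tensor, f_a: torch.Tensor) -> torch.Tensor:
        # f_r, f_a: (B, C, H, W) modality-related priors for RGB / auxiliary modality
        intra = self.ca(f_r) * f_r + f_r                      # intra-modal enhancement
        sa = self.sa(f_a.max(dim=1, keepdim=True).values)     # max-pool over channels
        return sa * intra + f_r                               # Eq. (7)

f_bar = Interaction(256)(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))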
(a) Anomaly detection and localization performance on the MVTec 3D-AD dataset (Image-level AUROC / Pixel-level PRO, %).
Categories: Bagel | Cable Gland | Carrot | Cookie | Dowel | Foam | Peach | Potato | Rope | Tire | Mean
3D:
FPFH       82.5/97.3  55.1/87.9  95.2/98.2  79.7/90.6  88.3/89.2  58.2/73.5  75.8/97.7  88.9/98.2  92.9/95.6  65.3/96.1  78.2/92.4
AST        88.1/95.2  57.6/74.1  96.5/97.3  95.7/90.4  67.9/83.0  79.7/83.1  99.0/97.8  91.5/98.1  95.6/89.1  61.1/77.8  83.3/88.6
M3DM       94.1/94.3  65.1/81.8  96.5/97.7  96.9/88.2  90.5/88.1  76.0/74.3  88.0/95.8  97.4/97.4  92.6/95.0  76.5/92.9  87.4/90.6
Ours       82.9/92.6  68.6/80.6  93.7/96.5  80.4/85.8  97.2/90.4  86.5/73.1  94.7/96.2  80.6/95.8  96.7/96.6  84.9/93.6  86.6/90.1
RGB:
PatchCore  87.6/90.1  88.0/94.9  79.1/92.8  68.2/87.7  91.2/89.2  70.1/56.3  69.5/90.4  61.8/93.2  84.1/90.8  70.2/90.6  77.0/87.6
AST        94.7/85.5  92.8/90.5  85.1/80.0  82.5/46.6  98.1/89.4  95.1/52.9  89.5/83.5  61.3/54.4  99.2/87.7  82.1/60.5  88.0/73.1
M3DM       94.4/95.2  91.8/97.2  89.6/97.3  74.9/89.1  95.9/93.2  76.7/84.3  91.9/97.0  64.8/95.6  93.8/96.8  76.7/96.6  85.0/94.2
Ours       98.7/97.0  93.7/98.3  94.3/98.2  77.0/92.4  98.1/97.6  84.7/87.5  91.3/98.1  75.3/97.5  99.3/98.4  85.3/97.3  89.8/96.2
RGB + 3D:
PatchCore  91.8/97.6  74.8/96.9  96.7/97.9  88.3/97.2  93.2/93.3  58.2/88.8  89.6/97.5  91.2/98.1  92.1/95.0  88.6/97.1  86.5/95.9
AST        98.3/97.1  87.3/94.4  97.6/98.1  97.1/93.9  93.2/91.3  88.5/91.4  97.4/98.1  98.1/98.3  100.0/89.0 79.7/94.0  93.7/94.6
M3DM       99.4/97.0  90.9/97.1  97.2/97.9  97.6/95.0  96.0/94.1  94.2/93.2  97.3/97.7  89.9/97.1  97.2/97.1  85.0/97.5  94.5/96.4
Ours       99.9/98.6  94.3/99.0  96.4/99.1  94.3/95.1  99.2/99.0  91.2/90.1  94.9/99.0  90.1/99.0  99.4/98.7  90.1/98.2  95.0/97.6

(b) Anomaly detection and localization performance on the Eyecandies dataset (Image-level AUROC / Pixel-level PRO, %).
Categories: Candy Cane | Chocolate Cookie | Chocolate Praline | Confetto | Gummy Bear | Hazelnut Truffle | Licorice Sandwich | Lollipop | Marshmallow | Peppermint Candy | Mean
3D:
FPFH       69.3/90.4  87.0/92.1  80.6/79.5  92.8/94.7  86.4/87.2  59.7/63.3  90.9/91.8  91.0/91.5  85.0/87.4  89.8/90.7  83.2/86.9
Eyecandy   60.9/87.7  85.3/91.5  82.9/76.7  84.0/95.6  82.8/91.0  56.0/56.9  77.0/88.2  85.6/84.3  91.0/92.3  85.8/87.6  79.1/85.1
M3DM       48.2/91.1  58.9/64.5  80.5/58.1  84.5/74.8  78.0/74.8  53.8/48.8  76.6/60.8  82.7/90.4  80.0/64.6  82.2/75.0  72.5/70.2
Ours       84.4/96.1  94.4/93.6  91.5/91.6  89.4/90.2  87.5/88.4  73.3/67.2  95.4/96.3  93.8/92.6  90.1/93.8  95.0/96.1  89.5/90.6
RGB:
PatchCore  52.5/54.3  95.4/92.8  53.4/60.1  90.7/92.4  64.6/78.2  46.6/55.7  76.2/86.0  68.2/74.5  94.4/95.3  91.5/93.3  73.4/78.3
Eyecandy   52.7/60.7  84.8/90.4  77.2/80.5  73.4/98.2  59.0/87.1  50.8/66.2  69.3/83.6  76.0/80.5  85.1/90.7  73.0/76.2  70.1/81.4
M3DM       64.8/86.7  94.9/90.4  94.1/80.5  100.0/98.2 87.8/87.1  63.2/66.2  93.3/88.2  81.1/89.5  99.8/97.0  100.0/96.2 87.9/88.0
Ours       61.8/91.6  99.5/95.2  86.2/83.3  97.8/98.3  86.1/87.5  65.8/67.2  87.0/88.7  84.0/85.6  97.1/97.6  99.8/98.5  86.5/89.4
RGB + 3D:
PatchCore  44.8/70.9  95.0/93.3  77.9/73.7  92.8/95.2  88.8/90.2  41.6/40.7  91.2/91.9  83.1/86.6  100.0/96.9 96.3/92.9  81.1/84.0
Eyecandy   58.7/85.2  84.6/90.3  80.7/74.1  83.3/93.5  83.3/89.9  54.3/53.6  74.4/86.7  87.0/86.4  94.6/94.5  83.5/84.3  78.4/83.9
M3DM       62.4/90.6  95.8/92.3  95.8/80.3  100.0/98.3 88.6/85.5  78.5/68.8  94.9/88.0  83.6/90.6  100.0/96.6 100.0/95.5 89.7/88.2
Ours       85.4/97.5  100.0/97.0 94.6/94.2  99.8/98.5  90.8/91.7  74.7/68.0  96.6/97.0  98.4/94.1  100.0/99.0 100.0/99.2 94.0/93.6

Table 1: Quantitative results on (a) MVTec 3D-AD and (b) Eyecandies datasets. We report Image-level AUROC (%) ↑ / Pixel-level PRO (%) ↑; methods achieving the best results are highlighted in bold in the original.

Finally, $\bar{F}^R_i$ is concatenated with $F^S_i$ as the input of the student decoder $D_{i-1}$ to restore $F^T_{i-1}$, resulting in $F^S_{i-1}$:

$F^S_{i-1} = D_{i-1}([F^S_i; \bar{F}^R_i]).$  (8)

$F^S_{i-1}$ and $F^T_{i-1}$ are used to compute the distillation loss in Eq. (1) during training and to detect anomalies in inference.

Analysis. We give a theoretical explanation of the scores obtained from the priors. The student is trained to produce anomaly-free features, so anomaly-free areas lie inside the convex combination of the "prototypes". Anomalies consequently fail to lie inside this combination, and the corresponding teacher and student features exhibit a higher reconstruction error; this insight is used for anomaly localization. Fig. 5 verifies the analysis.

Loss Function and Anomaly Detection
Loss Function. It consists of the distillation loss from the K stages and the prototype learning loss of each modality:

$\mathcal{L} = \sum_{i=1}^{K} \mathcal{L}^{KD}_i + \lambda \sum_{i=1}^{K} (\mathcal{L}^R_i + \mathcal{L}^A_i),$  (9)

where K = 3 and λ is the balance factor, set to 0.1 by default.

Anomaly Detection. In inference, the pixel-wise cosine similarity between $\{F^T_i, F^S_i\}_{i=1}^K$ is computed, and a bilinear up-sampling operation $\mathrm{Up}(\cdot)$ is then applied to generate an anomaly map $S_i$. The final anomaly map A is given by:

$A = g\left(\sum_i \mathrm{Up}(1 - d(F^T_i, F^S_i))\right),$  (10)

where $g(\cdot)$ denotes a Gaussian filter (Roth et al. 2022). A gives the localization results, and a larger score indicates a higher probability of anomaly. We simply take its maximum value as the image-level anomaly score.
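A minimal sketch of the inference step in Eq. (10); the Gaussian kernel size and sigma are our assumptions:

import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

def anomaly_map(teacher_feats, student_feats, size=(256, 256)):
    # teacher_feats, student_feats: lists of (1, C_i, H_i, W_i) stage features
    A = torch.zeros(1, 1, *size)
    for f_t, f_s in zip(teacher_feats, student_feats):
        s = 1 - F.cosine_similarity(f_t, f_s, dim=1).unsqueeze(1)  # 1 - d(F_T, F_S)
        A = A + F.interpolate(s, size=size, mode="bilinear", align_corners=False)
    A = GaussianBlur(kernel_size=9, sigma=4.0)(A)   # g(.): Gaussian smoothing
    return A, A.max().item()                        # map + image-level anomaly score

feats_t = [torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32)]
feats_s = [torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32)]
A, score = anomaly_map(feats_t, feats_s)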
Experiments
Experimental Settings
Datasets. We conduct experiments on two multi-modal benchmarks, the MVTec 3D-AD (Bergmann et al. 2022) and Eyecandies (Bonfiglioli et al. 2022) datasets. The former contains 4,147 scans captured from 10 object categories and provides RGB images and Point Clouds (PCs). The latter consists of 10 categories with 1,500 samples per category and provides RGB images, depth maps, and surface normals. Pixel-level annotations are available in both datasets to evaluate anomaly localization performance.

(a) Study on key components (Component: ROC_AD / ROC_AL / PRO)
Baseline     84.0  96.7  88.9
+MMT         91.5  97.2  91.5
+MMT+PG      92.6  97.6  92.0
All          94.0  98.3  93.6

(b) Study on individual BNs (Modality: ROC_AD / ROC_AL / PRO)
None         93.2  97.6  92.8
3D           93.4  97.7  93.0
RGB          93.8  98.2  93.4
All          94.0  98.3  93.6

(c) Study on distillation paradigms (Method: ROC_AD / ROC_AL / PRO)
FD           70.8  85.6  78.0
MMFD         82.5  90.2  84.4
RD           84.0  96.7  88.9
MMRD         94.0  98.3  93.6

(d) Study on modalities (Modality: ROC_AD / ROC_AL / PRO)
Depth        74.5  90.8  86.2
Normals      89.5  96.0  90.6
RGB          86.5  94.5  89.4
+Depth       92.8  97.2  91.3
+Normal      94.0  98.3  93.6
All          94.4  98.8  93.9

(e) Study on fusion strategies (Method: ROC_AD / ROC_AL / PRO)
CSA          91.2  97.9  92.4
SSA          92.5  98.1  91.6
α = 1        93.0  98.1  92.8
α(F_i^A)     93.4  98.2  93.1
α(F_i)       94.0  98.3  93.6
SEM(F_i)     90.2  97.0  92.6

(f) Study on number of prototypes ({N1, N2, N3}: ROC_AD / ROC_AL / PRO)
{0, 0, 0}         91.5  97.2  91.5
{10, 10, 10}      93.4  97.8  92.3
{10, 50, 100}     94.2  98.2  93.0
{50, 50, 50}      94.0  98.3  93.6
{100, 50, 10}     94.0  98.2  93.3
{100, 100, 100}   93.3  98.0  93.1

Table 2: Ablation study on the Eyecandies dataset. "PG", "MMFD", "SEM", "CSA" and "SSA" refer to the modality-related prior generation, multi-modal forward distillation, SE module, channel self-attention, and spatial self-attention, respectively.

Baseline Methods. We compare our method with several state-of-the-art multi-modal detectors: AST (Rudolph et al. 2023) using depth and RGB, M3DM (Wang et al. 2023) using PCs and RGB, PatchCore (Roth et al. 2022) with FPFH (Rusu, Blodow, and Beetz 2009) using PCs and RGB, and Eyecandy (Bonfiglioli et al. 2022) using normals and RGB.

Evaluation Metrics. The Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPR) are used to quantify anomaly detection and localization capacity. The Per-Region Overlap (PRO) is also adopted for localization.

Implementation Details. Images are resized to 256 × 256, and Adam is used as the optimizer with a learning rate of 0.001. The model is trained for 400 epochs with a batch size of 16, and the number of prototypes is set to 50. The teacher network is a pre-trained WideResNet50, and the student is the same as in RD. We adopt depth and normals as the auxiliary modalities for the MVTec 3D-AD and Eyecandies datasets, respectively.

Main Results
Results on the MVTec 3D-AD. Tab. 1 (a) shows the experimental results for anomaly detection using 3D data, RGB images, or their combination on the MVTec 3D-AD dataset; image-level AUROC and pixel-level PRO are reported for all classes. First, we find that, relying solely on RGB images, our method outperforms all 3D-based counterparts in terms of mean values, with improvements of 2.4% on AUROC_AD and 3.8% on PRO_AL. This is likely due to the complexity of 3D data and the limited effort that has gone into exploiting it. However, we also observe that geometric information plays a more important role in detecting anomalies for some objects, e.g., foam and peach (86.5% versus 84.7% on foam, and 94.7% versus 91.3% on peach), since these anomalies are visually imperceptible in the 2D view. Finally, integrating 3D information yields larger improvements.
Main Results

Results on the MVTec 3D-AD. Tab. 1 (a) shows experimental results for anomaly detection using 3D data, RGB images, or their combination on the MVTec 3D-AD dataset. Image-level AUROC and pixel-level PRO for all classes are reported. First, we find that by relying solely on RGB images for detection, our method outperforms all 3D-based counterparts (with improvements of 2.4% on AUROC-AD and 3.8% on PRO-AL) in terms of mean values. This is likely due to the complexity of 3D data and the limited effort put into its development so far. However, it is also observed that geometric information in some targets, e.g., foam and peach, plays a more important role in detecting anomalies (86.5% versus 84.7% on foam, and 94.7% versus 91.3% on peach), since these anomalies are hard to perceive in the 2D view. Finally, integrating 3D information gives larger improvements.

Results on the Eyecandies. The proposed method is also evaluated on the Eyecandies dataset, and image-level AUROC and pixel-level PRO for all classes are reported in Tab. 1 (b). We observe that the overall performance on normals is higher than that on RGB images. This is because the normals describe the geometric shape of the target object, so some geometric anomalies that are hard to perceive in images become visually identifiable, as demonstrated in Fig. 2. Additionally, introducing the normals to images further improves the performance. Compared to methods such as AST and Eyecandy that fuse multiple modalities via concatenation, our strategy performs feature-level fusion, surpassing them by a clear margin.

Method  ROC-AD  ROC-AL  PR-AD  PR-AL  PRO   GPUH/FPS
AST†    93.7    97.5    97.4   33.7   94.6  10.4/41.0
M3DM†   93.6    99.2    97.7   43.9   96.2  12.6/0.10
Ours    95.0    99.2    98.1   42.1   97.6  5.8/10.2

Table 3: More comprehensive results on the MVTec 3D-AD dataset. AD and AL are short for anomaly detection and localization. † means re-implementation. "GPUH" and "FPS" refer to GPU hours and frames per second, respectively.

More comprehensive results. In Tab. 3, our method outperforms AST and M3DM in four out of five AD metrics. Moreover, it consumes less training time, and its inference speed is about 1/4 that of AST and 100× that of M3DM, demonstrating both effectiveness and efficiency.

Ablation Study

Study on key components. We study the effectiveness of the multi-modal teacher (MMT) and two key components of the multi-modal student (MMS), i.e., modality-related Prior Generation (PG) and multi-modal interaction, in Tab. 2 (a). RD is the baseline. Since RGB images contain limited information about geometric anomalies, the baseline yields inferior results. Introducing an auxiliary modality to the teacher brings a large improvement (7.5% ↑ on AUROC-AD and 2.6% ↑ on PRO-AL). For the student, generating modality-related priors from normal samples and conducting multi-modal interaction give improvements of different degrees. Finally, combining them all performs best.

Study on individual BNs in MMT. They are used to learn modality-related statistics for adaptation, and their impact is listed in Tab. 2 (b). Adopting individual BNs benefits both anomaly detection and localization, while applying them to surface normals alone contributes less to the final results than applying them to RGB images, implying that the network may have difficulty further adapting ImageNet pre-trained convolutions to other modalities. Visualizations in the supplementary material show that learning RGB-related information helps the pre-trained convolutions better describe anomalies, resulting in finer multi-modal representations for the teacher.

Study on distillation paradigms. We explore the generalization of our multi-modal strategies to Forward Distillation (FD) and Reverse Distillation (RD), as listed in Tab. 2 (c). How to apply them to FD can be found in the supplementary material. Integrating an auxiliary modality with the RGB data via our strategies gives consistent improvements across distillation paradigms, which implies the flexibility and extensibility of our method.

Study on different modalities. Tab. 2 (d) studies the effects of different modalities; how to extend our method to more modalities can be found in the supplementary material. First, compared to depth, both normals and images provide useful information for AD and thus achieve better results. Second, fusing RGB data with either depth or normals brings significant improvement, with normals yielding the larger gains (6.3% vs. 7.5% on AUROC-AD, 2.7% vs. 3.8% on AUROC-AL, and 1.9% vs. 4.2% on PRO-AL). In contrast, integrating depth on top of images and normals produces limited improvement, since depth introduces little extra information.
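All of the distillation paradigms compared above share a stage-wise feature-matching objective. Eq. (1) is defined earlier in the paper and not restated in this section, so the sketch below uses the pixel-wise cosine-distance form that is standard in reverse distillation (Deng and Li 2022) as an assumed stand-in for the L_KD terms.

```python
import torch
import torch.nn.functional as F

def kd_loss(teacher_feats, student_feats):
    """Stage-wise distillation loss: mean pixel-wise cosine distance.

    Assumed form of the L_KD terms in Eq. (9); teacher_feats and
    student_feats are lists of K feature maps shaped (B, C_i, H_i, W_i).
    """
    loss = 0.0
    for ft, fs in zip(teacher_feats, student_feats):
        # Cosine similarity along the channel dimension, averaged per pixel
        loss = loss + (1.0 - F.cosine_similarity(ft, fs, dim=1)).mean()
    return loss
```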
Study on different fusion strategies for $F^T_i$. In Tab. 2 (e), we explore different ways to generate the multi-modal representation $F^T_i$, including the parameter-free Channel Self-Attention (CSA) and Spatial Self-Attention (SSA) (Wang et al. 2018), and the learnable SE Module (SEM) (Hu, Shen, and Sun 2018). We observe that the absence of learnable transformations in CSA and SSA results in inaccurate attention computation and thus unsatisfactory results. Besides, they can only handle two modalities. Surprisingly, element-wise addition of $F^R_i$ and $F^A_i$ ($\alpha = 1$) outperforms the above strategies. Fusion with the adaptive weight $\alpha$ produces better results still, indicating that not all the information in the auxiliary modality is important. The SEM instead underperforms the vanilla addition; we conjecture that the parameterized SEM produces unstable representations for the teacher.

Study on the number of prototypes. The number of prototypes controls the amount of normal information to be learned for each modality, which is explored in Tab. 2 (f). We find that learning normal information benefits both anomaly detection and localization, and more prototypes lead to better detection with similar AUROC-AL. However, an overly large $N_i$ introduces more parameters and optimization difficulty, resulting in performance drops. For the sake of higher localization results, we adopt $N_i = 50$ by default.

[Figure 4: Visualization of α from different sources and the corresponding detection results (panels: (a) RGB image; (b) α(F^A); (c) anomaly map from α(F^A); (d) α(F); (e) anomaly map from α(F)). Red boxes highlight anomalous areas. α(F) pays attention to anomalous regions and special patterns in RGB, yielding more accurate localization.]

[Figure 5: Visualization of multi-modal priors learned from training data (test images vs. training samples, the auxiliary modality, and results with and without multi-modal priors on normal and anomalous samples). The priors help suppress sensitivity to anomaly-free patterns and give accurate localization capacity.]

Visualization Analysis

Sources for generating the fusion weight α. Note that α in Eq. (3) can also be obtained from $F^A$ alone. To explore the difference, Fig. 4 visualizes α on the depth map together with the corresponding anomaly map on the image. As shown in Fig. 4 (b) and (d), α(F) highlights not only anomalous regions that are visible in the auxiliary modality but also some regions with special patterns in RGB (the chocolate on the "cookie"). In this sense, α(F^A) fails to introduce auxiliary-modality information in special-pattern regions and leads to wrong results in Fig. 4 (c). On the contrary, α(F) enables the model to consult the composite information in special-pattern regions and obtain a more accurate anomaly map in Fig. 4 (e).

How do multi-modal priors work? To investigate this, we visualize their impact in Fig. 5. The multi-modal priors suppress responses to normal patterns in both anomaly-free and anomalous samples, e.g., the chocolate on the "cookie" and the hollow on the "potato". This is mainly because the multi-modal priors contain normal information and are trained to help the student decoder restore anomaly-free features. Therefore, anomalous regions are highlighted and responses to normal patterns are mitigated after calculating the pixel-wise feature similarity between the teacher and student networks.
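To connect this analysis to the architecture, the sketch below illustrates adaptive fusion of the form F^T = F^R + α(F) ⊙ F^A, with α predicted from the joint feature F = [F^R; F^A]. Since Eq. (3) is defined earlier in the paper and not restated in this excerpt, the 1×1-convolution-plus-sigmoid gate shown here is an assumed parameterization, not the authors' exact design.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse RGB and auxiliary features with a learned weight alpha(F)."""

    def __init__(self, channels: int):
        super().__init__()
        # Assumed gate: 1x1 conv over the concatenated feature, then sigmoid
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_rgb: torch.Tensor, f_aux: torch.Tensor) -> torch.Tensor:
        # alpha is computed from F = [F^R; F^A], so the gate can consult both
        # modalities, i.e., the alpha(F) variant favored in Tab. 2 (e) / Fig. 4.
        alpha = self.gate(torch.cat([f_rgb, f_aux], dim=1))
        return f_rgb + alpha * f_aux  # alpha = 1 recovers plain addition
```

Setting the gate's output to a constant one reproduces the "α = 1" baseline of Tab. 2 (e), which makes the comparison between fixed and adaptive fusion directly testable.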
Conclusion

We present a novel MMRD paradigm for anomaly detection, which integrates an auxiliary modality with RGB images for better detection. It uses a frozen multi-modal teacher encoder to generate multi-modal distillation targets for the learnable student decoder to restore. As a result, it achieves superior results on two multi-modal benchmarks.

Acknowledgments

This research is supported in part by the National Natural Science Foundation of China (No. 61972157 and 72192821), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Science and Technology Commission (21511101200), Shanghai Sailing Program (22YF1420300 and 23YF1410500), CCF-Tencent Open Research Fund (RAGR20220121) and Young Elite Scientists Sponsorship Program by CAST (2022QNRC001).

References

Bengs, M.; Behrendt, F.; Krüger, J.; Opfer, R.; and Schlaefer, A. 2021. Three-dimensional deep learning with spatial erasing for unsupervised anomaly segmentation in brain MRI. International Journal of Computer Assisted Radiology and Surgery, 16(9): 1413–1423.

Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2020. Uninformed Students: Student-Teacher Anomaly Detection With Discriminative Latent Embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4182–4191. Computer Vision Foundation / IEEE.

Bergmann, P.; Jin, X.; Sattlegger, D.; and Steger, C. 2022. The MVTec 3D-AD Dataset for Unsupervised 3D Anomaly Detection and Localization. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 202–213. SCITEPRESS.

Bergmann, P.; and Sattlegger, D. 2023. Anomaly Detection in 3D Point Clouds using Deep Geometric Descriptors. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2612–2622. IEEE.

Bonfiglioli, L.; Toschi, M.; Silvestri, D.; Fioraio, N.; and Gregorio, D. D. 2022. The Eyecandies Dataset for Unsupervised Multimodal Anomaly Detection and Localization. In Proceedings of the Asian Conference on Computer Vision, volume 13845, 459–475. Springer.

Chen, R.; Xie, G.; Liu, J.; Wang, J.; Luo, Z.; Wang, J.; and Zheng, F. 2023a. EasyNet: An Easy Network for 3D Industrial Anomaly Detection. In Proceedings of the 31st ACM International Conference on Multimedia, 7038–7046.

Chen, X.; Han, Y.; and Zhang, J. 2023. A Zero-/Few-Shot Anomaly Classification and Segmentation Method for CVPR 2023 VAND Workshop Challenge Tracks 1&2: 1st Place on Zero-shot AD and 4th Place on Few-shot AD. arXiv preprint arXiv:2305.17382.

Chen, X.; Zhang, J.; Tian, G.; He, H.; Zhang, W.; Wang, Y.; Wang, C.; Wu, Y.; and Liu, Y. 2023b. CLIP-AD: A Language-Guided Staged Dual-Path Model for Zero-shot Anomaly Detection. arXiv preprint arXiv:2311.00453.

Deng, H.; and Li, X. 2022. Anomaly Detection via Reverse Distillation from One-Class Embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9727–9736. IEEE.

Gou, J.; Sun, L.; Yu, B.; Du, L.; Ramamohanarao, K.; and Tao, D. 2022. Collaborative Knowledge Distillation via Multiknowledge Transfer. In IEEE Transactions on Neural Networks and Learning Systems. IEEE.

Gou, J.; Yu, B.; Maybank, S. J.; and Tao, D. 2021. Knowledge Distillation: A Survey.
International Journal of Computer Vision, 129(6): 1789–1819.

Gu, Z.; Liu, L.; Chen, X.; Yi, R.; Zhang, J.; Wang, Y.; Wang, C.; Shu, A.; Jiang, G.; and Ma, L. 2023. Remembering Normality: Memory-guided Knowledge Distillation for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16401–16409.

Hinton, G. E.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. CoRR, abs/1503.02531.

Horwitz, E.; and Hoshen, Y. 2022. An Empirical Investigation of 3D Anomaly Detection and Segmentation. CoRR, abs/2203.05550.

Hou, J.; Zhang, Y.; Zhong, Q.; Xie, D.; Pu, S.; and Zhou, H. 2021. Divide-and-Assemble: Learning Block-wise Memory for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8771–8780. IEEE.

Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7132–7141. Computer Vision Foundation / IEEE Computer Society.

Kim, T. S.; Jones, J. D.; and Hager, G. D. 2021. Motion Guided Attention Fusion to Recognize Interactions from Videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13056–13066. IEEE.

Li, C.-L.; Sohn, K.; Yoon, J.; and Pfister, T. 2021. CutPaste: Self-supervised Learning for Anomaly Detection and Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9664–9674.

Liang, Y.; Zhang, J.; Zhao, S.; Wu, R.; Liu, Y.; and Pan, S. 2023. Omni-frequency Channel-selection Representations for Unsupervised Anomaly Detection. IEEE Transactions on Image Processing.

Liu, H.; Liu, H.; Wang, Y.; Sun, F.; and Huang, W. 2022. Fine-grained Multilevel Fusion for Anti-occlusion Monocular 3D Object Detection. IEEE Transactions on Image Processing, 31: 4050–4061.

Liu, J.; Xie, G.; Chen, R.; Li, X.; Wang, J.; Liu, Y.; Wang, C.; and Zheng, F. 2023. Real3D-AD: A Dataset of Point Cloud Anomaly Detection. arXiv preprint arXiv:2309.13226.

Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; and Gehler, P. V. 2022. Towards Total Recall in Industrial Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14298–14308. IEEE.

Rudolph, M.; Wandt, B.; and Rosenhahn, B. 2021. Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1906–1915. IEEE.

Rudolph, M.; Wehrbein, T.; Rosenhahn, B.; and Wandt, B. 2023. Asymmetric Student-Teacher Networks for Industrial Anomaly Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2591–2601. IEEE.

Rusu, R. B.; Blodow, N.; and Beetz, M. 2009. Fast Point Feature Histograms (FPFH) for 3D registration. In 2009 IEEE International Conference on Robotics and Automation, 3212–3217. IEEE.

Salehi, M.; Sadjadi, N.; Baselizadeh, S.; Rohban, M. H.; and Rabiee, H. R. 2021. Multiresolution Knowledge Distillation for Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14902–14912. Computer Vision Foundation / IEEE.

Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention Is All You Need. arXiv:1706.03762.

Wang, G.; Han, S.; Ding, E.; and Huang, D. 2021.
Student-Teacher Feature Pyramid Matching for Anomaly Detection. In British Machine Vision Conference, 306. BMVA Press.

Wang, X.; Girshick, R. B.; Gupta, A.; and He, K. 2018. Non-Local Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7794–7803. Computer Vision Foundation / IEEE Computer Society.

Wang, Y.; Huang, W.; Sun, F.; Xu, T.; Rong, Y.; and Huang, J. 2020a. Deep Multimodal Fusion by Channel Exchanging. In Advances in Neural Information Processing Systems, 4835–4845. MIT.

Wang, Y.; Peng, J.; Zhang, J.; Yi, R.; Wang, Y.; and Wang, C. 2023. Multimodal Industrial Anomaly Detection via Hybrid Fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8032–8041.

Wang, Y.; Sun, F.; Lu, M.; and Yao, A. 2020b. Learning Deep Multimodal Feature Representation with Asymmetric Multi-layer Fusion. In Proceedings of the 28th ACM International Conference on Multimedia, 3902–3910. ACM.

Xie, G.; Wang, J.; Liu, J.; Lyu, J.; Liu, Y.; Wang, C.; Zheng, F.; and Jin, Y. 2023a. IM-IAD: Industrial Image Anomaly Detection Benchmark in Manufacturing. arXiv preprint arXiv:2301.13359.

Xie, G.; Wang, J.; Liu, J.; Zheng, F.; and Jin, Y. 2023b. Pushing the Limits of Few-shot Anomaly Detection in Industry Vision: GraphCore. arXiv preprint arXiv:2301.12082.

Zavrtanik, V.; Kristan, M.; and Skočaj, D. 2021. DRAEM: A Discriminatively Trained Reconstruction Embedding for Surface Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8330–8339.

Zhang, J.; Chen, X.; Xue, Z.; Wang, Y.; Wang, C.; and Liu, Y. 2023a. Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection. arXiv preprint arXiv:2311.02612.

Zhang, J.; Liu, R.; Shi, H.; Yang, K.; Reiß, S.; Peng, K.; Fu, H.; Wang, K.; and Stiefelhagen, R. 2023b. Delivering Arbitrary-Modal Semantic Segmentation. CoRR, abs/2303.01480.
MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment

Yequan Bie1, Luyang Luo1, Hao Chen1,2,3*
1Department of Computer Science and Engineering, Hong Kong University of Science and Technology
2Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology
3HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute
[email protected], [email protected], [email protected]

Abstract

Black-box deep learning approaches have showcased significant potential in the realm of medical image analysis. However, the stringent trustworthiness requirements intrinsic to the medical field have catalyzed research into the utilization of Explainable Artificial Intelligence (XAI), with a particular focus on concept-based methods. Existing concept-based methods predominantly apply concept annotations from a single perspective (e.g., the global level), neglecting the nuanced semantic relationships between sub-regions and concepts embedded within medical images. This leads to underutilization of valuable medical information and may cause models to fall short in harmoniously balancing interpretability and performance when employing inherently interpretable architectures such as Concept Bottlenecks. To mitigate these shortcomings, we propose a multi-modal explainable disease diagnosis framework that meticulously aligns medical images and clinical-related concepts semantically at multiple strata, encompassing the image level, token level, and concept level. Moreover, our method allows for model intervention and offers both textual and visual explanations in terms of human-interpretable concepts. Experimental results on three skin image datasets demonstrate that our method, while preserving model interpretability, attains high performance and label efficiency for concept detection and disease diagnosis. The code is available at https://github.com/Tommy-Bie/MICA.

1 Introduction

Black-box deep learning methods have surfaced as powerful instruments in medical image analysis, offering significant potential to revolutionize healthcare diagnostics and treatments (Kermany et al. 2018; Esteva et al. 2017). These methods excel at handling the extensive and intricate data inherent to the medical field, rendering them suitable for many tasks (Litjens et al. 2017). Despite the encouraging performance, their end-to-end prediction nature leads to a lack of transparency, raising critical issues of trust and interpretability in high-stakes domains like healthcare (Rudin 2019). The healthcare field, with its rigorous demands for trustworthiness, requires models that not only perform well but are also understandable and trustable by practitioners, which necessitates research into Explainable Artificial Intelligence (XAI).

*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

[Figure 1: Our method learns image and concept semantic correspondences at the image, token, and concept levels, illustrated with CAVs and dermoscopic concepts such as blue-whitish veil, regression structures, and dots and globules (e.g., Blue-Whitish Veil: label present, predicted present).]

Within XAI, several approaches have been proposed to explain neural networks using saliency maps (Zhou et al. 2016; Selvaraju et al.
2017) highlighting the contribution of each pixel or region to the model's prediction, while others leverage inspection of the learned features (Abbasi-Asl and Yu 2017), feature interactions (Tsang, Cheng, and Liu 2017), and influence functions (Koh and Liang 2017) to explain the models. However, the reliability of these post-hoc analyses, which offer explanations for a trained AI model, has recently come under considerable scrutiny. Some studies (Laugel et al. 2019; Rudin 2019) have shown that post-hoc explanation techniques often yield inconsistent results across different runs and are sensitive to slight changes in the input, making post-hoc methods misleading, as they could provide explanations that do not accurately reflect the model's decision-making process. Thus, ante-hoc explainable methods have garnered researchers' interest, with a particular emphasis placed on concept-based methods. These methods aim to integrate interpretability into machine learning models by linking their predictions to human-understandable concepts (Koh et al. 2020; Yuksekgonul et al. 2023; Fang et al. 2020; Yan et al. 2023). For example, the Concept Bottleneck Model (CBM) (Koh et al. 2020) first predicts an intermediate set of predefined concepts and then uses these concepts to predict the final output. Yan et al. (2023) introduced a human-in-the-loop framework to eliminate confounding factors and improve model performance. These inherently interpretable methods offer concept-based explanations, which are generally more understandable than post-hoc approaches. However, concept-based methods are not devoid of limitations. A major challenge these methods face is that model performance (e.g., classification accuracy) is sacrificed when designing an explainable architecture like Concept Bottlenecks, compared to black-box methods. We argue this is caused by inefficient usage of valuable medical information, i.e., clinical-related concepts. Most existing methods apply concept annotations at a single level; e.g., Sarkar et al. (2022) only utilize concept labels to supervise the concept predictions for the whole image, neglecting the intricate semantic connections between images' sub-regions and concepts. This narrow focus can limit the models' performance and interpretability, leading to unreliable concept detection and inaccurate diagnosis.

To address the mentioned challenges, we introduce a multi-modal explainable disease diagnosis framework to meticulously align medical images and clinical-related concepts semantically at multiple levels, encompassing the global image level, the regional token level, and the concept subspace level, as shown in Figure 1. Specifically, image-level alignment encourages the model to learn the correspondences between images and concepts from a global perspective. Token-level alignment focuses on the similarity between sub-regions within images and concept tokens using an attention-based mechanism. Concept-level alignment leverages concept activation vectors (CAVs) (Kim et al. 2018) to project the concept-based attention-weighted image representations to a concept subspace and subsequently aligns the aggregated image representations with clinical concepts. It is noteworthy that, since the utilized concepts are human-interpretable, we leverage the knowledge of a medical large language model (LLM) by employing it as a concept encoder to enable the model to comprehend the latent conceptual semantics.
During disease diagnosis, our model detects the concepts before making the decision. In this manner, our method makes full use of concept-based medical semantics through multi-level image-concept joint learning and achieves better performance and interpretability. We summarize our main contributions as follows: (1) We propose MICA, a novel explainable disease diagnosis framework that semantically aligns medical images and clinical concepts at three different levels, i.e., the global image level, the regional token level, and the concept subspace level. (2) To the best of our knowledge, we are the first to encode dermoscopic concepts using a medical LLM. (3) As an ante-hoc explainable framework, our method is capable of performing disease diagnosis and concept detection concurrently while offering both visual and textual explanations. (4) Experimental results on three skin lesion datasets show that our method achieves superior performance and label efficiency, benefiting from the high-quality semantic correlations between images and concepts learned within our framework.

2 Related Works

2.1 XAI & Concept-based Methods

With an increasing number of high-stakes scenarios (e.g., healthcare, finance, law enforcement) requiring trustworthiness, XAI has been gaining traction. One general approach in XAI is post-hoc analysis, which aims to interpret a trained model by fitting explanations to the model outputs, such as LIME (Ribeiro, Singh, and Guestrin 2016), SHAP (Lundberg and Lee 2017), and SENN (Alvarez Melis and Jaakkola 2018). Particularly for CNNs, many researchers focus on saliency visualization (Zhou et al. 2016; Selvaraju et al. 2017; Sundararajan, Taly, and Yan 2017) and activation maximization (Van Den Oord, Kalchbrenner, and Kavukcuoglu 2016; Yosinski et al. 2015; Nguyen et al. 2016). However, post-hoc methods, which typically provide explanations based on pixels, regions, or features of input images, do not genuinely enable medical experts or patients to understand which specific symptoms contribute to the decision process. This premise has sparked researchers' interest in concept-based methods that integrate high-level human-interpretable concepts into the decision process. Several researchers work on automatically discovering concepts (Lang et al. 2021; Yeh et al. 2020), which can reduce the need for concept annotations but may not be suitable for healthcare, since the semantic meanings of discovered concepts can be unclear and unreliable. Concept activation vector (CAV)-like approaches (Kim et al. 2018; Lucieri et al. 2020; Patrício et al. 2023; Yan et al. 2023) train linear classifiers, e.g., SVMs (Cortes and Vapnik 1995), on the model's features to verify whether the representations can separate human-defined concept examples. The inherently interpretable Concept Bottleneck Model (Koh et al. 2020; Patrício et al. 2023; Rigotti et al. 2021) first predicts concepts, then uses the detected concepts to predict task labels. We argue that the CBM is an essential research direction in trustworthy medical image analysis, since it mimics the process wherein medical experts first assess symptoms before diagnosing diseases during clinical treatments.

2.2 Trustworthy Skin Disease Diagnosis

The diagnosis of skin diseases, especially skin cancer, has been a significant research area at the intersection of deep learning and healthcare. Many explanation approaches for skin lesion diagnosis are based on saliency maps (Young et al.
2019; Xiang and Wang 2019) and attention mechanisms (Barata, Celebi, and Marques 2021; Gu et al. 2020). However, considering the stringent demands for model decision interpretability in healthcare (Lipton 2017), some researchers have devoted effort to designing concept-based models built upon the ABCD rule (Nachbar et al. 1994) and the 7-point checklist (Argenziano et al. 1998), which are authoritative criteria established by dermatologists. For instance, Lucieri et al. (2020) predict dermoscopic concepts from a pre-trained network to explain its predictions using TCAV. Coppola et al. (2020) propose to predict dermoscopic features with information sharing between different sub-networks to increase interpretability through multi-task learning. Yan et al. (2023) discover and eliminate confounding concepts within the datasets using spectral relevance analysis (Lapuschkin et al. 2019). CBE (Patrício et al. 2023) uses an extra segmentation module to preprocess images and encourages the feature maps obtained by 1 × 1 convolutional kernels to learn the representations of each dermoscopic concept, then diagnoses diseases in a CBM architecture. However, most existing methods primarily utilize concept annotations from a single perspective and focus on the analysis of dermoscopic images. In contrast, our method semantically aligns skin images and concepts at multiple levels without using extra models to obtain disease masks, and it can be applied to both dermoscopic images and raw, clinical images, e.g., the SkinCon dataset (Daneshjou et al. 2022).

3 Method

3.1 Overall Framework

Figure 2 presents the overall architecture of our multi-level image-concept alignment framework for explainable disease diagnosis.

[Figure 2: The overall pipeline of our proposed framework: an image encoder and a frozen concept encoder feed multi-level image-concept alignment (global image/text representations, cross attention with concept queries, and concept activation vector learning), followed by explainable disease diagnosis over concepts such as pigmentation, dots & globules, and vascular structures, with a textual explanation, e.g., "This image has been identified as Nevus due to the presence of regular dots and globules (42.14%), regular pigmentation (31.19%), despite the presence of atypical pigment network (26.67%)."]

Our method mainly consists of two stages: multi-modal representation learning through image-concept alignment, and explainable disease diagnosis. Specifically, in the first stage, we utilize a CNN-based image encoder and a large language model (LLM)-based concept encoder to extract semantic visual and textual features from the input medical images and the corresponding clinical-related concepts. We then align the images and concepts at three levels, i.e., the image level, the token level, and the concept level, directing the feature extractor to more effectively leverage the correspondences between images and concepts. To elaborate, we employ an image-level alignment module to maximize the similarity of the global representations of correct image-concept pairs versus random pairs. Then, an attention-based token-level image-concept alignment module is proposed to cultivate fine-grained alignments between image sub-regions and concept word tokens.
Moreover, to further refine the image and concept matches established by the first two modules, we introduce a concept-level alignment module based on concept activation vectors (Kim et al. 2018), which maps the aggregated attention-weighted image representation onto the concept subspace and subsequently enhances the match with the concept ground truth. In the second stage, we add a single layer on top of the image encoder trained in the first stage to predict the concepts, and we use the detected concepts to predict the diagnosis through a linear probe. Finally, concept-based explanations, including concept contributions and localization, are generated.

3.2 Multi-Level Image-Concept Alignment

Image-level Image-concept Alignment. To encourage the model to learn global correspondences between images and concepts, we employ an image-level alignment module. Specifically, given a training set $\mathcal{D} = \{(I_1, C_1), (I_2, C_2), \ldots, (I_N, C_N)\}$, where $(I_i, C_i)$ denotes the $i$-th image-concept pair and $N$ denotes the number of training samples, we adopt a CNN-based image encoder $E_I: I \rightarrow V$, which takes an image $I_i$ as input and outputs the global average pooling result of the last layer's feature map, i.e., the visual representation $V_i$. To extract the semantic representations of concepts, we utilize a medical LLM-based concept encoder $E_C: C \rightarrow \{t, T\}$, which outputs the encoded concept text tokens $t_i$ and the aggregated representation $T_i$ given the concept $C_i$. Image and concept representations $F^I$ and $A^I$ of the same dimension are obtained using two projection layers that transform $V$ and $T$ into latent-space embeddings, respectively. Then, the similarity $s_{ij} = (F^I_i)^\top A^I_j$ between image representation $F^I_i$ and concept representation $A^I_j$ is calculated by cosine similarity.
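To make the pairing concrete, a minimal PyTorch sketch of the projection and similarity computation is given below; the embedding dimension and the ResNet-50/BERT feature sizes are illustrative assumptions, not the paper's fixed configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageConceptSimilarity(nn.Module):
    """Project image/concept features into a shared space and compute s_ij."""

    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512):
        super().__init__()
        # Projection layers mapping V (image) and T (concept) to one space
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)

    def forward(self, v, t):
        # L2-normalize so the dot product equals cosine similarity
        f_i = F.normalize(self.img_proj(v), dim=-1)   # (N, embed_dim)
        a_i = F.normalize(self.txt_proj(t), dim=-1)   # (N, embed_dim)
        return f_i @ a_i.t()                          # (N, N) matrix of s_ij
```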
A projection layer is also applied to the features to get the low-dimensional embeddings FT . Similarly, the concept token features t extracted by concept encoder EC are projected into token-level text embeddings AT . We then calculate the concept-based attention-weighted image representation gi using the cross attention between the region-level visual embeddings FT i and the token-level concept embedding AT i. Specifically, we regard AT as queries, FT as keys and values, then the attention weight αij is obtained by calculating the following formulation: αij = exp (FT i)T AT j/τ2  PM k=1 exp ((FT i)T AT k/τ2) , (3) where τ2 is a scaling temperature parameter. Then the concept-based attention weighted image representation gi = PM j=0 αijFT j is the weighted sum of sub-region features. In order to align image and concept representations at the token level, we aggregate the similarity between all W concept features and their corresponding attention-weighted image representations using the token-wise matching function: G(gi, AT i) = log( W X i=1 exp(⟨gi, AT i⟩/τ3))τ3 (4) where τ3 is a scaling temperature parameter. Then, similar to the image-level alignment module, we define the token-level image-concept alignment contrastive losses as: L(v|t) T = N X i=1 −log exp(G(gi, AT i)/τ2) PN j=1 exp(G(gi, AT j)/τ2) , L(t|v) T = N X i=1 −log exp(G(gi, AT i)/τ2) PN j=1 exp(G(gj, AT i)/τ2) (5) The overall objective of the token-level alignment module LTLA is the average of the two symmetrical losses. Concept-level Image-concept Alignment. The ILA and TLA modules encode concepts based on the knowledge of the LLM, yet they come with certain limitations: (1) For the sake of efficiency, the parameters of the LLM are fixed in our framework; (2) We cannot guarantee that all knowledge derived from the LLM is entirely accurate. To alleviate these issues, we design a concept-level image-concept alignment module to fully exploit concept annotations directly sourced from the labels for more effective learning of the correlation and to further refine the alignment of attention-weighted image representations and concepts. To enhance the matching in the concept subspace, we initially make use of concept activation vectors (CAVs) to learn concept representations. Specifically, given concepts C = {c1, c2, ..., cNc}, where ci denotes the i-th concept (e.g., Blue-Whitish Veil), Nc denotes the number of concepts, we first split the concept samples Sc = {P c, N c} into positive concept examples P c = {P c i }Np i=1 and negative concept examples N c = {N c i }Nn i=1, where P c i and N c i are the CNN features of the images that contain or not contain the given concept c, respectively. Np and Nn denote the number of positive and negative examples, respectively. We train an SVM to obtain the classification boundary, which separates features in P c and N c. We learn the corresponding CAV b with weights ωc and bias ϕc, defined as the vector normal to the boundary, which satisfies (ωc)T P c i + ϕc > 0 and (ωc)T N c i + ϕc < 0 for all examples. Given the concept-based attention weighted image representation gi and the CAV b ∈Rd×Nc, where d is the dimension of concept subspace, we project gi onto the concept subspace spanned by concept vectors using b: hi = ⟨gi, b⟩ ||b||2 b, (6) where hi is the projected concept embedding, which can also be regarded as concept scores. 
Then, a cross-entropy loss is applied to estimate the discrepancy between the concept scores and the concept ground truth:

$$\mathcal{L}_{CLA} = -\sum_{i=1}^{N} \big( C_i \log(h_i) + (1 - C_i)\log(1 - h_i) \big). \tag{7}$$

Overall Objective. In the first stage of our method, we introduce Multi-Level Image-Concept Alignment modules to make full use of the concept annotations and jointly learn the correspondences between images and concepts. The overall training objective of the first stage is:

$$\mathcal{L} = \lambda_1 \mathcal{L}_{ILA} + \lambda_2 \mathcal{L}_{TLA} + \lambda_3 \mathcal{L}_{CLA}, \tag{8}$$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are hyperparameters.

Method                Derm7pt AUC / ACC / F1 (%)                  PH2 AUC / ACC / F1 (%)                      SkinCon AUC / ACC / F1 (%)
Sarkar et al. (2022)  76.22 (2.06) / 73.89 (1.47) / 66.81 (1.23)  79.33 (0.62) / 88.00 (3.26) / 79.66 (2.11)  68.21 (1.44) / 71.14 (1.21) / 71.32 (1.38)
PCBM (2023)           72.96 (2.19) / 76.98 (1.39) / 71.04 (1.15)  78.33 (1.17) / 89.33 (1.89) / 81.49 (2.57)  68.94 (1.59) / 71.04 (1.13) / 70.47 (0.75)
PCBM-h (2023)         83.27 (1.14) / 79.89 (0.89) / 74.48 (1.37)  92.32 (1.47) / 90.67 (1.89) / 83.30 (2.55)  69.53 (1.67) / 72.28 (1.39) / 72.28 (1.29)
CBE (2023)            76.60 (0.35) / 83.75 (0.26) / 78.13 (0.44)  97.50 (0.00) / 96.00 (0.00) / 93.89 (0.00)  72.75 (1.15) / 73.75 (1.10) / 73.56 (1.31)
MICA (w bot)          84.11 (1.10) / 82.20 (1.31) / 78.08 (1.22)  97.66 (1.24) / 96.00 (3.26) / 94.40 (1.48)  75.89 (1.11) / 74.29 (1.09) / 74.74 (1.21)
MICA (w/o bot)        85.59 (1.11) / 83.94 (0.99) / 79.38 (1.34)  98.18 (1.43) / 98.67 (1.89) / 95.34 (1.17)  75.92 (1.13) / 75.63 (1.07) / 75.43 (1.24)

Table 1: Quantitative comparison on disease diagnosis with concept-based state-of-the-art methods. Performance is reported as mean (std) over three random runs. Our method is highlighted, where "w bot" and "w/o bot" denote with and without the concept bottleneck, respectively. The best and second-best results are highlighted in bold and underlined, respectively.

3.3 Explainable Disease Diagnosis

As illustrated in Figure 2, the second stage of our method performs explainable disease diagnosis. In order to mimic the process wherein medical experts first assess symptoms before making a final diagnosis during clinical treatment, we propose an explainable decision module with a concept bottleneck. Given the image encoder that learned the image-concept association in the first stage, we first utilize it to predict the clinical concepts in the images, i.e., detecting the presence, absence, and types of various symptoms within the images. Then, we apply a linear predictor that maps the concept subspace to the disease prediction based on the detected concepts. It is worth noting that the linear predictor is highly interpretable, because its decision is based on the detected clinical concepts, which is consistent with how human medical experts operate. In addition, the weight matrix of the linear predictor denotes the importance of each concept to the final decision. Moreover, a human medical expert can easily edit the predictor to obtain a more reliable diagnosis when observing a wrong or counter-intuitive phenomenon. Specifically, given the image representation $v_i$ of an input image $I_i$ encoded by the frozen image encoder $E_I$, we first detect the concepts and obtain the concept scores through an FC layer $f_c$; disease classification is then performed on the detected concepts through a linear layer $f_d$. We jointly train both the concept detection and disease classification layers through the following objective function:

$$\hat{f}_c, \hat{f}_d = \arg\min_{f_c, f_d} \sum_{i=1}^{N} \big[ \mathrm{CE}(f_c(v_i), C_i) + \beta\, \mathrm{CE}(f_d(f_c(v_i)), y_i) \big], \tag{9}$$

where $\mathrm{CE}(\cdot)$ is the cross-entropy loss, $y_i$ is the diagnosis label of the $i$-th image, and $\beta$ is a hyperparameter that balances concept detection and disease diagnosis.
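A compact sketch of this two-head design and the joint objective of Eq. (9) is shown below; the layer sizes and the use of a multi-label BCE loss for the concept term are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    """Concept detection layer f_c followed by a linear disease predictor f_d."""

    def __init__(self, feat_dim=2048, num_concepts=7, num_classes=2):
        super().__init__()
        self.f_c = nn.Linear(feat_dim, num_concepts)      # concept scores
        self.f_d = nn.Linear(num_concepts, num_classes)   # interpretable probe

    def forward(self, v):
        concepts = self.f_c(v)           # detected clinical concepts
        diagnosis = self.f_d(concepts)   # decision based only on concepts
        return concepts, diagnosis

def joint_loss(concepts, diagnosis, c_true, y_true, beta=1.0):
    """Eq. (9) sketch: concept CE + beta * diagnosis CE."""
    concept_loss = nn.functional.binary_cross_entropy_with_logits(
        concepts, c_true.float())        # multi-label concept supervision
    diag_loss = nn.functional.cross_entropy(diagnosis, y_true)
    return concept_loss + beta * diag_loss
```

Because f_d acts only on the concept scores, its weight matrix directly exposes each concept's contribution to the decision, which is what makes expert editing of the predictor straightforward.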
Discussion. What happens when we directly perform disease diagnosis using the trained image encoder without a concept bottleneck? Ideally, we would like to achieve higher performance while retaining the explainability of concept-based methods. Thus, we simply apply a classification head on top of the image encoder to predict diagnosis labels. Since we only use the concept annotations when training the image encoder, this can be regarded as our model leveraging concept knowledge to perform disease diagnosis, which is still an emulation of the image → concept → diagnosis decision-making process.

4 Experiments

4.1 Experimental Setup

Datasets: Derm7pt (Kawahara et al. 2018) is a dermoscopic image dataset that contains 1,011 images with clinical concepts for melanoma skin lesions in dermatology. We consider all 7 dermoscopic concepts. Following Patrício et al. (2023), we filter the dataset to obtain 827 images of the Nevus and Melanoma classes. Only the dermoscopic images are used. The specific names of the concepts can be found in Section 4.2. PH2 (Mendonça et al. 2013) contains a total of 200 dermoscopic images of melanocytic lesions, including 80 common nevi, 80 atypical nevi, and 40 melanomas. We consider 5 concepts that are also included in the Derm7pt dataset. Following previous work (Patrício et al. 2023), we combine the Common Nevi and Atypical Nevi classes of the PH2 dataset into one global class label called Nevus. SkinCon (Daneshjou et al. 2022) is a skin disease dataset with 3,230 images densely annotated by experts for fine-grained model debugging and analysis. We choose the 22 concepts that have at least 50 images representing the concept from the F17k (Groh et al. 2021) part. The classification categories are malignant, benign, and non-neoplastic. The dataset is split into training, validation, and test sets in the proportions 70%, 15%, and 15%, respectively.

Compared Approaches: Sarkar et al. (2022) design an ante-hoc model where the output of the concept encoder is passed to a decoder that reconstructs the image, encouraging the model to capture the semantic features of the input image. PCBM(-h) (Yuksekgonul et al. 2023) allows transferring concepts from other datasets and designs a residual modeling step to preserve the performance of the CBM. CBE (Patrício et al. 2023) proposes a coherence loss to improve the visual coherence of concept activations. CAV (Lucieri et al. 2020) uses concept activation vectors to perform a detailed concept analysis for skin tumor classification.

Dataset   Method       AUC (%)  ACC (%)  F1 (%)
Derm7pt   CAV (2020)   73.8     71.2     61.5
          CBE (2023)   72.2     74.1     71.0
          MICA (Ours)  78.6     76.0     72.6
PH2       CAV (2020)   71.2     67.2     66.4
          CBE (2023)   81.3     71.6     70.0
          MICA (Ours)  83.6     75.2     68.4
SkinCon   CAV (2020)   76.5     86.4     60.2
          CBE (2023)   79.3     89.0     62.1
          MICA (Ours)  82.6     91.7     63.8

Table 2: Quantitative results in concept detection.

Dataset   Method  PN    DaG   STR   RS    BWV   PIG   VS    Avg.
Derm7pt   CAV     72.7  69.1  74.3  65.5  74.6  68.6  73.8  71.2
          CBE     78.5  64.5  77.0  71.0  81.1  75.4  73.7  74.1
          MICA    74.4  79.1  71.3  72.8  84.4  68.8  81.3  76.0
PH2       CAV     68.0  60.0  56.0  76.0  76.0  N/A   N/A   67.2
          CBE     64.0  58.0  80.0  80.0  76.0  N/A   N/A   71.6
          MICA    60.0  60.0  80.0  88.0  88.0  N/A   N/A   75.2

Table 3: Comparison of concept detection accuracy (%). PN, DaG, etc. denote the clinical concepts of the corresponding dataset, and Avg. presents the average of the previous columns. The best results are highlighted in bold.

Implementation Details: Our framework uses ResNet-50 (He et al. 2016) as the image encoder. For the concept encoder, we use a BERT encoder (Devlin et al. 2018) with the ClinicalBERT weights (Alsentzer et al. 2019). In the first stage (i.e., multi-level alignment), we train the image encoder using only concept labels. In the second stage (i.e., disease diagnosis), the parameters of the image encoder are fixed, and we only train the classification heads. We adopt the Adam (Kingma and Ba 2014) optimizer with a learning rate of 5e-5 in the first stage and 1e-4 in the second stage. For hyperparameter selection, we use grid search and set $\tau_1 = 0.25$, $\tau_2 = 0.2$, $\tau_3 = 0.1$. We set $\beta = 1$ for the Derm7pt and SkinCon datasets, and $\beta = 0.5$ for the PH2 dataset.
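For concreteness, the snippet below sketches how concept phrases can be encoded with a ClinicalBERT checkpoint via Hugging Face Transformers to obtain the token-level features t and an aggregated representation T used in Section 3.2; the checkpoint name and the mean-pooling aggregation are illustrative assumptions rather than the paper's stated choices.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# "emilyalsentzer/Bio_ClinicalBERT" is the publicly released ClinicalBERT
# checkpoint (Alsentzer et al. 2019); using it here is an assumption about
# the exact weights employed in the paper.
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
encoder = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

@torch.no_grad()
def encode_concepts(concepts):
    """Return token embeddings t and an aggregated representation T."""
    batch = tokenizer(concepts, padding=True, return_tensors="pt")
    out = encoder(**batch)
    t = out.last_hidden_state                    # (B, W, 768) token features
    mask = batch["attention_mask"].unsqueeze(-1)
    T = (t * mask).sum(1) / mask.sum(1)          # mean-pooled summary (assumed)
    return t, T

t, T = encode_concepts(["blue-whitish veil", "atypical pigment network"])
```

In the full framework, these outputs would feed the token-level and image-level projection layers, respectively, while the encoder itself stays frozen.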
4.2 Experimental Results

To demonstrate our method's competitive performance on disease diagnosis and concept detection, we first compare it with other state-of-the-art concept-based approaches on various datasets. We then conduct comprehensive ablation experiments to validate the effectiveness of each module in our method. Finally, we evaluate the explainability of our method using multiple XAI metrics.

Disease Diagnosis. In Table 1, we report the classification comparison results of our method under three metrics (AUROC, Accuracy, and F1 Score) on the considered datasets. MICA outperforms the other methods in overall performance, especially in AUC. As discussed in Section 3.3, the results of MICA without the concept bottleneck are also presented.

ILA  TLA  CLA   Derm7pt AUC_D / AUC_C   PH2 AUC_D / AUC_C   SkinCon AUC_D / AUC_C
✗    ✗    ✗     71.2 / 63.1             88.3 / 70.6         67.4 / 70.7
✓    ✗    ✗     81.1 / 75.5             95.0 / 80.5         72.9 / 78.5
✓    ✓    ✗     82.8 / 77.3             95.6 / 84.5         74.7 / 81.5
✓    ✗    ✓     83.3 / 78.0             96.4 / 82.7         75.1 / 81.8
✗    ✓    ✓     82.9 / 77.7             96.1 / 82.9         75.0 / 82.1
✓    ✓    ✓     84.1 / 78.6             97.7 / 83.6         75.9 / 82.6

Table 4: Ablation study of MICA on disease diagnosis (AUC_D [%]) and concept detection (AUC_C [%]). ILA, TLA, and CLA represent the image-level, token-level, and concept-level alignment modules, respectively.

Concept Detection. Table 2 shows the quantitative results of clinical concept detection. MICA outperforms the other methods on most metrics across the considered datasets. We also report the test classification accuracy of each concept in the Derm7pt and PH2 datasets in Table 3. These two datasets share five common dermoscopic concepts from the Seven-point Checklist (Argenziano et al. 1998), where PN stands for "Pigment Network", DaG for "Dots and Globules", STR for "Streaks", RS for "Regression Structures", BWV for "Blue-Whitish Veil", PIG for "Pigmentation", and VS for "Vascular Structures". The reported test accuracy of each concept is the average over the fine-grained labels belonging to each criterion; for example, PN (Pigment Network) denotes the mean accuracy of the subclasses "Typical Pigment Network" and "Atypical Pigment Network". The "Avg." column reports the mean test accuracy over all concepts.

Ablation Study. As shown in Table 4, our framework benefits from all three alignment modules, i.e., ILA, TLA, and CLA. The ablation results show that without any one of the three modules, the performance of both disease diagnosis and concept detection may suffer. The last configuration in Table 4 demonstrates that our method achieves the best overall performance with all three designed image-concept alignment components.
4.3 Analysis of Explainability

In this section, we evaluate and analyze the explainability of our method. Specifically, inspired by prior research (Hsiao et al. 2021; Guidotti et al. 2018; Rigotti et al. 2021; Jin et al. 2023), we outline and evaluate our framework on multiple essential metrics for XAI techniques, including faithfulness, plausibility (understandability), and efficiency.

[Figure 3: Illustration of our model's faithfulness, plausibility, and understandability. (a)(b) Test-time concept-intervention examples and results (diagnosis accuracy on Derm7pt, PH2, and SkinCon under different intervention thresholds). (c) Examples of visual and textual explanations given skin images from different datasets, e.g., "This image has been identified as Nevus due to the presence of typical pigment network (74.27%) despite the presence of irregular dots and globules (25.73%)." and "This image has been identified as Melanoma due to the presence of ulcer (41.72%), erythema (30.19%) and papule (28.09%)." Correct prediction results are marked in green, while red highlights incorrect predictions.]

Faithfulness. Faithfulness requires that the explanation be consistent with the designed model mechanism and thus reflect the model's decision process (Lakkaraju et al. 2019). In this paper, we employ test-time intervention on concepts to assess faithfulness. At inference time, we first predict the concepts and obtain the corresponding concept scores; we then intervene on a concept by changing its value, and the diagnosis labels are predicted based on the concepts after intervention. Figure 3(a) shows two cases of test-time intervention. In the first case, we replace the concept score of irregular streaks with its ground-truth label, and the diagnosis result subsequently changes from Nevus to Melanoma, which amends the diagnosis prediction. In the second case of Figure 3(a), we set the correctly predicted value of the concept irregular dots and globules to 0, and the model decision changes from Melanoma to Nevus, which is consistent with the dermatologists' findings (Argenziano et al. 1998; Kawahara et al. 2018). We also report test-time intervention results in Figure 3(b), which reflect the change in diagnosis accuracy when we zero out the concept scores above different threshold values. That accuracy decreases as the threshold decreases demonstrates that the predicted concepts faithfully explain the model's decisions.
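The intervention procedure itself amounts to a few lines; the sketch below reuses the hypothetical ConceptBottleneckHead from the earlier snippet and replaces one predicted concept score before re-running the linear predictor.

```python
import torch

@torch.no_grad()
def intervene(head, v, concept_idx, new_value):
    """Test-time concept intervention: edit one concept score, re-predict.

    head: a concept-bottleneck module with f_c / f_d as sketched earlier
    (hypothetical); v: image features; concept_idx / new_value: the edit,
    e.g., the ground-truth label, or 0 to suppress a concept.
    """
    concepts = head.f_c(v)                     # predicted concept scores
    concepts[:, concept_idx] = new_value       # the intervention itself
    return head.f_d(concepts).argmax(dim=-1)   # diagnosis after intervention
```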
Plausibility & Understandability. Plausibility refers to how believable or likely the given explanation seems, given what humans know about the world or the problem domain (Carvalho, Pereira, and Cardoso 2019; Guidotti et al. 2018), while understandability refers to how easily a human user can comprehend the provided explanation without requiring technical knowledge (Jin et al. 2023). In this paper, our model achieves both plausibility and understandability by providing concept-based visual and textual explanations. Figure 3(c) shows examples of the explanations in detail. Given an input image, our method predicts the diagnosis label together with each predicted concept's localization and contribution (in %). The concept localization is generated by visualizing the token-level correspondences with the images. As for the concept contribution, we leverage the softmax result of the last linear layer, which multiplies the concept logits by the corresponding weights. Our framework aggregates the prediction results to obtain textual explanations. It is worth noting that a "despite" (underlined) is used in the case of negative class influence to signal contradiction.

Efficiency. High label efficiency allows an explainable model to be practically deployed in real-world applications without using extra data or annotations. In this paper, we assess our model's label efficiency by using different proportions of training data in the second stage. As shown in Table 5, we report the disease diagnosis results using 100%, 50%, and 10% of the training data.

            Derm7pt          PH2              SkinCon
Num#        34 / 173 / 346   15 / 75 / 150    225 / 1126 / 2252
PCT (%)     10 / 50 / 100    10 / 50 / 100    10 / 50 / 100

Proportion  AUC    ACC       AUC    ACC       AUC    ACC
10%         77.9   76.9      65.7   81.3      70.1   71.5
50%         83.2   81.0      96.5   93.3      73.5   73.2
100%        84.1   82.2      97.7   96.0      75.9   74.3

Table 5: Disease diagnosis performance with different proportions of training data. PCT represents the percentage.

For the Derm7pt and SkinCon datasets, the diagnosis performance does not exhibit a significant decline when only 50% or 10% of the diagnosis labels are used. The obvious decrease on the PH2 dataset with 10% training data is likely because only 15 labels are used to train the classifier. Therefore, our method can achieve competitive diagnosis results using only a small proportion of diagnosis labels, signifying that it encourages the model to learn the correspondences between medical images and clinical-related concepts, thus facilitating disease diagnosis effectively. This experimental observation is faithfully consistent with the doctors' diagnostic process, wherein medical experts make a diagnosis decision based on the detected symptoms.

5 Conclusion

In this paper, we propose MICA, a multi-modal explainable concept-based framework for skin disease diagnosis, which semantically aligns medical images and clinical-related concepts at multiple levels. By thoroughly learning the correspondences between images and concepts at the global image level, regional token level, and concept subspace level, our method outperforms other concept-based models while preserving inherent interpretability and offering both visual and textual explanations. Extensive experiments and explainability analysis conducted on skin image datasets demonstrate that our method simultaneously achieves superior performance, label efficiency, and interpretability.

Acknowledgments

This work was supported by the Hong Kong Innovation and Technology Fund (Project No. ITS/028/21FP), Shenzhen Science and Technology Innovation Committee Fund (Project No. SGDX20210823103201011), and the Project of Hetao Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone (HZQB-KCZYB-2020083).

References

Abbasi-Asl, R.; and Yu, B. 2017. Structural compression of convolutional neural networks. arXiv preprint arXiv:1705.07356.

Alsentzer, E.; Murphy, J. R.; Boag, W.; Weng, W.-H.; Jin, D.; Naumann, T.; and McDermott, M. 2019. Publicly available clinical BERT embeddings. arXiv preprint arXiv:1904.03323.

Alvarez Melis, D.; and Jaakkola, T. 2018. Towards robust interpretability with self-explaining neural networks. Advances in Neural Information Processing Systems, 31.
2024
94
18,784
LGMRec: Local and Global Graph Learning for Multimodal Recommendation Zhiqiang Guo1, Jianjun Li1*, Guohui Li2*, Chaoyang Wang3, Si Shi4, Bin Ruan1 1 School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China 2 School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China 3 Wuhan Digital Engineering Institute, Wuhan, China 4 Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China {zhiqiangguo, jianjunli, guohuili}@hust.edu.cn, [email protected], [email protected], [email protected] Abstract The multimodal recommendation has gradually become the infrastructure of online media platforms, enabling them to provide personalized service to users through a joint modeling of user historical behaviors (e.g., purchases, clicks) and item various modalities (e.g., visual and textual). The majority of existing studies typically focus on utilizing modal features or modal-related graph structure to learn user local interests. Nevertheless, these approaches encounter two limitations: (1) Shared updates of user ID embeddings result in the consequential coupling between collaboration and multimodal signals; (2) Lack of exploration into robust global user interests to alleviate the sparse interaction problems faced by local interest modeling. To address these issues, we propose a novel Local and Global Graph Learning-guided Multimodal Recommender (LGMRec), which jointly models local and global user interests. Specifically, we present a local graph embedding module to independently learn collaborativerelated and modality-related embeddings of users and items with local topological relations. Moreover, a global hypergraph embedding module is designed to capture global user and item embeddings by modeling insightful global dependency relations. The global embeddings acquired within the hypergraph embedding space can then be combined with two decoupled local embeddings to improve the accuracy and robustness of recommendations. Extensive experiments conducted on three benchmark datasets demonstrate the superiority of our LGMRec over various state-of-the-art recommendation baselines, showcasing its effectiveness in modeling both local and global user interests. Introduction With the explosive growth of massive multimedia information (e.g., images, texts, and videos) on online media platforms, such as YouTube and Tiktok, a lot of efforts have been devoted to multimodal recommender systems (MRSs) to assist these platforms in providing personalized services to users. Nowadays, the primary task of MRSs is to design an effective way to integrate item multimodal information into traditional user-item interaction modeling frameworks to capture comprehensive user interests. Some early studies on MRSs adopt either the linear fusion between item modal features and their ID embeddings (He *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ID ID Image Text (c) Local user-item interaction graph (d) Global dependencies color style shape common attributes .... I-V I-T U-ID I-ID Multimodal Model Collaborative Model Optimization (a) Sharing of user ID embeddings (b) Gradient comparison .... 
item visual space Figure 1: Illustrations of (a) sharing of user ID embeddings, (b) the gradient comparison of user ID embeddings updated from different models during training, (c) local user-item interaction graph, and (d) global dependencies between users and attributions. Darker lines indicate greater user interest. and McAuley 2016; Liu, Wu, and Wang 2017; Wei et al. 2021) or the attention mechanism on item modalities (Chen et al. 2017, 2019; Liu et al. 2019) to model representations of users and items. However, The efficacy of these models is somewhat constrained as they only model low-order user-item interactions. The surge of research on graph-based recommendations (Wang et al. 2019; He et al. 2020; Mao et al. 2021; Wu et al. 2021) has sparked a wave of explorations in using graph neural networks (GNN) to enhance multimodal recommendations. These works typically capture higher-order user interests from the user-item graph that integrates multimodal contents (Wei et al. 2019, 2020; Wang et al. 2021; Yi et al. 2022; Tao et al. 2022; Wei et al. 2023), or construct modality-aware auxiliary graph structures to transfer multimodal knowledge into item and user embeddings (Zhang et al. 2021, 2022a; Zhou et al. 2023). Though achieving remarkable progress, existing studies on MRSs still suffer from the following two limitations in modeling user interests. (1) Coupling. Firstly, collaboraThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8454 tion and multimodal information provide different avenues for exploring user interests. In general, collaborative signals emphasize similar user behavior patterns, while modal knowledge is reflected through content similarity. However, prior works (Wei et al. 2019; Yi et al. 2022) often overlook this matter and share user ID embeddings in both collaborative and multimodal modeling modules (red line in Figure 1 (a)) to learn user interests that couple collaborative and multimodal signals. Experimentally, we randomly select two users from the Baby dataset and exhibit the gradient comparison of their ID embeddings (with 64 dimensions) from the collaborative and multimodal modeling modules in Figure 1 (b). In the early stages of training, the ratio of gradients with opposite directions (orange bar) from the two modules in all dimensions exceeds 50% for each user, which demonstrates that collaborative and multimodal signals generally have different guidance for user embedding learning1. Though this ratio slightly decreases as the training continues, the coupling design still restricts stable updates of user embeddings. (2) Locality. Secondly, most existing methods (Tao et al. 2022; Zhou et al. 2023) only learn local user interests from the interaction graph (Figure1 (c)), lacking the exploration of user global interests. Sparse user-item interactions limit their modeling of robust user interests. As shown in Figure1 (d), user global (general) interests are usually related to item attribute labels that do not rely on the local interactions. Specifically, items usually have multiple common attributions from visual space, such as color, style, shape. Users have different interests in various attributes. For example, u1 may like clothes with bright colors, while u2 prefers a simple style. A method that modeling only local interests may recommend the shirt i1 to u2 based on similar behaviors, i.e., same purchases (i2, i3, i4) between u1 and u2. 
But, the global interests of u2 can provide additional guidance, making it more likely to recommend the outerwear i5 with simple style that match u2’s true interests. To address the aforementioned issues, we propose a novel Local and Global Graph Learning-guided Multimodal Recommender (LGMRec), which explores capturing and exploiting both local and global representations of users and items to facilitate multimodal recommendation. Specifically, to address the first limitation, we present the local graph embedding module to independently capture collaborativerelated and modality-related local user interests by performing message propagation on user-item interaction graphs with ID embeddings and modal features, respectively. In view of the many-to-many dependency relationship between attributes and items is similar to that between hyperedges and nodes in hypergraphs, we further consider each implicit attribute as a hyperedge, and present a global hypergraph embedding module to model hypergraph structure dependencies, so as to address the second limitation. Extensive experimental results on three real-world datasets demonstrate that LGMRec surpasses various recommendation baselines significantly, and verify its effectiveness and robustness in 1In fact, approximately 94.26% of users in Baby dataset present such a situation, that is, more than 50% of the embedding dimensions have opposite gradient directions during the training process. modeling local and global user interests. Related Work Graph-based Recommendation The powerful ability of graph neural networks (Kipf and Welling 2016; Hu et al. 2019) in modeling high-order connectivity has greatly promoted the development of recommender systems. Specifically, graph-based recommendation methods model user and item representations by naturally converting the user history interactions into a user-item bipartite graph. Early studies directly inherit the message propagation mechanism of vanilla graph neural network to aggregate high-order neighbor information to represent users and items (Berg, Kipf, and Welling 2017; Ying et al. 2018; Wang et al. 2019). Later, by simplifying the message propagation process, some graphbased recommendation methods further improve recommendation performance (Chen et al. 2020; He et al. 2020; Mao et al. 2021). Additionally, some other methods explore more node dependencies to enhance the representations of users and items (Ma et al. 2019; Sun et al. 2019, 2020a; Li et al. 2022). Later, contrastive learning is also adopted to enhance graph-based recommendations (Lee et al. 2021; Wu et al. 2021; Yu et al. 2022; Lin et al. 2022; Yang et al. 2021; Cai et al. 2023) to construct contrastive views. However, since no modality features are considered, their modeling abilities are limited by sparse interactions. Hypergraph learning for Recommendation By constructing the hyperedge structure containing more than two nodes, hypergraph learning (Feng et al. 2019; Gao et al. 2020) can enhance the generalization ability of the model via capturing complex node dependencies. Some recommendation methods (Ji et al. 2020; Wang et al. 2020; He et al. 2021; Yu et al. 2021; Xia et al. 2021; Zhang et al. 2022b) try to build hypergraph structures and nodehyperedge connections to capture high-order interaction patterns and achieve substantial performance improvements. To further improve performance, several recently developed methods (Xia et al. 
2022; Xia, Huang, and Zhang 2022) combine self-supervised learning and hypergraph learning to model robust user and item representations. For example, HCCF (Xia et al. 2022) enhances collaborative filtering with the hypergraph-guided self-supervised learning paradigm. Different from these works that generate hypergraph dependencies via only collaborative embeddings, our work achieves hypergraph structure learning with the modeling of modality-aware global relations. Multi-modal Recommendation The multi-modal recommendation has become the basic application on online media platforms to provide personalized services to users by analyzing the massive multi-modal information (e.g., images and textual descriptions) and user historical behaviors (e.g., reviews, clicks). Early studies on MRSs usually incorporate multi-modal contents as side information to extend the vanilla CF framework (He and McAuley 2016; Chen, He, and Kan 2016; Gao, Zhang, and Xu 2017; Du et al. 2020) or utilize deep autoencoder to model modal features (Guo et al. 2022; Liu et al. 2022). Inspired by the great success of graph-based recommendation methods (He et al. 2020; The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8455 ... ... ... ... User Textual Visual Item ID Embeddings Modal Features Interaction Matrix 1 0 1 1 0 0 0 0 1 0 0 0 0 0 1 1 0 Adjacency Matrix Layer Combination ... Textual Visual Embeddings hyperedges Modal Features Modal Features Hypergraph Learning Hypergraph Learning ... Collaborative Item Embeddings ... ... Local Graph Embedding Module Global Hypergraph Embedding Module ... ... ... ... ... Fusion and ... ... Prediction ... Nomalize ... ... Nomalize Figure 2: The framework of the proposed LGMRec with visual and textual modalities of items (i.e., m ∈{v, t}). Ma et al. 2019; Mao et al. 2021), some studies directly model user high-order interests on modality-specific interaction graphs (Wei et al. 2019, 2020; Sun et al. 2020b; Du et al. 2022; Kim et al. 2022). For instance, MMGCN (Wei et al. 2019) incorporates modality information into the graph message passing to infer modality-related user preferences. Another line utilizes auxiliary semantic graph structures learned from multimodal features to enhance user or item representations (Wang et al. 2021; Zhang et al. 2021). For example, LATTICE (Zhang et al. 2021) is a representative method that exploits modal content similarity to generate auxiliary latent item semantic relations to promote recommendation. Recently, Some works (Wei et al. 2021; Yi et al. 2022; Tao et al. 2022; Zhang et al. 2022a; Zhou et al. 2023; Wei et al. 2023) introduce contrastive learning into MRSs to model robust user and item representations. However, these methods usually perform message passing along the edges of user-item interactions to obtain local user interests, failing to explore modality-aware comprehensive user interests. Methodology In this section, we first formulate the problem of multimodal recommendation and present the overall framework of our LGMRec, and then introduce each component in detail. Problem Statement and Overview We set the user set as U = {u} and item set as I = {i}. The ID embeddings of each user u ∈U and item i ∈I are denoted as eu ∈Rd and ei ∈Rd, respectively, where d is the embedding dimension. The user-item interactions can be represented as a matrix R ∈R|U|×|I|, in which the element ru,i = 1 if user u interacts with item i, and ru,i = 0 otherwise. 
Based on interaction matrix $\mathbf{R}$, we can construct the user-item interaction graph $\mathcal{G} = \{\mathcal{U} \cup \mathcal{I}, \mathcal{E}\}$, where $\mathcal{E}$ is the edge set built on observed interactions, i.e., a nonzero $r_{u,i}$ corresponds to an edge between user $u$ and item $i$ on the graph $\mathcal{G}$. Further, we incorporate item multimodal contents and denote the original modality feature of item $i$ generated from pre-trained models as $e^m_i \in \mathbb{R}^{d_m}$ under modality $m \in \mathcal{M}$, where $\mathcal{M}$ is the set of modalities and $d_m$ denotes the dimension of modal features. In this work, we consider two mainstream modalities, vision $v$ and text $t$, i.e., $\mathcal{M} = \{v, t\}$. Given the above settings, the multimodal recommendation aims to learn a prediction function to forecast the score $\hat{r}_{u,i}$ of an item $i$ adopted by a user $u$ via jointly modeling user behaviors and multimodal contents. Formally,

$$\hat{r}_{u,i} = \mathrm{PREDICTION}\big(\mathbf{R}, \mathbf{E}_{id}, \{\mathbf{E}^m_i\}_{m \in \mathcal{M}}\big), \quad (1)$$

where $\mathrm{PREDICTION}(\cdot)$ is the prediction function, $\mathbf{E}_{id} = [e_{u_1}, \ldots, e_{u_{|\mathcal{U}|}}, e_{i_1}, \ldots, e_{i_{|\mathcal{I}|}}] \in \mathbb{R}^{(|\mathcal{U}|+|\mathcal{I}|) \times d}$ denotes the ID embedding matrix obtained by stacking all the ID embeddings of users and items, and $\mathbf{E}^m_i = [e^m_{i_1}, \ldots, e^m_{i_{|\mathcal{I}|}}] \in \mathbb{R}^{|\mathcal{I}| \times d_m}$ is the item modal feature matrix under modality $m$.

Overview. As illustrated in Figure 2, the framework of LGMRec consists of three major components: (i) Local graph embedding (LGE) module, which adopts GNN to capture collaborative-related and modality-related user local interests on the user-item interaction graph with ID embeddings and modal features, respectively; (ii) Global hypergraph embedding (GHE) module, which learns the global user and item representations by capturing the global hypergraph structure dependencies from different item modal feature spaces; and (iii) Fusion and prediction module, which fuses both local and global embeddings to predict final user preference scores for items.

Local Graph Embedding (LGE) Module
The LGE module is designed to independently learn the collaborative-related and modality-related user and item representations with local topology structure, avoiding unstable updates of user embeddings and promoting decoupled user interest learning.

Collaborative Graph Embedding (CGE) We first capture the high-order connectivity via message propagation on the user-item interaction graph with ID embeddings. In particular, the collaborative graph propagation function $\mathrm{CGPROG}(\cdot)$ in the $(l+1)$-th layer can be formulated as

$$\mathbf{E}^{l+1} = \mathrm{CGPROG}(\mathbf{E}^{l}) = \big(\mathbf{D}^{-\frac{1}{2}} \mathbf{A} \mathbf{D}^{-\frac{1}{2}}\big) \mathbf{E}^{l}, \quad (2)$$

where the $\mathrm{CGPROG}(\cdot)$ function inherits the lightweight form of the simplified graph convolutional network (Chen et al. 2020; He et al. 2020), $\mathbf{A} \in \mathbb{R}^{(|\mathcal{U}|+|\mathcal{I}|) \times (|\mathcal{U}|+|\mathcal{I}|)}$ is the adjacency matrix constructed from interaction matrix $\mathbf{R}$, and $\mathbf{D}$ is the diagonal degree matrix of $\mathbf{A}$. Each diagonal element $D_{j,j}$ of $\mathbf{D}$ denotes the number of nonzero entries in the $j$-th row vector of matrix $\mathbf{A}$. The initial embedding matrix is set as $\mathbf{E}^0 = \mathbf{E}_{id}$. Then, we adopt the layer combination (He et al. 2020) to integrate all embeddings from hidden layers,

$$\mathbf{E}^{id}_{lge} = \mathrm{LAYERCOMB}\big(\mathbf{E}^0, \mathbf{E}^1, \mathbf{E}^2, \ldots, \mathbf{E}^L\big), \quad (3)$$

where $\mathbf{E}^{id}_{lge} \in \mathbb{R}^{(|\mathcal{U}|+|\mathcal{I}|) \times d}$ is the collaborative-related embedding matrix of users and items with local neighborhood information. We use the mean function to realize $\mathrm{LAYERCOMB}(\cdot)$ for embedding integration.

Modality Graph Embedding (MGE) Considering the semantic differences between modalities, we further independently infer the modality-related embeddings of users and items on the interaction graphs with modal features.
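Both CGE above and MGE below apply the same symmetrically normalized propagation operator $\mathbf{D}^{-\frac{1}{2}} \mathbf{A} \mathbf{D}^{-\frac{1}{2}}$. The following is a minimal NumPy/SciPy sketch of Eqs. (2)-(3) under our own naming; it is an illustration of the propagation scheme, not the authors' released implementation:

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(R):
    """Build D^{-1/2} A D^{-1/2} from a sparse |U| x |I| interaction matrix R."""
    A = sp.bmat([[None, R], [R.T, None]], format="csr")  # bipartite adjacency
    deg = np.asarray(A.sum(axis=1)).ravel().astype(np.float64)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5                      # leave isolated nodes at 0
    return sp.diags(d_inv_sqrt) @ A @ sp.diags(d_inv_sqrt)

def propagate(A_norm, E0, num_layers=2):
    """Eq. (2) repeated for L layers, then Eq. (3) with mean layer combination."""
    layers = [E0]
    for _ in range(num_layers):
        layers.append(A_norm @ layers[-1])
    return np.mean(layers, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R = sp.random(50, 30, density=0.05, format="csr", random_state=0)  # toy interactions
    E0 = rng.normal(size=(50 + 30, 64))                                 # stacked ID embeddings
    E_lge = propagate(normalized_adjacency(R), E0, num_layers=2)
    print(E_lge.shape)  # (80, 64)
```

For MGE, the same propagate function can be reused with $\tilde{\mathbf{E}}^{m,0}$ as input, keeping only the last layer's output instead of the mean, which matches Eq. (6) below.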
The original modal features of items are usually generated from different pre-trained models, e.g., ResNet (He et al. 2016) and BERT (Kenton and Toutanova 2019), and thus have different dimensions in different feature spaces. We therefore project the high-dimensional modal feature $e^m_i$ of each item into a unified embedding space $\mathbb{R}^d$ as

$$\tilde{e}^m_i = \mathrm{TRANSFORM}(e^m_i) = e^m_i \cdot \mathbf{W}_m, \quad (4)$$

where $\tilde{e}^m_i$ is item $i$'s transformed modal feature and $\mathrm{TRANSFORM}(\cdot)$ is a projection function parameterized by a transformation matrix $\mathbf{W}_m \in \mathbb{R}^{d_m \times d}$. Due to the difficulty in obtaining user modal information, existing methods often reuse user ID embeddings as input for modality-specific graphs, resulting in the coupling of collaborative and modal signals. Different from them, we initialize the user modal features by aggregating item modal features,

$$\tilde{e}^m_u = \frac{1}{|\mathcal{N}_u|} \sum_{i \in \mathcal{N}_u} \tilde{e}^m_i, \quad (5)$$

where $\mathcal{N}_u$ denotes the neighbor set of user $u \in \mathcal{U}$ on the user-item interaction graph $\mathcal{G}$. This operation ensures the separate updates of ID embeddings and modal features. Thereafter, we can construct the modal feature matrix $\tilde{\mathbf{E}}^m = [\tilde{e}^m_{u_1}, \ldots, \tilde{e}^m_{u_{|\mathcal{U}|}}, \tilde{e}^m_{i_1}, \ldots, \tilde{e}^m_{i_{|\mathcal{I}|}}] \in \mathbb{R}^{(|\mathcal{U}|+|\mathcal{I}|) \times d}$ as the initial input $\tilde{\mathbf{E}}^{m,0}$ to learn modality-related embeddings by implementing a light graph propagation function $\mathrm{MGPROG}(\cdot)$,

$$\tilde{\mathbf{E}}^{m,k+1} = \mathrm{MGPROG}(\tilde{\mathbf{E}}^{m,k}) = \big(\mathbf{D}^{-\frac{1}{2}} \mathbf{A} \mathbf{D}^{-\frac{1}{2}}\big) \tilde{\mathbf{E}}^{m,k}. \quad (6)$$

Here, we choose the high-order modal embeddings $\tilde{\mathbf{E}}^{m,K}$ in the last $K$-th layer as the modality-related embeddings (i.e., $\mathbf{E}^m_{lge} = \tilde{\mathbf{E}}^{m,K}$) with local modal information.

Global Hypergraph Embedding (GHE) Module
The GHE module is designed to capture the modality-aware global representations of users and items against sparse and noisy user behaviors.

Hypergraph Dependency Constructing Explicit attribute information of item modalities is often unavailable, especially for visual modalities. Hence, we define learnable implicit attribute vectors $\{v^m_a\}_{a=1}^{A}$ ($v^m_a \in \mathbb{R}^{d_m}$) as hyperedge embeddings under modality $m$ to adaptively learn the dependencies between implicit attributes and items/users, where $A$ is the number of hyperedges. Specifically, we obtain hypergraph dependency matrices in the low-dimensional embedding space by

$$\mathbf{H}^m_i = \mathbf{E}^m_i \cdot \mathbf{V}_m^{\top}, \quad \mathbf{H}^m_u = \mathbf{A}_u \cdot \mathbf{H}^m_i, \quad (7)$$

where $\mathbf{H}^m_i \in \mathbb{R}^{|\mathcal{I}| \times A}$ and $\mathbf{H}^m_u \in \mathbb{R}^{|\mathcal{U}| \times A}$ are the item-hyperedge and user-hyperedge dependency matrices, respectively. $\mathbf{E}^m_i$ is the raw item modal feature matrix, $\mathbf{V}_m = [v^m_1, \ldots, v^m_A] \in \mathbb{R}^{A \times d_m}$ is the hyperedge vector matrix, and $\mathbf{A}_u \in \mathbb{R}^{|\mathcal{U}| \times |\mathcal{I}|}$ is the user-related adjacency matrix extracted from $\mathbf{A}$. Intuitively, items with similar modal features are more likely to be connected to the same hyperedge. The user-hyperedge dependencies are indirectly derived through the user-item interactions, which reflects user behavior intention, i.e., the more frequently users interact with items under a certain attribute, the more they may prefer that attribute. To further avoid the negative impact of meaningless relationships, we employ the Gumbel-Softmax reparameterization (Jang, Gu, and Poole 2017) to ensure that an item is attached to only one hyperedge as much as possible,

$$\tilde{h}^m_{i,*} = \mathrm{SOFTMAX}\!\left(\frac{\log \delta - \log(1 - \delta) + h^m_{i,*}}{\tau}\right), \quad (8)$$

where $h^m_{i,*} \in \mathbb{R}^{A}$ is the $i$-th row vector of $\mathbf{H}^m_i$ that reflects the relations between item $i$ and all hyperedges, $\delta \in \mathbb{R}^{A}$ is a noise vector with each value $\delta_j \sim \mathrm{Uniform}(0,1)$, and $\tau$ is a temperature hyperparameter. Afterwards, we can get the augmented item-attribute hypergraph dependency matrix $\tilde{\mathbf{H}}^m_i$. By performing similar operations on $\mathbf{H}^m_u$, we can obtain the augmented user-attribute relation matrix $\tilde{\mathbf{H}}^m_u$.
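As a concrete rendering of Eqs. (7)-(8), the PyTorch sketch below builds the dependency matrices and sharpens them with the paper's logistic-noise softmax. The function names are ours, and clamping the uniform noise away from 0 and 1 is an added numerical-stability assumption:

```python
import torch

def hyperedge_dependencies(E_i, V, A_u, tau=0.2):
    """Eqs. (7)-(8): item-/user-hyperedge dependencies with Gumbel-Softmax.

    E_i : (|I|, d_m) raw item modal features E_i^m
    V   : (A, d_m)   learnable hyperedge vectors V^m
    A_u : (|U|, |I|) user-related adjacency extracted from A
    """
    H_i = E_i @ V.t()   # Eq. (7): item-hyperedge dependencies, shape (|I|, A)
    H_u = A_u @ H_i     # user-hyperedge dependencies via interactions, (|U|, A)

    def gumbel_sharpen(H):
        # Eq. (8): logistic noise log(delta) - log(1 - delta), then a
        # temperature-scaled softmax pushes each row toward one hyperedge.
        delta = torch.rand_like(H).clamp(1e-8, 1 - 1e-8)
        noise = torch.log(delta) - torch.log(1 - delta)
        return torch.softmax((noise + H) / tau, dim=-1)

    return gumbel_sharpen(H_i), gumbel_sharpen(H_u)
```

A small temperature tau makes each row of the returned matrices close to one-hot, which is exactly the "one hyperedge per item as much as possible" behavior described above.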
Hypergraph Message Passing By taking the attribute hyperedge as an intermediate hub, we achieve hypergraph message passing to deliver global information to users and items without being limited by hop distances. Formally,

$$\mathbf{E}^{m,h+1}_i = \mathrm{DROP}(\tilde{\mathbf{H}}^m_i) \cdot \mathrm{DROP}(\tilde{\mathbf{H}}^{m\top}_i) \cdot \mathbf{E}^{m,h}_i, \quad (9)$$

where $\mathbf{E}^{m,h}_i$ is the global embedding matrix of items in the $h$-th hypergraph layer, and $\mathrm{DROP}(\cdot)$ denotes a dropout function. We take the collaborative embedding matrix $\mathbf{E}^{id}_{i,lge}$ of items as the initial global embedding matrix when $h = 0$. Further, we can calculate the global user embedding matrix as

$$\mathbf{E}^{m,h+1}_u = \mathrm{DROP}(\tilde{\mathbf{H}}^m_u) \cdot \mathrm{DROP}(\tilde{\mathbf{H}}^{m\top}_i) \cdot \mathbf{E}^{m,h}_i. \quad (10)$$

In this way, hypergraph message passing explicitly enables global information transfer by taking the item collaborative embeddings and modality-aware hypergraph dependencies as input. Then, we can obtain the global embedding matrix $\mathbf{E}_{ghe}$ by aggregating global embeddings from all modalities,

$$\mathbf{E}_{ghe} = \sum_{m \in \mathcal{M}} \mathbf{E}^{m,H}, \quad \mathbf{E}^{m,H} = [\mathbf{E}^{m,H}_u, \mathbf{E}^{m,H}_i], \quad (11)$$

where $\mathbf{E}^{m,H}_u \in \mathbb{R}^{|\mathcal{U}| \times d}$ and $\mathbf{E}^{m,H}_i \in \mathbb{R}^{|\mathcal{I}| \times d}$ are the global embedding matrices of users and items obtained in the $H$-th hypergraph layer under modality $m$, respectively. To further achieve the robust fusion of global embeddings among different modalities, we develop cross-modal hypergraph contrastive learning to distill the self-supervision signals for global interest consistency. Specifically, we take the global embeddings of users acquired in different modalities as positive pairs and those of different users as negative pairs, and then employ InfoNCE (Gutmann and Hyvärinen 2010) to formally define the user-side hypergraph contrastive loss as

$$\mathcal{L}^u_{HCL} = \sum_{u \in \mathcal{U}} -\log \frac{\exp(s(\mathbf{E}^{v,H}_u, \mathbf{E}^{t,H}_u)/\tau)}{\sum_{u' \in \mathcal{U}} \exp(s(\mathbf{E}^{v,H}_u, \mathbf{E}^{t,H}_{u'})/\tau)}, \quad (12)$$

where $s(\cdot)$ is the cosine similarity function, and $\tau$ is the temperature factor, generally set to 0.2. Note that here we only consider visual and textual modalities, i.e., $m \in \{v, t\}$. Similarly, we can define the item-side cross-modal contrastive loss $\mathcal{L}^i_{HCL}$.

Fusion and Prediction
We acquire the final representations $\mathbf{E}^*$ of users and items by fusing their two types of local embeddings $\mathbf{E}^{id}_{lge}$, $\mathbf{E}^m_{lge}$ and global embeddings $\mathbf{E}_{ghe}$,

$$\mathbf{E}^* = \mathbf{E}^{id}_{lge} + \sum_{m \in \mathcal{M}} \mathrm{NORM}(\mathbf{E}^m_{lge}) + \alpha \cdot \mathrm{NORM}(\mathbf{E}_{ghe}), \quad (13)$$

where $\mathrm{NORM}(\cdot)$ is a normalization function to alleviate the value scale difference among embeddings, and $\alpha$ is an adjustable factor to control the integration of global embeddings. We then use the inner product to calculate the preference score $\hat{r}_{u,i}$ of user $u$ towards item $i$, i.e., $\hat{r}_{u,i} = e^*_u \cdot e^{*\top}_i$. The Bayesian personalized ranking (BPR) loss (Rendle et al. 2012) is employed to optimize model parameters,

$$\mathcal{L}_{BPR} = -\sum_{(u,i^+,i^-) \in \mathcal{R}} \ln \sigma\big(\hat{r}_{u,i^+} - \hat{r}_{u,i^-}\big) + \lambda_1 \|\Theta\|^2_2, \quad (14)$$

where $\mathcal{R} = \{(u,i^+,i^-) \mid (u,i^+) \in \mathcal{G}, (u,i^-) \notin \mathcal{G}\}$ is the set of triples for training, $\sigma(\cdot)$ is the sigmoid function, and $\lambda_1$ and $\Theta$ represent the regularization coefficient and model parameters, respectively. Finally, we integrate the hypergraph contrastive loss with the BPR loss into a unified objective,

$$\mathcal{L} = \mathcal{L}_{BPR} + \lambda_2 \cdot (\mathcal{L}^u_{HCL} + \mathcal{L}^i_{HCL}), \quad (15)$$

where $\lambda_2$ is a hyperparameter for loss term weighting. We minimize the joint objective $\mathcal{L}$ using the Adam optimizer (Kingma and Ba 2014). The weight-decay regularization term is applied over model parameters $\Theta$.
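The two loss terms admit compact implementations. Below is a minimal PyTorch sketch of the user-side contrastive loss of Eq. (12) and the BPR term of Eq. (14) (without the L2 regularizer); the helper names are ours, not taken from the released code:

```python
import torch
import torch.nn.functional as F

def user_hcl_loss(E_v, E_t, tau=0.2):
    """Eq. (12): user-side cross-modal InfoNCE over global embeddings.

    E_v, E_t : (|U|, d) global user embeddings from the visual / textual
    hypergraphs; the aligned row of E_t is the positive for each row of E_v.
    """
    v = F.normalize(E_v, dim=-1)   # cosine similarity s(.) realized as
    t = F.normalize(E_t, dim=-1)   # inner products of normalized vectors
    logits = v @ t.t() / tau       # (|U|, |U|) all-pair similarities
    labels = torch.arange(v.size(0), device=v.device)  # positives on diagonal
    return F.cross_entropy(logits, labels, reduction="sum")

def bpr_loss(r_pos, r_neg):
    """Eq. (14) without the L2 term: -ln sigma(r_pos - r_neg) over triples."""
    return -F.logsigmoid(r_pos - r_neg).sum()
```

The item-side loss of Eq. (12)'s counterpart is obtained by calling user_hcl_loss on the two modalities' global item embeddings, and Eq. (15) is then a weighted sum of the three terms.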
Experiment
Experimental Settings
Datasets To evaluate our proposed model, we conduct comprehensive experiments on three widely used Amazon datasets (McAuley et al. 2015): Baby, Sports and Outdoors, and Clothing Shoes and Jewelry, referred to as Baby, Sports, and Clothing for brevity. We adopt the 5-core setting to filter users and items for each dataset. The three datasets include both visual and textual modal features. In this work, we use the 4096-dimensional original visual features and 384-dimensional original textual features that have been extracted and published in prior work (Zhou et al. 2023). The statistics of the three datasets are summarized in Table 1.

Dataset    #User    #Item    #Interaction   Sparsity
Baby       19,445    7,050    160,792       99.883%
Sports     35,598   18,357    296,337       99.955%
Clothing   39,387   23,033    278,677       99.969%
Table 1: Statistics of the three evaluation datasets.

Evaluation Protocols For each dataset, we randomly split historical interactions into training, validation, and testing sets with an 8 : 1 : 1 ratio. Two widely used protocols are used to evaluate the performance of top-n recommendation: Recall (R@n) and Normalized Discounted Cumulative Gain (He et al. 2015) (N@n). We set n ∈ {10, 20} and report the average results over all users in the testing set.

Parameter Settings For a fair comparison, we optimize all models with the default batch size 2048, learning rate 0.001, and embedding size d = 64. For all graph-based methods, the number L of collaborative graph propagation layers is set to 2. In addition, we initialize the model parameters with the Xavier method (Glorot and Bengio 2010). For our model, the optimal hyperparameters are determined via grid search on the validation set. Specifically, the numbers of modal graph embedding layers and hypergraph embedding layers (K and H) are tuned in {1, 2, 3, 4}. The number A of hyperedges is searched in {1, 2, 4, 8, 16, 32, 64, 128, 256}. The dropout ratio ρ and the adjustment factor α are tuned in {0.1, 0.2, . . . , 1.0}. We search both the weight λ2 of the contrastive loss and the regularization coefficient λ1 in {1e−6, 1e−5, . . . , 0.1}. An early stopping mechanism is adopted, i.e., training stops when R@20 on the validation set does not increase for 20 successive epochs. We implement LGMRec with MMRec (Zhou 2023); the code is available at https://github.com/georgeguo-cn/LGMRec.

Baselines We compare our proposed LGMRec with the following four groups of recommendation baselines: (1) General CF Models: BPR (Rendle et al. 2012); (2) Graph-based Recommenders: LightGCN (He et al. 2020), SGL (Wu et al. 2021), NCL (Lin et al. 2022); (3) Hypergraph-based Recommenders: HCCF (Xia et al. 2022), SHT (Xia, Huang, and Zhang 2022); and (4) Multi-Modal Recommenders: VBPR (He and McAuley 2016), MMGCN (Wei et al. 2019), GRCN (Wei et al.
2020), LATTICE (Zhang et al. 2021), MMGCL (Yi et al. 2022), MICRO (Zhang et al. 2022a), SLMRec (Tao et al. 2022), BM3 (Zhou et al. 2023).

Performance Comparison
The performance comparison of all methods on the three datasets is summarized in Table 2, from which we have the following key observations:

Datasets   Baby                               Sports                             Clothing
Metrics    R@10    R@20    N@10    N@20       R@10    R@20    N@10    N@20       R@10    R@20    N@10    N@20
BPR        0.0379  0.0607  0.0202  0.0261     0.0452  0.0690  0.0252  0.0314     0.0211  0.0315  0.0118  0.0144
LightGCN   0.0464  0.0732  0.0251  0.0320     0.0553  0.0829  0.0307  0.0379     0.0331  0.0514  0.0181  0.0227
SGL        0.0532  0.0820  0.0289  0.0363     0.0620  0.0945  0.0339  0.0423     0.0392  0.0586  0.0216  0.0266
NCL        0.0538  0.0836  0.0292  0.0369     0.0616  0.0940  0.0339  0.0421     0.0410  0.0607  0.0228  0.0275
HCCF       0.0480  0.0756  0.0259  0.0332     0.0573  0.0857  0.0317  0.0394     0.0342  0.0533  0.0187  0.0235
SHT        0.0470  0.0748  0.0256  0.0329     0.0564  0.0838  0.0306  0.0384     0.0345  0.0541  0.0192  0.0243
VBPR       0.0424  0.0663  0.0223  0.0284     0.0556  0.0854  0.0301  0.0378     0.0281  0.0412  0.0158  0.0191
MMGCN      0.0498  0.0749  0.0261  0.0315     0.0582  0.0825  0.0305  0.0382     0.0329  0.0564  0.0219  0.0253
GRCN       0.0531  0.0835  0.0291  0.0370     0.0600  0.0921  0.0324  0.0407     0.0431  0.0664  0.0230  0.0289
LATTICE    0.0536  0.0858  0.0287  0.0370     0.0618  0.0950  0.0337  0.0423     0.0459  0.0702  0.0253  0.0306
MMGCL      0.0522  0.0778  0.0289  0.0355     0.0660  0.0994  0.0362  0.0448     0.0438  0.0669  0.0239  0.0297
MICRO      0.0570  0.0905  0.0310  0.0406     0.0675  0.1026  0.0365  0.0463     0.0496  0.0743  0.0264  0.0332
SLMRec     0.0540  0.0810  0.0296  0.0361     0.0676  0.1007  0.0374  0.0462     0.0452  0.0675  0.0247  0.0303
BM3        0.0538  0.0857  0.0301  0.0378     0.0659  0.0979  0.0354  0.0437     0.0450  0.0669  0.0243  0.0295
LGMRec     0.0644* 0.1002* 0.0349* 0.0440*    0.0720* 0.1068* 0.0390* 0.0480*    0.0555* 0.0828* 0.0302* 0.0371*
Improv.    12.98%  10.72%  12.58%  8.37%      6.51%   4.09%   4.28%   3.67%      11.90%  11.44%  14.39%  1.75%
Table 2: Overall performance of LGMRec and other baselines on three datasets. The best result is in boldface and the second best is underlined. The t-tests validate the significance of performance improvements with p-value ≤ 0.05.

(1) The superiority of LGMRec. LGMRec substantially outperforms all other baselines and achieves promising performance across different datasets. We attribute such significant improvements to: i) the modeling of separated local embeddings that excavates users' decoupled interests; ii) the hypergraph learning that injects modality-related global dependencies into local graph embeddings to mitigate interaction sparsity. (2) The effectiveness of modal features. Introducing knowledge-rich modality information is beneficial for boosting performance. Experimentally, though it only linearly fuses the ID embeddings and modal features of items, VBPR still outperforms its counterpart BPR. By effectively modeling the modal information, the multimodal recommenders (e.g., MMGCN, LATTICE, SLMRec, BM3) with LightGCN as the backbone network generally achieve better results than LightGCN. (3) The effectiveness of hypergraph learning. Hypergraph-based recommenders (i.e., HCCF and SHT) outperform the graph-based CF model LightGCN, suggesting the effectiveness of modeling global dependencies under the hypergraph architecture. Besides, the significant improvement of LGMRec over competitive baselines further demonstrates the potential of hypergraph networks in modeling modality-aware global dependencies.
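For reference, the R@n and N@n protocols reported in Table 2 follow the standard binary-relevance definitions; the sketch below is a self-contained per-user computation with helper names of our own (the paper cites He et al. 2015 for NDCG but does not publish this snippet):

```python
import numpy as np

def recall_at_n(ranked_items, relevant, n):
    """R@n: fraction of a user's held-out items appearing in the top-n list."""
    hits = len(set(ranked_items[:n]) & relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_n(ranked_items, relevant, n):
    """N@n: DCG of the top-n list normalized by the ideal DCG."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked_items[:n]) if item in relevant)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), n)))
    return dcg / idcg if idcg > 0 else 0.0

# Example: relevant = {3, 7}, top-4 ranking [7, 1, 3, 9]
print(recall_at_n([7, 1, 3, 9], {3, 7}, 4))  # 1.0
print(ndcg_at_n([7, 1, 3, 9], {3, 7}, 4))    # ~0.92
```

Averaging these per-user values over the testing set yields the numbers reported in Table 2.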
Ablation Study
We conduct ablation studies to explore the compositional effects of LGMRec.

Components   Baby            Sports          Clothing
Metrics      R@20    N@20    R@20    N@20    R@20    N@20
w/o MM       0.0732  0.0320  0.0829  0.0379  0.0514  0.0227
w/o LGE      0.0806  0.0351  0.0851  0.0392  0.0741  0.0327
w/o CGE      0.0947  0.0423  0.0997  0.0448  0.0807  0.0360
w/o MGE      0.0929  0.0417  0.0988  0.0440  0.0804  0.0357
w/o GHE      0.0972  0.0430  0.1032  0.0468  0.0803  0.0364
w/o HCL      0.0992  0.0434  0.1051  0.0474  0.0812  0.0368
w/ SUID      0.0869  0.0379  0.0895  0.0395  0.0713  0.0307
LGMRec       0.1002  0.0440  0.1068  0.0480  0.0828  0.0371
Table 3: Ablation of different components on LGMRec.

From the results reported in Table 3, we can find: (1) The variant w/o MM without multimodal contents degenerates into LightGCN and achieves the worst performance, indicating that introducing modality features can greatly improve accuracy. (2) Removing either LGE or GHE can cause performance drops of LGMRec, demonstrating the benefits of modeling both local and global user interests. Notably, the variant w/o LGE performs worse than w/o GHE, which indicates that local interests directly related to user behavior are more important, and global interests can serve as a supplement. (3) In local graph embeddings, the variant w/o CGE (with MGE only) achieves better performance than w/o MGE (with CGE only) on all datasets, which reveals the importance of integrating multimodal features into user-item interaction modeling. (4) The variant w/o HCL removes hypergraph contrastive learning and only linearly adds all global embeddings. Its performances indicate that contrastive fusion of global embeddings of different modalities can improve performance by modeling the inter-modal global semantic consistency. (5) The variant w/ SUID that still shares user ID embeddings in both MGE and CGE modules performs worse than LGMRec, verifying the benefits of independently modeling user decoupled interests.

Figure 3: Performance w.r.t. different user interaction sparsity degrees in terms of R@20 on Baby and Sports datasets.
Figure 4: Performances under different settings of two key hyperparameters (A and α) on Clothing datasets.

In-Depth Analysis
Performance with Different Data Sparsity We further study the influence of sparse user interactions by comparing LGMRec with five representative multimodal recommendation baselines: MMGCN, LATTICE, MMGCL, SLMRec, and BM3, on Baby and Sports datasets. Multiple user groups are constructed according to the number of their interactions. For example, the first user group contains users interacting with 0−5 items. From the results in Figure 3, we can observe that: (1) The superior performance of LGMRec is consistent across user groups with different sparsity degrees, revealing the effectiveness of LGMRec in alleviating interaction sparsity by modeling local and global representations. (2) LGMRec achieves more performance gains on sparser user groups.
Specifically, LGMRec realizes 19.95% and 10.83% improvements over the best baseline for the sparsest and densest groups on Baby, respectively, verifying the robustness of LGMRec in dealing with sparser user interactions.

Hyperparameter Analysis Figure 4 reports the impact of two key hyperparameters of LGMRec on the Clothing dataset: Hyperedge number A. From the left figure in Figure 4, we can observe that the performance of LGMRec improves as the number of hyperedges increases, demonstrating the effectiveness of capturing multi-hyperedge global structures, especially for the sparser Clothing dataset. Adjustable weight α. The impact of the weight α for fusing global embeddings is also investigated in Figure 4. We can find that the performance first rises to an optimal value (α = 0.2) and then declines, which suggests that an appropriate α can improve accuracy by properly supplementing global embeddings, but an overly large α may negatively affect performance.

Case Study We qualitatively study the global hypergraph dependencies. Specifically, we randomly select two users u1344 and u4351 with similar global embeddings learned on the Baby dataset. Hypergraph dependencies under visual and textual modalities for the two users and the items they interact with are presented in Figure 5. The four hyperedges (squares) are shaded depending on the user-hyperedge dependency score. Moreover, the interacted items (circles) are arranged below the corresponding hyperedges in order, according to the maximum item-hyperedge dependency score.

Figure 5: Case study of learned global dependencies of two users u1344 and u4351 with four hyperedges on the Baby dataset.

From Figure 5, we can observe that: (1) The user-hyperedge dependencies differ across modalities. For example, the global interests of user u1344 in the visual modality are mainly related to the 4-th attribute hyperedge, while under the textual modality, user u1344 has larger dependency scores with the 3-rd hyperedge. Thus, we conjecture that the four items (i51, i906, i1167, and i2131) closely related to head hyperedges can reflect user u1344's true preferences, while item i4663 attached to the 1-st hyperedge may be a noisy interaction. (2) Although their interacted items are largely non-overlapping, user u4351 and user u1344 still have similar hyperedge dependencies, demonstrating why their global embeddings are similar. The results further reveal that LGMRec can exploit global hypergraph learning to distill similar knowledge of item modal features for performance improvement.

Conclusion
In this work, we proposed a novel model LGMRec for MRSs, which captures and utilizes local embeddings with local topological information and global embeddings with hypergraph dependencies. Specifically, we adopted a local graph embedding module to independently learn collaborative-related and modality-related local user interests. A global hypergraph embedding module is further designed to mine global user interests. Extensive experiments on three datasets demonstrated the superiority of our model over various baselines. For future work, we intend to seek better means of modeling the differences and commonalities among modalities for further performance improvement.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8460 Acknowledgements We would like to thank all anonymous reviewers for their valuable comments. The work was partially supported by the National Key R&D Program of China under Grant No. 2022YFC3802101 and the National Natural Science Foundation of China under Grant No. 62272176. References Berg, R. v. d.; Kipf, T. N.; and Welling, M. 2017. Graph convolutional matrix completion. arXiv preprint arXiv:1706.02263. Cai, X.; Huang, C.; Xia, L.; and Ren, X. 2023. LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation. arXiv preprint arXiv:2302.08191. Chen, J.; Zhang, H.; He, X.; Nie, L.; Liu, W.; and Chua, T.S. 2017. Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention. In Proceedings of SIGIR, 335–344. Chen, L.; Wu, L.; Hong, R.; Zhang, K.; and Wang, M. 2020. Revisiting graph based collaborative filtering: A linear residual graph convolutional network approach. In Proceedings of AAAI, volume 34, 27–34. Chen, T.; He, X.; and Kan, M.-Y. 2016. Context-aware image tweet modelling and recommendation. In Proceedings of ACM MM, 1018–1027. Chen, X.; Chen, H.; Xu, H.; Zhang, Y.; Cao, Y.; Qin, Z.; and Zha, H. 2019. Personalized fashion recommendation with visual explanations based on multimodal attention network: Towards visually explainable recommendation. In Proceedings of SIGIR, 765–774. Du, X.; Wang, X.; He, X.; Li, Z.; Tang, J.; and Chua, T.S. 2020. How to learn item representation for cold-start multimedia recommendation? In Proceedings of ACM MM, 3469–3477. Du, X.; Wu, Z.; Feng, F.; He, X.; and Tang, J. 2022. Invariant Representation Learning for Multimedia Recommendation. In Proceedings of the 30th ACM International Conference on Multimedia, 619–628. Feng, Y.; You, H.; Zhang, Z.; Ji, R.; and Gao, Y. 2019. Hypergraph neural networks. In Proceedings of AAAI, volume 33, 3558–3565. Gao, J.; Zhang, T.; and Xu, C. 2017. A unified personalized video recommendation via dynamic recurrent neural networks. In Proceedings of ACM MM, 127–135. Gao, Y.; Zhang, Z.; Lin, H.; Zhao, X.; Du, S.; and Zou, C. 2020. Hypergraph learning: Methods and practices. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(5): 2548–2566. Glorot, X.; and Bengio, Y. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS, 249–256. Guo, Z.; Li, G.; Li, J.; and Chen, H. 2022. TopicVAE: Topic-aware Disentanglement Representation Learning for Enhanced Recommendation. In Proceedings of the 30th ACM International Conference on Multimedia, 511–520. Gutmann, M.; and Hyv¨arinen, A. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of AISTATS, 297–304. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of CVPR, 770–778. He, L.; Chen, H.; Wang, D.; Jameel, S.; Yu, P.; and Xu, G. 2021. Click-through rate prediction with multi-modal hypergraphs. In Proceedings of CIKM, 690–699. He, R.; and McAuley, J. 2016. VBPR: visual bayesian personalized ranking from implicit feedback. In Proceedings of AAAI, volume 30. He, X.; Chen, T.; Kan, M.-Y.; and Chen, X. 2015. Trirank: Review-aware explainable recommendation by modeling aspects. In Proceedings of CIKM, 1661–1670. He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. 
In Proceedings of SIGIR, 639–648. Hu, F.; Zhu, Y.; Wu, S.; Wang, L.; and Tan, T. 2019. Hierarchical graph convolutional networks for semi-supervised node classification. arXiv preprint arXiv:1902.06667. Jang, E.; Gu, S.; and Poole, B. 2017. Categorical Reparameterization with Gumbel-Softmax. In Proceedings of ICLR. Ji, S.; Feng, Y.; Ji, R.; Zhao, X.; Tang, W.; and Gao, Y. 2020. Dual channel hypergraph collaborative filtering. In Proceedings of SIGKDD, 2020–2029. Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT, 4171–4186. Kim, T.; Lee, Y.-C.; Shin, K.; and Kim, S.-W. 2022. MARIO: Modality-Aware Attention and ModalityPreserving Decoders for Multimedia Recommendation. In Proceedings of CIKM, 993–1002. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Lee, D.; Kang, S.; Ju, H.; Park, C.; and Yu, H. 2021. Bootstrapping user and item representations for one-class collaborative filtering. In Proceedings of SIGIR, 317–326. Li, G.; Guo, Z.; Li, J.; and Wang, C. 2022. MDGCF: Multi-Dependency Graph Collaborative Filtering with Neighborhood-and Homogeneous-level Dependencies. In Proceedings of CIKM, 1094–1103. Lin, Z.; Tian, C.; Hou, Y.; and Zhao, W. X. 2022. Improving graph collaborative filtering with neighborhoodenriched contrastive learning. In Proceedings of WWW, 2320–2329. Liu, F.; Cheng, Z.; Sun, C.; Wang, Y.; Nie, L.; and Kankanhalli, M. 2019. User diverse preference modeling by multimodal attentive metric learning. In Proceedings of the 27th ACM International Conference on Multimedia, 1526–1534. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8461 Liu, Q.; Wu, S.; and Wang, L. 2017. Deepstyle: Learning user preferences for visual recommendation. In Proceedings of SIGIR, 841–844. Liu, X.; Tao, Z.; Shao, J.; Yang, L.; and Huang, X. 2022. EliMRec: Eliminating Single-modal Bias in Multimedia Recommendation. In Proceedings of the 30th ACM International Conference on Multimedia, 687–695. Ma, J.; Cui, P.; Kuang, K.; Wang, X.; and Zhu, W. 2019. Disentangled graph convolutional networks. In Proceedings of ICML, 4212–4221. Mao, K.; Zhu, J.; Xiao, X.; Lu, B.; Wang, Z.; and He, X. 2021. UltraGCN: ultra simplification of graph convolutional networks for recommendation. In Proceedings of CIKM, 1253–1262. McAuley, J.; Targett, C.; Shi, Q.; and Van Den Hengel, A. 2015. Image-based recommendations on styles and substitutes. In Proceedings of SIGIR, 43–52. Rendle, S.; Freudenthaler, C.; Gantner, Z.; and SchmidtThieme, L. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618. Sun, J.; Zhang, Y.; Guo, W.; Guo, H.; Tang, R.; He, X.; Ma, C.; and Coates, M. 2020a. Neighbor interaction aware graph convolution networks for recommendation. In Proceedings of SIGIR, 1289–1298. Sun, J.; Zhang, Y.; Ma, C.; Coates, M.; Guo, H.; Tang, R.; and He, X. 2019. Multi-graph convolution collaborative filtering. In Proceedings of ICDM, 1306–1311. Sun, R.; Cao, X.; Zhao, Y.; Wan, J.; Zhou, K.; Zhang, F.; Wang, Z.; and Zheng, K. 2020b. Multi-modal knowledge graphs for recommender systems. In Proceedings of CIKM, 1405–1414. Tao, Z.; Liu, X.; Xia, Y.; Wang, X.; Yang, L.; Huang, X.; and Chua, T.-S. 2022. Self-supervised learning for multimedia recommendation. 
IEEE Transactions on Multimedia. Wang, J.; Ding, K.; Hong, L.; Liu, H.; and Caverlee, J. 2020. Next-item recommendation with sequential hypergraphs. In Proceedings of SIGIR, 1101–1110. Wang, Q.; Wei, Y.; Yin, J.; Wu, J.; Song, X.; and Nie, L. 2021. Dualgnn: Dual graph neural network for multimedia recommendation. IEEE Transactions on Multimedia. Wang, X.; He, X.; Wang, M.; Feng, F.; and Chua, T.-S. 2019. Neural graph collaborative filtering. In Proceedings of SIGIR, 165–174. Wei, W.; Huang, C.; Xia, L.; and Zhang, C. 2023. MultiModal Self-Supervised Learning for Recommendation. In Proceedings of WWW. Wei, Y.; Wang, X.; Li, Q.; Nie, L.; Li, Y.; Li, X.; and Chua, T.-S. 2021. Contrastive learning for cold-start recommendation. In Proceedings of ACM MM, 5382–5390. Wei, Y.; Wang, X.; Nie, L.; He, X.; and Chua, T.-S. 2020. Graph-refined convolutional network for multimedia recommendation with implicit feedback. In Proceedings of the 28th ACM International Conference on Multimedia, 3541– 3549. Wei, Y.; Wang, X.; Nie, L.; He, X.; Hong, R.; and Chua, T.-S. 2019. MMGCN: Multi-modal graph convolution network for personalized recommendation of micro-video. In Proceedings of the 27th ACM International Conference on Multimedia, 1437–1445. Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; and Xie, X. 2021. Self-supervised graph learning for recommendation. In Proceedings of SIGIR, 726–735. Xia, L.; Huang, C.; Xu, Y.; Zhao, J.; Yin, D.; and Huang, J. 2022. Hypergraph contrastive collaborative filtering. In Proceedings of SIGIR, 70–79. Xia, L.; Huang, C.; and Zhang, C. 2022. Self-supervised hypergraph transformer for recommender systems. In Proceedings of SIGKDD, 2100–2109. Xia, X.; Yin, H.; Yu, J.; Wang, Q.; Cui, L.; and Zhang, X. 2021. Self-supervised hypergraph convolutional networks for session-based recommendation. In Proceedings of AAAI, volume 35, 4503–4511. Yang, Y.; Wu, L.; Hong, R.; Zhang, K.; and Wang, M. 2021. Enhanced graph learning for collaborative filtering via mutual information maximization. In Proceedings of SIGIR, 71–80. Yi, Z.; Wang, X.; Ounis, I.; and Macdonald, C. 2022. Multimodal graph contrastive learning for micro-video recommendation. In Proceedings of SIGIR, 1807–1811. Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W. L.; and Leskovec, J. 2018. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of SIGKDD, 974–983. Yu, J.; Yin, H.; Li, J.; Wang, Q.; Hung, N. Q. V.; and Zhang, X. 2021. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Proceedings of WWW, 413–424. Yu, J.; Yin, H.; Xia, X.; Chen, T.; Cui, L.; and Nguyen, Q. V. H. 2022. Are graph augmentations necessary? simple graph contrastive learning for recommendation. In Proceedings of SIGIR, 1294–1303. Zhang, J.; Zhu, Y.; Liu, Q.; Wu, S.; Wang, S.; and Wang, L. 2021. Mining latent structures for multimedia recommendation. In Proceedings of the 29th ACM International Conference on Multimedia, 3872–3880. Zhang, J.; Zhu, Y.; Liu, Q.; Zhang, M.; Wu, S.; and Wang, L. 2022a. Latent Structure Mining with Contrastive Modality Fusion for Multimedia Recommendation. IEEE Transactions on Knowledge and Data Engineering. Zhang, X.; Xu, B.; Yang, L.; Li, C.; Ma, F.; Liu, H.; and Lin, H. 2022b. Price does matter! modeling price and interest preferences in session-based recommendation. In Proceedings of SIGIR, 1684–1693. Zhou, X. 2023. MMRec: Simplifying Multimodal Recommendation. arXiv preprint arXiv:2302.03497. 
Zhou, X.; Zhou, H.; Liu, Y.; Zeng, Z.; Miao, C.; Wang, P.; You, Y.; and Jiang, F. 2023. Bootstrap Latent Representations for Multi-Modal Recommendation. In Proceedings of WWW, 845–854. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8462
2024
940
18,785
Intra- and Inter-group Optimal Transport for User-Oriented Fairness in Recommender Systems Zhongxuan Han1, Chaochao Chen1, Xiaolin Zheng1*, Meng Li2, Weiming Liu1, Binhui Yao3, Yuyuan Li1, Jianwei Yin1 1College of Computer Science and Technology, Zhejiang University 2Harbin Institute of Technology (Shenzhen) 3Midea {zxhan, zjuccc, xlzheng}@zju.edu.cn, [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Recommender systems are typically biased toward a small group of users, leading to severe unfairness in recommendation performance, i.e., User-Oriented Fairness (UOF) issue. Existing research on UOF exhibits notable limitations in two phases of recommendation models. In the training phase, current methods fail to tackle the root cause of the UOF issue, which lies in the unfair training process between advantaged and disadvantaged users. In the evaluation phase, the current UOF metric lacks the ability to comprehensively evaluate varying cases of unfairness. In this paper, we aim to address the aforementioned limitations and ensure recommendation models treat user groups of varying activity levels equally. In the training phase, we propose a novel Intra- and InterGrOup Optimal Transport framework (II-GOOT) to alleviate the data sparsity problem for disadvantaged users and narrow the training gap between advantaged and disadvantaged users. In the evaluation phase, we introduce a novel metric called 𝜉-UOF, which enables the identification and assessment of various cases of UOF. This helps prevent recommendation models from leading to unfavorable fairness outcomes, where both advantaged and disadvantaged users experience subpar recommendation performance. We conduct extensive experiments on three real-world datasets based on four backbone recommendation models to prove the effectiveness of 𝜉-UOF and the efficiency of our proposed II-GOOT. Introduction Fairness is a critical research field in Machine Learning (ML) (Binns 2018; Dai et al. 2022; Mehrabi et al. 2021; Hutchinson and Mitchell 2019; Verma and Rubin 2018), and is also widely investigated in Recommender Systems (RSs) (Deldjoo et al. 2022; Chen et al. 2023; Han et al. 2023a). RS is a complex field involving frequent interactions between users and items (Su et al. 2023; Zheng et al. 2022b; Li et al. 2022, 2023). Fairness issues commonly arise from both the users’ side (Li et al. 2021; Rahmani et al. 2022) and the items’ side (Dash et al. 2021; Deldjoo et al. 2021a). In this paper, we focus on the fairness issue related to performance disparities among different user groups. *Xiaolin Zheng is the corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) Gradient Distribution UOF ISSUE More Satisfying Recommendation Worst Best : Our : Existing Metric: (b) Different Levels of UOF Figure 1: (a) visualizes the norm (i.e., 𝐿2 −𝑛𝑜𝑟𝑚) of gradients coming from different users in a training epoch of LightGCN (He et al. 2020) in the Amazon Health dataset. (b) shows the best and worst cases of UOF that both share the same objective of facilitating equitable recommendation results for different user groups. Though the worst fairness will lead to dissatisfaction in both user groups, the existing metric treats these two cases as equally favorable. RSs are always biased toward a small group of users, resulting in significant unfairness in the quality of recommendations (Li et al. 2021; Rahmani et al. 2022; Wen et al. 
2022), i.e., the User-Oriented Fairness (UOF) issue. We define the users with more satisfying recommendation results as advantaged users and the other users as disadvantaged users. Existing research has proved that advantaged users constitute only a small proportion of the total user base (Li et al. 2021), since many users suffer from the data sparsity problem (Han et al. 2023b; Zheng et al. 2022a) and fail to receive satisfying recommendation results. Therefore, addressing the UOF issue becomes crucial in RSs to enhance the overall quality of recommendation services.

To date, the relevant work on UOF is quite limited and exhibits notable limitations in both the training phase and the evaluation phase. In the training phase, the existing methods fail to tackle the root cause of the UOF issue. The root of the UOF issue lies in the unfairness of the training process for recommendation models. Since recommendation models are always trained based on the interactions between users and items, we identify the advantaged users and disadvantaged users based on their interaction numbers (Li et al. 2021; Dai et al. 2022) and show the gradient distribution in a training epoch of LightGCN in Figure 1(a). The majority of users (i.e., disadvantaged users) fail to provide sufficient training data for recommendation models due to the data sparsity problem. Consequently, recommendation models become dominated by advantaged users, who contribute more to the model's updating process. Although existing research has proposed some re-ranking methods (Li et al. 2021; Dai et al. 2022) that adjust recommendation results after model training to achieve fairness, they cannot mitigate the unfair training process. Thus, the UOF issue cannot be solved well. Recently, researchers proposed a distributionally-robust optimization-based method (Wen et al. 2022) that aims to improve the worst-case user experience. Nevertheless, the limited availability of training samples from disadvantaged users restricts the performance of this method.

In the evaluation phase, the existing UOF metric fails to provide a comprehensive evaluation of recommendation models. As depicted in Figure 1(b), it is evident that the best case of UOF involves improving the quality of recommendation results for disadvantaged users to reach that of advantaged users, significantly surpassing the worst case. We argue that the UOF metric should be able to capture the differences between recommendation models with the worst and the best fairness. However, the existing metric (Li et al. 2021; Dai et al. 2022) solely compares whether different user groups receive nearly the same quality of recommendation services, treating both the best and worst fairness scenarios as equally favorable. This metric will encourage recommendation models to approach the worst fairness and lead to dissatisfaction in both user groups.

In this paper, we propose a comprehensive solution to the aforementioned limitations through the introduction of the Intra- and Inter-GrOup Optimal Transport (II-GOOT) framework and a new metric, $\xi$-UOF. In detail, in the training phase, we propose the II-GOOT framework to enhance the training process for disadvantaged users. Therefore, we can tackle the root cause of the UOF issue by reducing the training gap between advantaged users and disadvantaged users. The II-GOOT framework comprises two stages: the intra-group stage and the inter-group stage.
(1) In the intra-group stage, our objective is to facilitate mutual assistance between pairs of similar disadvantaged users, thereby enhancing the training process. Firstly, we leverage the Optimal Transport (OT) (Liu et al. 2022b; Villani et al. 2009) mechanism to identify one-to-one similarities among disadvantaged users. Secondly, we facilitate the sharing of training samples between the two most similar disadvantaged users, thus alleviating the problem of data sparsity. Nevertheless, due to the significant data sparsity issue among disadvantaged users, the effectiveness of the intra-group stage might be constrained. Therefore, (2) in the inter-group stage, we further enable each disadvantaged user to learn from similar advantaged users who have been well trained in the recommendation model. We propose the novel inter-group optimal clustering mechanism to explore the similarities between disadvantaged and advantaged users based on their shared interactions with items. Subsequently, we minimize the distance of embeddings between advantaged users and their similar disadvantaged users, aiming to narrow the training gap between these two user groups.

In the evaluation phase, we define the Best UOF and the Worst UOF for a recommendation model, with different levels of recommendation accuracy. We emphasize that the optimal fair direction for a recommendation model is to achieve the Best UOF. However, attaining the ideal Best UOF in real-world scenarios is not feasible. Therefore, we introduce the $\xi$-UOF metric, which assesses the gap between the existing model and the model with the Best UOF. $\xi$-UOF takes into account both the fairness between advantaged and disadvantaged users and the accuracy of the recommendation model. This metric aims to strike a balance between fairness and accuracy, providing a comprehensive evaluation of the UOF issue.

We have conducted extensive experiments based on four backbone models on three widely used real-world datasets. The experimental results demonstrate that II-GOOT outperforms State-Of-The-Art (SOTA) methods in addressing the UOF issue. Moreover, we substantiate that the proposed $\xi$-UOF has the ability to identify different cases of UOF, which are overlooked by the existing metric. We summarize our contributions as follows: (1) We propose the II-GOOT framework to address the root cause of the UOF issue in the training phase. (2) We introduce the novel $\xi$-UOF metric, providing a comprehensive evaluation of the UOF issue in recommendation models in the evaluation phase. (3) We conduct extensive experiments to demonstrate the efficiency of II-GOOT and the effectiveness of $\xi$-UOF.

Related Work

Fair Recommendation
Fairness among different stakeholders in recommendation systems has attracted considerable attention in recent years. Considering the subject of fairness, fairness in recommender systems can be decoupled into user fairness, item fairness, and provider fairness (Deldjoo et al. 2023). The ultimate goal of fair recommendation is to mitigate disparities among different subject groups. For user fairness, many works strive to provide similar users with similar recommendation results, e.g., ranking accuracy (Deldjoo, Bellogin, and Di Noia 2021), diversity, coverage (Melchiorre et al. 2021), under-ranking (Gorantla, Deshpande, and Louis 2021), and selection rate (Sühr, Hilgard, and Lakkaraju 2021). For item fairness, similar items should receive equal exposure regardless of sensitive attributes (Rastegarpanah, Gummadi, and Crovella 2019; Deldjoo et al.
2021b; Dash et al. 2021) or past exposure (Biega, Gummadi, and Weikum 2018), as in the typical cold-start scene. For provider fairness, providers with more historical interactions may be recommended more often than the rest (Ferraro 2019; Gharahighehi, Vens, and Pliakos 2021), leading to the superstar effect. Exposure disparity caused by the correspondence between providers and items (Sühr, Hilgard, and Lakkaraju 2021) and private characteristics (Shakespeare et al. 2020) should also be mitigated to create an equal market. In this paper, we focus on the rarely explored fairness issue among users with different activity levels, i.e., the UOF problem. Different from existing work (Li et al. 2021; Rahmani et al. 2022; Wen et al. 2022), we dive into the training process to mitigate the learning gap between advantaged and disadvantaged user groups and propose a novel metric.

Optimal Transport
Optimal transport has garnered significant attention due to its excellent ability to match two distributions or spaces. Concerning OT as a field of mathematics, a broad range of literature is available (Villani et al. 2009; Santambrogio 2015; Figalli and Glaudo 2021). Notably, (Santambrogio 2015) unified the two classical formulations of OT: the Monge formulation and the Kantorovich formulation. Recent advances in accelerating OT computation have unveiled its potential in Machine Learning. Computation of Wasserstein distances and Wasserstein barycenters was greatly sped up by (Cuturi 2013; Cuturi and Doucet 2014). Many attempts have been made to utilize OT to improve downstream tasks in natural language processing (Asano, Rupprecht, and Vedaldi 2019; Chen et al. 2019), transfer learning (Flamary et al. 2016; Courty et al. 2017; Damodaran et al. 2018; Xu et al. 2020), adversarial learning (Arjovsky, Chintala, and Bottou 2017), neural architecture search (Yang, Liu, and Xu 2023), and recommendation systems (Liu et al. 2021, 2022a; Liu, Fang, and Wu 2023). As the user embeddings in collaborative filtering models can be seen as a kind of latent space, in this paper, we apply OT to match users in the latent space. The matching results are then utilized to enhance the training process.

Methodology
In this section, we introduce the proposed II-GOOT framework to solve the UOF issue in the training phase of recommendation models.

Problem Formulation
We use $\mathcal{U}$ and $\mathcal{I}$ to represent the user set and the item set. We divide users into a disadvantaged user group $\mathcal{D}$ and an advantaged user group $\mathcal{A}$ based on their interaction numbers, according to (Li et al. 2021; Rahmani et al. 2022). Users with more interactions are more likely to be advantaged. We denote the initial average recommendation performance (e.g., HitRatio, NDCG) of these two groups of users as $P_{\mathcal{D}}$ and $P_{\mathcal{A}}$, with $P_{\mathcal{A}} > P_{\mathcal{D}}$ in most cases. In this paper, we aim to narrow the gap in recommendation performance between $\mathcal{D}$ and $\mathcal{A}$ to achieve UOF while maintaining the overall recommendation performance.

Overview
In this section, we propose a novel Intra- and Inter-GrOup Optimal Transport framework, namely II-GOOT, to solve the UOF issue in the training phase. II-GOOT is a general framework that can be integrated with any recommendation model (i.e., backbone model) to achieve UOF. The overall architecture of the framework is depicted in Figure 2, and it is divided into two key stages: the intra-group stage and the inter-group stage.
(1) The intra-group stage aims to address the data sparsity problem encountered by disadvantaged users, thereby enhancing the modeling process for this group. To achieve this, we divide the disadvantaged users into two distinct groups and introduce the intra-group Optimal Transport (OT) to explore one-to-one similarities between these groups. Consequently, each pair of disadvantaged users can share their training samples, effectively mitigating the data sparsity issue. (2) In the inter-group stage, we introduce the novel inter-group optimal clustering mechanism to explore the similarities between advantaged and disadvantaged users. This step enables disadvantaged users to learn from their similar advantaged counterparts, further enhancing the training process for the disadvantaged group. By employing these two stages, we successfully reduce the training gap between advantaged and disadvantaged users, thereby mitigating the root cause of the UOF issue.

Intra-Group Stage
In this stage, we aim to alleviate the data sparsity problem of disadvantaged users. As depicted in Figure 2, firstly, we utilize the intra-group optimal transport mechanism to explore one-to-one similarities among disadvantaged users. Then, we enable disadvantaged users to share their training samples with their most similar users.

Intra-Group Optimal Transport. To ensure users with limited training samples can benefit from those with more extensive training data, we sort the disadvantaged users based on their interactions with items and divide them into two subgroups $\mathcal{G}_1$ and $\mathcal{G}_2$. Each subgroup comprises half of the disadvantaged users (i.e., $|\mathcal{G}_1| = |\mathcal{G}_2|$), with users in $\mathcal{G}_1$ having fewer interactions with items compared to those in $\mathcal{G}_2$. The primary goal of the intra-group optimal transport is to ascertain users' similarities based on their interactions with items. We achieve this objective in several steps.

Firstly, we construct one-hot interaction embeddings $H \in \{0, 1\}^{|\mathcal{U}| \times |\mathcal{I}|}$ of users, where $H_{ij} = 1$ indicates that $\mathcal{U}_i$ has interacted with $\mathcal{I}_j$, and $H_{ij} = 0$ otherwise. We denote the one-hot embeddings of $\mathcal{G}_1$ and $\mathcal{G}_2$ as $H^{\mathcal{G}_1}$ and $H^{\mathcal{G}_2}$, respectively.

Secondly, we extract the optimal transport matrix by solving the Monge-Kantorovich Problem (Bogachev and Kolesnikov 2012) to explore similar user pairs between $\mathcal{G}_1$ and $\mathcal{G}_2$. Consider $h^{\mathcal{G}_1}$ and $h^{\mathcal{G}_2}$, two variables sampled from $H^{\mathcal{G}_1}$ and $H^{\mathcal{G}_2}$, respectively. The Monge-Kantorovich Problem is then defined as follows:

Problem 1 (Monge-Kantorovich Problem) Given the transport cost matrix $C \in \mathbb{R}^{|\mathcal{G}_1| \times |\mathcal{G}_2|}_{+}$, the objective of the Monge-Kantorovich Problem is to find the joint probability $W \in \mathbb{R}^{|\mathcal{G}_1| \times |\mathcal{G}_2|}_{+}$ that minimizes the total transport cost:

$$d_C(H^{\mathcal{G}_1}, H^{\mathcal{G}_2}) = \min_{W} \int_{H^{\mathcal{G}_1} \times H^{\mathcal{G}_2}} C(h^{\mathcal{G}_1}, h^{\mathcal{G}_2})\, dW(h^{\mathcal{G}_1}, h^{\mathcal{G}_2}). \tag{1}$$

Here, $W_{ij}$ indicates the probability of transporting $\mathcal{U}_i$ in $\mathcal{G}_1$ to $\mathcal{U}_j$ in $\mathcal{G}_2$, which reflects the similarity between $\mathcal{U}_i$ and $\mathcal{U}_j$.

Figure 2: The overall framework of II-GOOT. In the intra-group stage, we enable each disadvantaged user to share training samples with his/her most similar disadvantaged user to mitigate the data sparsity problem. In the inter-group stage, we let disadvantaged users learn from advantaged users to further narrow the training gap between them.
To identify the most similar user in another group for a given user, we introduce a constraint on $W$:

$$\sum_{i=1}^{|\mathcal{G}_1|} \sum_{j=1}^{|\mathcal{G}_2|} W_{ij} = 1, \quad W_{ij} \in \left\{0, \tfrac{1}{|\mathcal{G}_1|}\right\}, \quad \sum_{i=1}^{|\mathcal{G}_1|} W_{ij} = \tfrac{1}{|\mathcal{G}_1|}, \quad \sum_{j=1}^{|\mathcal{G}_2|} W_{ij} = \tfrac{1}{|\mathcal{G}_1|}, \tag{2}$$

where $W_{ij} = \tfrac{1}{|\mathcal{G}_1|}$ indicates that $\mathcal{U}_j$ is the most similar user in $\mathcal{G}_2$ for $\mathcal{U}_i$, and vice versa. However, solving the Monge-Kantorovich Problem can be time-consuming, with a worst-case time complexity of $O(|\mathcal{G}_1|^3)$. To overcome this, we introduce the Sinkhorn divergence (Cuturi 2013) to smooth the objective with an entropic regularization:

$$d^{\epsilon}_C(H^{\mathcal{G}_1}, H^{\mathcal{G}_2}) = \min_{W} \int_{H^{\mathcal{G}_1} \times H^{\mathcal{G}_2}} C(h^{\mathcal{G}_1}, h^{\mathcal{G}_2})\, dW(h^{\mathcal{G}_1}, h^{\mathcal{G}_2}) + \epsilon \cdot \sum_{i=1}^{|\mathcal{G}_1|} \sum_{j=1}^{|\mathcal{G}_2|} W_{ij}\big(\log(W_{ij}) - 1\big). \tag{3}$$

The derived new objective can be efficiently solved through Sinkhorn's matrix scaling algorithm with a complexity of $O(|\mathcal{G}_1| \cdot |\mathcal{G}_2|)$ (Cuturi 2013). We introduce the detailed optimization process for Equation (3) in Appendix A.

Thirdly, we construct the cost matrix $C$ based on cosine similarities between $H^{\mathcal{G}_1}$ and $H^{\mathcal{G}_2}$:

$$C_{ij} = \frac{H^{\mathcal{G}_1}_i \cdot H^{\mathcal{G}_2}_j}{\|H^{\mathcal{G}_1}_i\| \times \|H^{\mathcal{G}_2}_j\|}. \tag{4}$$

Therefore, $C$ can reflect the initial similarities between users and gives additional constraints to the probability measure. By calculating $W$ in Equation (3) based on $C$, we can explore the one-to-one similar user pairs between $\mathcal{G}_1$ and $\mathcal{G}_2$.

Sharing Training Samples. During the training process of the backbone recommendation model, we enable similar users in $\mathcal{G}_1$ and $\mathcal{G}_2$ to share their training samples, based on the result of $W$, to mitigate the data sparsity problem of disadvantaged users. For example, if $W_{ij} = \tfrac{1}{|\mathcal{G}_1|}$, then $\mathcal{U}_i$ in $\mathcal{G}_1$ and $\mathcal{U}_j$ in $\mathcal{G}_2$ will train together.

Inter-Group Stage
In this stage, we enable disadvantaged users to learn from their similar advantaged users, thereby reducing the training gap between these two user groups. While the intra-group stage helps mitigate data sparsity among disadvantaged users, it alone may be insufficient to address the UOF issue due to the limited training samples available for disadvantaged users. Therefore, as shown in Figure 2, firstly, we propose a novel inter-group optimal clustering mechanism to explore similar disadvantaged users for each advantaged user. Then, disadvantaged users learn from the corresponding similar advantaged user to receive better recommendation results.

Inter-Group Optimal Clustering. To explore $n$-to-one similarities between disadvantaged and advantaged users, we propose the novel inter-group optimal clustering mechanism. In this approach, each advantaged user acts as a cluster center, while disadvantaged users are considered nodes to be clustered around these centers. To achieve this goal, we need to solve the following Monge-Kantorovich problem smoothed with the Sinkhorn divergence:

$$d^{\epsilon}_C(H^{\mathcal{D}}, H^{\mathcal{A}}) = \min_{X} \int_{H^{\mathcal{D}} \times H^{\mathcal{A}}} M(h^{\mathcal{D}}, h^{\mathcal{A}})\, dX(h^{\mathcal{D}}, h^{\mathcal{A}}) + \epsilon \sum_{i=1}^{|\mathcal{D}|} \sum_{j=1}^{|\mathcal{A}|} X_{ij}\big(\log(X_{ij}) - 1\big), \tag{5}$$

where $M \in \mathbb{R}^{|\mathcal{D}| \times |\mathcal{A}|}_{+}$ is the cost matrix, defined similarly to $C$, and $h^{\mathcal{D}}$ and $h^{\mathcal{A}}$ are two embeddings sampled from $H^{\mathcal{D}}$ and $H^{\mathcal{A}}$, respectively. The above problem aims to find similarities between disadvantaged and advantaged users. To achieve our objective of clustering each disadvantaged user into a specific advantaged user, we design a restriction on $X$ as follows:

$$\sum_{i=1}^{|\mathcal{D}|} \sum_{j=1}^{|\mathcal{A}|} X_{ij} = 1, \quad X_{ij} \in \left\{0, \tfrac{1}{|\mathcal{D}|}\right\}, \quad \sum_{i=1}^{|\mathcal{D}|} X_{ij} = \tfrac{1}{|\mathcal{A}|}, \quad \sum_{j=1}^{|\mathcal{A}|} X_{ij} = \tfrac{1}{|\mathcal{D}|}. \tag{6}$$

By restricting $\sum_{j=1}^{|\mathcal{A}|} X_{ij} = \tfrac{1}{|\mathcal{D}|}$, we ensure balanced clusters and avoid too many disadvantaged users being clustered together, which would cause over-smoothing of features. After solving the above optimization problem, we obtain $X$, which represents the clustering result. For example, $X_{ij} = \tfrac{1}{|\mathcal{D}|}$ indicates that $\mathcal{D}_i$ belongs to the cluster centered at $\mathcal{A}_j$.
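Both the intra-group matching objective in Eq. (3) and the inter-group clustering objective in Eq. (5) are instances of the same entropy-regularized transport problem, so a single solver covers both. Below is a minimal NumPy sketch of Sinkhorn's matrix scaling algorithm combined with the cosine-similarity cost of Eq. (4). It is our own illustration under simplifying assumptions, not the paper's released code: `H1`, `H2`, and the function names are hypothetical, uniform marginals stand in for the constraints of Eqs. (2) and (6), and the hard `argmax` pairing at the end only approximates the one-to-one assignment.

```python
import numpy as np

def cosine_cost(H1, H2):
    # Cost matrix of Eq. (4): pairwise cosine similarity between interaction rows.
    H1n = H1 / (np.linalg.norm(H1, axis=1, keepdims=True) + 1e-8)
    H2n = H2 / (np.linalg.norm(H2, axis=1, keepdims=True) + 1e-8)
    return H1n @ H2n.T

def sinkhorn(C, eps=0.05, n_iters=200):
    # Entropy-regularized OT of Eq. (3), solved by Sinkhorn's matrix scaling;
    # both marginals are uniform, mirroring the balanced constraints in Eq. (2).
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)                   # Gibbs kernel of the cost
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)                    # scale rows toward marginal a
        v = b / (K.T @ u)                  # scale columns toward marginal b
    return u[:, None] * K * v[None, :]     # transport plan W

# Toy usage with random one-hot interaction matrices (purely illustrative).
rng = np.random.default_rng(0)
H1 = (rng.random((50, 300)) < 0.05).astype(float)   # subgroup G1
H2 = (rng.random((50, 300)) < 0.10).astype(float)   # subgroup G2
W = sinkhorn(cosine_cost(H1, H2))
pairs = W.argmax(axis=1)   # hard pairing: most similar G2 user for each G1 user
```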
Embedding Training. During the training of the backbone recommendation model, we enable disadvantaged users to learn from advantaged users by enhancing the cohesion of each cluster. We calculate the inter-group loss as follows:

$$L_{inter} = \sum_{i=1}^{|\mathcal{D}|} \|E_{\mathcal{D}_i} - E_{\mathcal{T}_i}\|^2, \tag{7}$$

where $\mathcal{T}_i$ represents the clustering center of $\mathcal{D}_i$ and $E$ represents the user embedding. By minimizing $L_{inter}$, disadvantaged users can learn from their similar advantaged users, and the distributions of disadvantaged and advantaged users will become closer, which improves the recommendation performance of disadvantaged users.

Theorem 1 Let $\mathcal{H}$ be a hypothesis space of recommendation models with $h \in \mathcal{H}$. Let $R_{\mathcal{D}}(h)$ and $R_{\mathcal{A}}(h)$ be the expected errors in user groups $\mathcal{D}$ and $\mathcal{A}$, and $\hat{R}_{\mathcal{A}}(h)$ be the empirical estimate of $R_{\mathcal{A}}(h)$:

$$R_{\mathcal{D}}(h) \leqslant \hat{R}_{\mathcal{A}}(h) + \hat{d}_{\mathcal{H}}(\mathcal{D}, \mathcal{A}) + \gamma, \tag{8}$$

where $\gamma$ is a constant.

Theorem 1 tells us that, to obtain a recommendation model with a small $R_{\mathcal{D}}(h)$, it is necessary to minimize the $\mathcal{H}$-divergence $\hat{d}_{\mathcal{H}}(\mathcal{D}, \mathcal{A})$ together with $\hat{R}_{\mathcal{A}}(h)$. As pointed out by (Ben-David et al. 2006), a strategy to control the $\mathcal{H}$-divergence is to make the two user groups as indistinguishable as possible. Therefore, by aligning the distributions of disadvantaged and advantaged users, we can improve the recommendation performance of disadvantaged users and narrow the performance gap. The proof of Theorem 1 can be found in Appendix B.

We combine $L_{inter}$ with the recommendation loss $L_{utility}$ of the backbone model to achieve fairness and maintain the overall recommendation performance simultaneously:

$$L = L_{utility} + L_{inter}. \tag{9}$$

Through the intra-group stage and the inter-group stage, we enhance the training process for disadvantaged users and effectively address the root cause of the UOF issue. It is important to highlight that both the intra-group optimal transport and the inter-group optimal clustering processes can be executed ahead of the actual model training. Therefore, the II-GOOT framework is time-efficient.
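Before moving to the evaluation phase, a minimal PyTorch sketch of the training objective in Eqs. (7) and (9) may help. It is a hedged illustration, not the authors' implementation: `user_emb`, `dis_idx`, `center_idx`, and `utility_loss` are hypothetical names, and the norm in Eq. (7) is read here as a squared Euclidean distance.

```python
import torch

def inter_group_loss(user_emb: torch.Tensor,
                     dis_idx: torch.Tensor,
                     center_idx: torch.Tensor) -> torch.Tensor:
    """Eq. (7): pull each disadvantaged user's embedding E_{D_i} toward the
    embedding E_{T_i} of the advantaged user at its cluster center."""
    e_dis = user_emb[dis_idx]        # embeddings of disadvantaged users
    e_ctr = user_emb[center_idx]     # embeddings of their cluster centers
    return ((e_dis - e_ctr) ** 2).sum()

# Eq. (9): total objective = backbone recommendation loss + inter-group loss,
# where `utility_loss` would come from the backbone model (e.g., a BPR loss):
#   loss = utility_loss + inter_group_loss(user_emb, dis_idx, center_idx)
#   loss.backward()
```

Because the clustering assignment (`center_idx`) is fixed before training starts, this term adds only one embedding lookup and one distance computation per step, which is consistent with the time-efficiency claim above.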
A Novel Metric: $\xi$-UOF
In this section, we address the limitations of UOF research in the evaluation phase by introducing a novel metric.

Existing UOF
User-Oriented Fairness (UOF) is a kind of group fairness (Dwork et al. 2012; Hardt, Price, and Srebro 2016) that strives to establish equitable treatment for both advantaged and disadvantaged users within a recommendation model. Given $\mathcal{M}$, a metric (e.g., NDCG or HitRatio) that evaluates recommendation performance, UOF is defined as follows (Li et al. 2021; Rahmani et al. 2022):

Definition 1 (User-Oriented Fairness (UOF))
$$\mathbb{E}[\mathcal{M}(\mathcal{A})] = \mathbb{E}[\mathcal{M}(\mathcal{D})]. \tag{10}$$

UOF aims to offer users with different activity levels the same recommendation performance, which is usually impossible in real-world RSs. Therefore, researchers (Li et al. 2021; Rahmani et al. 2022) usually calculate the difference in average recommendation performance between the user groups to evaluate the fairness of a model:

Definition 2 (The UOF metric)
$$\mathcal{M}_{UOF}(\mathcal{A}, \mathcal{D}) = \left| \frac{1}{|\mathcal{A}|} \sum_{i=1}^{|\mathcal{A}|} \mathcal{M}(A_i) - \frac{1}{|\mathcal{D}|} \sum_{i=1}^{|\mathcal{D}|} \mathcal{M}(D_i) \right|. \tag{11}$$

However, $\mathcal{M}_{UOF}$ only evaluates the recommendation performance gap between advantaged and disadvantaged users, overlooking whether the recommendation results are satisfying. For instance, if both groups receive equally poor recommendations, the model would still be considered favorable according to $\mathcal{M}_{UOF}$. Such a metric may encourage recommendation models to achieve fairness at a low accuracy level.

Our Proposed $\xi$-UOF
To address these limitations and provide a comprehensive evaluation, we propose our metric, referred to as $\xi$-UOF, which takes both fairness and accuracy into account. To begin with, we define the Best UOF and the Worst UOF of a recommendation model, as Figure 1(b) shows:

Definition 3 (The Best UOF)
$$\mathbb{E}[\mathcal{M}(\mathcal{A})] = \mathbb{E}[\mathcal{M}(\mathcal{D})] = P_{\mathcal{A}}. \tag{12}$$

Definition 4 (The Worst UOF)
$$\mathbb{E}[\mathcal{M}(\mathcal{A})] = \mathbb{E}[\mathcal{M}(\mathcal{D})] = P_{\mathcal{D}}. \tag{13}$$

Clearly, if a recommendation model achieves the Best UOF, it provides fair recommendation results with high accuracy. Genuine fairness entails enhancing the recommendation outcomes for disadvantaged users in order to narrow the recommendation gap between them and advantaged users, i.e., the Best UOF. It does not involve reducing the recommendation quality of advantaged users to match the lower level experienced by disadvantaged users, i.e., the Worst UOF. Nevertheless, attaining the ideal Best UOF is impractical in real-world recommender systems due to limitations in training samples. Therefore, we define $\xi$-UOF, which evaluates the gap between a recommendation model and the model with the Best UOF.

Definition 5 ($\xi$-UOF)
$$\xi \geqslant \mathcal{M}_{UOF}(\mathcal{A}, \mathcal{D}) = \frac{|\mathcal{A}|}{|\mathcal{U}|} \max\left(0,\ P_{\mathcal{A}} - \frac{1}{|\mathcal{A}|} \sum_{i=1}^{|\mathcal{A}|} \mathcal{M}(A_i)\right) + \frac{|\mathcal{D}|}{|\mathcal{U}|} \max\left(0,\ P_{\mathcal{A}} - \frac{1}{|\mathcal{D}|} \sum_{i=1}^{|\mathcal{D}|} \mathcal{M}(D_i)\right), \tag{14}$$

where the majority user group (i.e., disadvantaged users) contributes more to the value of $\xi$-UOF. A smaller value of $\xi$ indicates a fairer model, and $\xi = 0$ signifies that a model has achieved the Best UOF or even surpassed it. Through $\xi$-UOF, we encourage recommendation models to achieve the Best UOF and provide satisfying recommendation results for both advantaged and disadvantaged users. We conduct experiments in the Experiments and Analysis section to further analyze the limitations of the existing metric and the advantages of $\xi$-UOF.

Experiments and Analysis
We conduct extensive experiments to answer the following questions:
Q1: Does II-GOOT outperform the existing methods in effectively addressing the UOF issue and enhancing recommendation performance?
Q2: Can $\xi$-UOF provide a robust evaluation of the UOF issue?
Q3: What is the respective impact of the intra-group and inter-group stages on the performance of II-GOOT?
Q4: How robust is the generalizability of the II-GOOT framework when subjected to variations in the categorization of advantaged and disadvantaged users?
Q5: Can II-GOOT narrow the training gap between advantaged users and disadvantaged users?

Datasets and Experimental Settings
Dataset. We conduct our experiments on three public Amazon datasets: Beauty, Grocery & Gourmet Food (Grocery), and Health & Personal Care (Health), which are widely used to evaluate the UOF issue (Li et al. 2021). We give a detailed description of the datasets in Appendix C.1.
Baselines and Backbone Models. We compare II-GOOT with the SOTA methods UFR (Li et al. 2021) and S-DRO (Wen et al. 2022). Besides, we choose four backbone models, including MF (Koren, Bell, and Volinsky 2009), NeuMF (He et al. 2017), VAECF (Liang et al. 2018), and LightGCN (He et al. 2020), to evaluate the performance. We give a detailed introduction of the baseline models and backbone models in Appendix C.2.
Evaluation Protocols and Parameter Settings. We extract the top 5% of users as advantaged users according to their interaction numbers, leaving the others as disadvantaged users. Besides, we adopt Normalized Discounted Cumulative Gain (NDCG) (Wang et al. 2013) and Hit Ratio (HR) (Waters 1976) to evaluate the recommendation performance (Li et al. 2021; Dai et al.
2022). Then, we utilize our proposed $\xi$-UOF to evaluate the UOF level of a recommendation model, where a lower value of $\xi$-UOF means fairer performance. We give detailed evaluation protocols and parameter settings in Appendix C.3.

Overall Comparison (Q1, Q2)
We conduct extensive experiments on three public datasets. The results are reported in Table 1.
To Answer Q1. The experimental results demonstrate that II-GOOT outperforms all baselines with fairer recommendation results and higher overall performance. Compared with the original backbone models, II-GOOT particularly enhances the training process for disadvantaged users. Since model training involves both advantaged and disadvantaged users, the accuracy of recommendations for advantaged users also experiences an improvement. With both user groups being more satisfied with the recommendation results, II-GOOT effectively fosters fairness in recommendation models, moving closer to the Best UOF and enhancing the overall recommendation performance. Compared with UFR, II-GOOT has the ability to solve the root cause of UOF, i.e., the training bias between advantaged and disadvantaged users. UFR aims to narrow the recommendation gap between these two groups of users by re-ranking recommendation results. Its effectiveness is hindered by the inadequate training of disadvantaged users. Compared with S-DRO, II-GOOT solves the data sparsity problem of disadvantaged users by expanding the pool of training samples. While S-DRO focuses solely on minimizing the loss function for disadvantaged users alongside advantaged users, its potential is curtailed by the limited training data for the former group.
To Answer Q2. The experimental results prove that $\xi$-UOF has the ability to identify different levels of fairness and comprehensively evaluate the UOF issue. As shown in Table 1, recommendation models closer to the Best UOF have a lower value of $\xi$-UOF. This trend is reasonable: by optimizing recommendation models toward the Best UOF, disadvantaged users receive more satisfying recommendation results and the recommendation gap between advantaged and disadvantaged users is narrowed. Among the baseline models, UFR aims to simply ensure that both advantaged and disadvantaged users experience similar recommendation performance, as shown in Equation (11). However, since the re-ranking method UFR cannot mitigate the training bias during model training, it reduces the recommendation quality of advantaged users to match the low level experienced by disadvantaged users. For instance, on the Beauty dataset, UFR reduces the NDCG value for advantaged users from 0.3046 to 0.1956 in the NeuMF model, while the value for disadvantaged users shows a marginal increase from 0.1861 to 0.1863. Such performance will lead to dissatisfaction in both user groups and should not be encouraged. $\xi$-UOF recognizes it as unfavorable with the high metric value of 0.1178.

Ablation Study (Q3)
In this section, we choose LightGCN as the backbone model to demonstrate the effectiveness of the intra-group stage and the inter-group stage. The experimental results are reported in Table 2, with Intra and Inter indicating the model with only the intra-group stage and only the inter-group stage, respectively. Both the intra-group stage and the inter-group stage yield a fairer model, accompanied by a better overall recommendation performance. These outcomes underscore the significance of enhancing the training process for disadvantaged users.
Compared with the inter-group stage, the intra-group stage offers a more substantial improvement. The reason is that the key issue behind the insufficient training of disadvantaged users is the data sparsity problem; the intra-group stage expands the pool of training samples, giving a more efficient solution. II-GOOT has the best performance, serving as evidence that both stages play integral roles in achieving optimal outcomes.

Table 1: Experimental results. Ove. indicates the overall recommendation performance. ξ. indicates the value of $\xi$-UOF. Adv. indicates advantaged users. Dis. indicates disadvantaged users. The best results are marked with *; in the original layout, II-GOOT results are highlighted in bold and second-best results are underlined. Each row lists Ove./Adv./Dis./ξ. for the Beauty, Grocery, and Health datasets in turn.

MF, NDCG:
Original  0.152/0.266/0.146/0.114 | 0.176/0.320/0.168/0.144 | 0.171/0.341/0.162/0.170
UFR       0.158/0.154/0.158/0.109 | 0.171/0.191/0.170/0.149 | 0.162/0.163/0.162/0.179
S-DRO     0.159/0.266/0.153/0.108 | 0.180/0.316/0.172/0.141 | 0.170/0.342/0.161/0.171
II-GOOT   0.194*/0.271*/0.189*/0.073* | 0.228*/0.350*/0.222*/0.093* | 0.199*/0.347*/0.191*/0.143*

MF, HR:
Original  0.256/0.439/0.247/0.182 | 0.316/0.478/0.308/0.162 | 0.293/0.463/0.284/0.170
UFR       0.251/0.277/0.250/0.187 | 0.305/0.313/0.304/0.174 | 0.288/0.310/0.287/0.175
S-DRO     0.260/0.426/0.251/0.179 | 0.320/0.479/0.312/0.158 | 0.289/0.447/0.281/0.174
II-GOOT   0.286*/0.445*/0.278*/0.153* | 0.364*/0.483*/0.358*/0.115* | 0.322*/0.469*/0.315*/0.141*

NeuMF, NDCG:
Original  0.192/0.305/0.186/0.113 | 0.200/0.344/0.193/0.144 | 0.196/0.359/0.188/0.162
UFR       0.187/0.196/0.186/0.118 | 0.200/0.292/0.196/0.144 | 0.192/0.231/0.190/0.167
S-DRO     0.196/0.311/0.190/0.109 | 0.200/0.331/0.193/0.144 | 0.207/0.352/0.199/0.152
II-GOOT   0.219*/0.323*/0.213*/0.087* | 0.219*/0.351*/0.212*/0.125* | 0.234*/0.391*/0.225*/0.127*

NeuMF, HR:
Original  0.282/0.481/0.272/0.199 | 0.328/0.504/0.319/0.176 | 0.298/0.520/0.287/0.221
UFR       0.262/0.259/0.262/0.219 | 0.331/0.357/0.329/0.173 | 0.294/0.301/0.293/0.226
S-DRO     0.289/0.467/0.280/0.191 | 0.336/0.487/0.328/0.168 | 0.298/0.510/0.286/0.222
II-GOOT   0.321*/0.509*/0.311*/0.162* | 0.356*/0.510*/0.348*/0.148* | 0.349*/0.530*/0.340*/0.171*

VAECF, NDCG:
Original  0.208/0.343/0.201/0.135 | 0.206/0.351/0.199/0.145 | 0.242/0.410/0.233/0.168
UFR       0.205/0.212/0.205/0.139 | 0.208/0.216/0.208/0.143 | 0.233/0.260/0.231/0.177
S-DRO     0.216/0.332/0.210/0.127 | 0.208/0.352/0.200/0.144 | 0.240/0.401/0.231/0.171
II-GOOT   0.228*/0.357*/0.221*/0.116* | 0.235*/0.364*/0.228*/0.117* | 0.261*/0.427*/0.252*/0.150*

VAECF, HR:
Original  0.324/0.528/0.313/0.204 | 0.350/0.529/0.340/0.179 | 0.323/0.555/0.311/0.232
UFR       0.320/0.322/0.320/0.208 | 0.345/0.368/0.343/0.185 | 0.314/0.356/0.312/0.241
S-DRO     0.334/0.529/0.323/0.194 | 0.357/0.527/0.348/0.173 | 0.328/0.560/0.316/0.227
II-GOOT   0.353*/0.537*/0.343*/0.175* | 0.361*/0.541*/0.352*/0.169* | 0.349*/0.562*/0.338*/0.206*

LightGCN, NDCG:
Original  0.245/0.435/0.235/0.190 | 0.257/0.401/0.249/0.144 | 0.263/0.501/0.250/0.238
UFR       0.243/0.309/0.239/0.193 | 0.251/0.307/0.248/0.150 | 0.264/0.355/0.259/0.237
S-DRO     0.248/0.421/0.239/0.187 | 0.260/0.405/0.252/0.141 | 0.272/0.492/0.260/0.229
II-GOOT   0.286*/0.443*/0.278*/0.150* | 0.289*/0.407*/0.283*/0.112* | 0.309*/0.502*/0.299*/0.192*

LightGCN, HR:
Original  0.375/0.625/0.362/0.250 | 0.393/0.622/0.380/0.230 | 0.376/0.633/0.362/0.257
UFR       0.368/0.404/0.366/0.257 | 0.396/0.441/0.394/0.226 | 0.383/0.419/0.381/0.250
S-DRO     0.376/0.602/0.365/0.249 | 0.414/0.620/0.403/0.208 | 0.387/0.639/0.374/0.246
II-GOOT   0.433*/0.641*/0.422*/0.193* | 0.443*/0.628*/0.434*/0.180* | 0.448*/0.637*/0.438*/0.185*
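As a concrete companion to Definition 5 and the ξ. columns of Table 1, the following sketch computes the metric from per-user scores. It is our own illustrative code, not the authors' evaluation script; `xi_uof`, `scores_adv`, `scores_dis`, and `p_adv` are hypothetical names, with `p_adv` standing for the advantaged group's reference performance $P_{\mathcal{A}}$.

```python
import numpy as np

def xi_uof(scores_adv: np.ndarray, scores_dis: np.ndarray, p_adv: float) -> float:
    """xi-UOF of Definition 5, given per-user scores (e.g., NDCG) for the
    advantaged and disadvantaged groups and the reference performance P_A."""
    n_adv, n_dis = len(scores_adv), len(scores_dis)
    n_all = n_adv + n_dis
    gap_adv = max(0.0, p_adv - scores_adv.mean())  # shortfall of group A w.r.t. P_A
    gap_dis = max(0.0, p_adv - scores_dis.mean())  # shortfall of group D w.r.t. P_A
    # Groups are weighted by their share of the user base, so the majority
    # (disadvantaged) group dominates; lower is fairer, and 0 means Best UOF.
    return (n_adv / n_all) * gap_adv + (n_dis / n_all) * gap_dis

# Toy usage with 5% advantaged users, mirroring the evaluation protocol.
rng = np.random.default_rng(0)
adv = rng.normal(0.43, 0.05, 50).clip(0, 1)    # per-user scores, group A
dis = rng.normal(0.28, 0.05, 950).clip(0, 1)   # per-user scores, group D
print(xi_uof(adv, dis, p_adv=adv.mean()))
```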
Table 2: Ablation study. Ove. indicates the overall recommendation performance. ξ. indicates the value of $\xi$-UOF. Each row lists Ove./ξ. for the Beauty, Grocery, and Health datasets in turn.

NDCG:
Original  0.245/0.190 | 0.257/0.144 | 0.263/0.238
Intra     0.274/0.160 | 0.279/0.124 | 0.290/0.207
Inter     0.269/0.159 | 0.277/0.128 | 0.280/0.219
II-GOOT   0.286/0.149 | 0.289/0.112 | 0.309/0.192

HR:
Original  0.375/0.250 | 0.393/0.230 | 0.376/0.257
Intra     0.420/0.214 | 0.424/0.191 | 0.425/0.205
Inter     0.410/0.220 | 0.420/0.193 | 0.417/0.212
II-GOOT   0.433/0.193 | 0.443/0.180 | 0.448/0.185

Generalizability of II-GOOT (Q4)
We conduct experiments in Appendix D.1 to prove that II-GOOT has strong generalizability in narrowing the recommendation gap across various user distributions.

The Change in $L_{inter}$ (Q5)
We conduct experiments in Appendix D.2 to prove that II-GOOT has the ability to narrow the training gap between advantaged and disadvantaged users.

Conclusion
This paper focuses on the rarely studied User-Oriented Fairness (UOF) issue in recommender systems, with the objective of reducing the recommendation performance gap between advantaged and disadvantaged users. We address the UOF issue in two phases of recommendation models. In the training phase, we propose an Intra- and Inter-GrOup Optimal Transport (II-GOOT) framework. This framework effectively narrows the training gap between advantaged and disadvantaged users through the intra-group stage and the inter-group stage. In the evaluation phase, we propose a novel $\xi$-UOF metric to give a comprehensive evaluation of the UOF issue. We conduct extensive experiments on three real-world datasets based on four backbone models, demonstrating the efficiency of II-GOOT and the effectiveness of $\xi$-UOF.

Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (No. 72192823) and the "Ten Thousand Talents Program" of Zhejiang Province for Leading Experts (No. 2021R52001).

References
Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasserstein generative adversarial networks. In International Conference on Machine Learning, 214–223. PMLR.
Asano, Y. M.; Rupprecht, C.; and Vedaldi, A. 2019. Self-labelling via simultaneous clustering and representation learning. arXiv preprint arXiv:1911.05371.
Ben-David, S.; Blitzer, J.; Crammer, K.; and Pereira, F. 2006. Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems, 19.
Biega, A. J.; Gummadi, K. P.; and Weikum, G. 2018. Equity of attention: Amortizing individual fairness in rankings. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 405–414.
Binns, R. 2018. Fairness in machine learning: Lessons from political philosophy. In Conference on Fairness, Accountability and Transparency, 149–159. PMLR.
Bogachev, V. I.; and Kolesnikov, A. V. 2012. The Monge-Kantorovich problem: achievements, connections, and perspectives. Russian Mathematical Surveys, 67(5): 785.
Chen, J.; Dong, H.; Wang, X.; Feng, F.; Wang, M.; and He, X. 2023. Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems, 41(3): 1–39.
Chen, L.; Zhang, Y.; Zhang, R.; Tao, C.; Gan, Z.; Zhang, H.; Li, B.; Shen, D.; Chen, C.; and Carin, L. 2019. Improving sequence-to-sequence learning via optimal transport. arXiv preprint arXiv:1901.06283.
Courty, N.; Flamary, R.; Habrard, A.; and Rakotomamonjy, A. 2017. Joint distribution optimal transportation for domain adaptation.
Advances in Neural Information Processing Systems, 30.
Cuturi, M. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26.
Cuturi, M.; and Doucet, A. 2014. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, 685–693. PMLR.
Dai, E.; Zhao, T.; Zhu, H.; Xu, J.; Guo, Z.; Liu, H.; Tang, J.; and Wang, S. 2022. A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability. arXiv preprint arXiv:2204.08570.
Damodaran, B. B.; Kellenberger, B.; Flamary, R.; Tuia, D.; and Courty, N. 2018. DeepJDOT: Deep joint distribution optimal transport for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), 447–463.
Dash, A.; Chakraborty, A.; Ghosh, S.; Mukherjee, A.; and Gummadi, K. P. 2021. When the umpire is also a player: Bias in private label product recommendations on e-commerce marketplaces. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 873–884.
Deldjoo, Y.; Anelli, V. W.; Zamani, H.; Bellogin, A.; and Di Noia, T. 2021a. A flexible framework for evaluating user and item fairness in recommender systems. User Modeling and User-Adapted Interaction, 1–55.
Deldjoo, Y.; Anelli, V. W.; Zamani, H.; Bellogin, A.; and Di Noia, T. 2021b. A flexible framework for evaluating user and item fairness in recommender systems. User Modeling and User-Adapted Interaction, 1–55.
Deldjoo, Y.; Bellogin, A.; and Di Noia, T. 2021. Explaining recommender systems fairness and accuracy through the lens of data characteristics. Information Processing & Management, 58(5): 102662.
Deldjoo, Y.; Jannach, D.; Bellogin, A.; Difonzo, A.; and Zanzonelli, D. 2022. A survey of research on fair recommender systems. arXiv preprint arXiv:2205.11127.
Deldjoo, Y.; Jannach, D.; Bellogin, A.; Difonzo, A.; and Zanzonelli, D. 2023. Fairness in recommender systems: research landscape and future directions. User Modeling and User-Adapted Interaction, 1–50.
Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–226.
Ferraro, A. 2019. Music cold-start and long-tail recommendation: bias in deep representations. In Proceedings of the 13th ACM Conference on Recommender Systems, 586–590.
Figalli, A.; and Glaudo, F. 2021. An Invitation to Optimal Transport, Wasserstein Distances, and Gradient Flows. European Mathematical Society/AMS. ISBN 978-3-98547-010-5.
Flamary, R.; Courty, N.; Tuia, D.; and Rakotomamonjy, A. 2016. Optimal transport for domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell., 1: 1–40.
Gharahighehi, A.; Vens, C.; and Pliakos, K. 2021. Fair multi-stakeholder news recommender system with hypergraph ranking. Information Processing & Management, 58(5): 102663.
Gorantla, S.; Deshpande, A.; and Louis, A. 2021. On the problem of underranking in group-fair ranking. In International Conference on Machine Learning, 3777–3787. PMLR.
Han, Z.; Chen, C.; Zheng, X.; Liu, W.; Wang, J.; Cheng, W.; and Li, Y. 2023a. In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems. In Proceedings of the 31st ACM International Conference on Multimedia, 6190–6201.
Han, Z.; Zheng, X.; Chen, C.; Cheng, W.; and Yao, Y. 2023b. Intra and Inter Domain HyperGraph Convolutional Network for Cross-Domain Recommendation. In Proceedings of the ACM Web Conference 2023, 449–459.
Hardt, M.; Price, E.; and Srebro, N. 2016. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 29.
He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 639–648.
He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T.-S. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, 173–182.
Hutchinson, B.; and Mitchell, M. 2019. 50 years of test (un)fairness: Lessons for machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 49–58.
Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37.
Li, Y.; Chen, C.; Zheng, X.; Zhang, Y.; Han, Z.; Meng, D.; and Wang, J. 2023. Making Users Indistinguishable: Attribute-wise Unlearning in Recommender Systems. In Proceedings of the 31st ACM International Conference on Multimedia, MM '23, 984–994. New York, NY, USA: Association for Computing Machinery. ISBN 9798400701085.
Li, Y.; Chen, H.; Fu, Z.; Ge, Y.; and Zhang, Y. 2021. User-oriented fairness in recommendation. In Proceedings of the Web Conference 2021, 624–632.
Li, Y.; Zheng, X.; Chen, C.; and Liu, J. 2022. Making recommender systems forget: Learning and unlearning for erasable recommendation. arXiv preprint arXiv:2203.11491.
Liang, D.; Krishnan, R. G.; Hoffman, M. D.; and Jebara, T. 2018. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, 689–698.
Liu, W.; Su, J.; Chen, C.; and Zheng, X. 2021. Leveraging distribution alignment via Stein path for cross-domain cold-start recommendation. Advances in Neural Information Processing Systems, 34: 19223–19234.
Liu, W.; Zheng, X.; Hu, M.; and Chen, C. 2022a. Collaborative filtering with attribution alignment for review-based non-overlapped cross domain recommendation. In Proceedings of the ACM Web Conference 2022, 1181–1190.
Liu, W.; Zheng, X.; Su, J.; Hu, M.; Tan, Y.; and Chen, C. 2022b. Exploiting variational domain-invariant user embedding for partially overlapped cross domain recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 312–321.
Liu, Z.; Fang, Y.; and Wu, M. 2023. Mitigating popularity bias for users and items with fairness-centric adaptive recommendation. ACM Transactions on Information Systems, 41(3): 1–27.
Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; and Galstyan, A. 2021. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6): 1–35.
Melchiorre, A. B.; Rekabsaz, N.; Parada-Cabaleiro, E.; Brandl, S.; Lesota, O.; and Schedl, M. 2021. Investigating gender fairness of recommendation algorithms in the music domain. Information Processing & Management, 58(5): 102666.
Rahmani, H. A.; Naghiaei, M.; Dehghan, M.; and Aliannejadi, M. 2022. Experiments on generalizability of user-oriented fairness in recommender systems. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2755–2764.
Rastegarpanah, B.; Gummadi, K. P.; and Crovella, M. 2019. Fighting fire with fire: Using antidote data to improve polarization and fairness of recommender systems. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, 231–239.
Santambrogio, F. 2015. Optimal transport for applied mathematicians. Birkhäuser, NY, 55(58-63): 94.
Shakespeare, D.; Porcaro, L.; Gómez, E.; and Castillo, C. 2020. Exploring artist gender bias in music recommendation. arXiv preprint arXiv:2009.01715.
Su, J.; Chen, C.; Liu, W.; Wu, F.; Zheng, X.; and Lyu, H. 2023. Enhancing Hierarchy-Aware Graph Networks with Deep Dual Clustering for Session-based Recommendation. In Proceedings of the ACM Web Conference 2023, 165–176.
Sühr, T.; Hilgard, S.; and Lakkaraju, H. 2021. Does fair ranking improve minority outcomes? Understanding the interplay of human and algorithmic biases in online hiring. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 989–999.
Verma, S.; and Rubin, J. 2018. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, 1–7.
Villani, C.; et al. 2009. Optimal transport: old and new, volume 338. Springer.
Wang, Y.; Wang, L.; Li, Y.; He, D.; and Liu, T.-Y. 2013. A theoretical analysis of NDCG type ranking measures. In Conference on Learning Theory, 25–54. PMLR.
Waters, S. 1976. Hit ratios. The Computer Journal, 19(1): 21–24.
Wen, H.; Yi, X.; Yao, T.; Tang, J.; Hong, L.; and Chi, E. H. 2022. Distributionally-robust Recommendations for Improving Worst-case User Experience. In Proceedings of the ACM Web Conference 2022, 3606–3610.
Xu, R.; Liu, P.; Zhang, Y.; Cai, F.; Wang, J.; Liang, S.; Ying, H.; and Yin, J. 2020. Joint Partial Optimal Transport for Open Set Domain Adaptation. In IJCAI, 2540–2546.
Yang, J.; Liu, Y.; and Xu, H. 2023. HOTNAS: Hierarchical Optimal Transport for Neural Architecture Search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11990–12000.
Zheng, X.; Su, J.; Liu, W.; and Chen, C. 2022a. DDGHM: dual dynamic graph with hybrid metric training for cross-domain sequential recommendation. In Proceedings of the 30th ACM International Conference on Multimedia, 471–481.
Zheng, X.; Wu, R.; Han, Z.; Chen, C.; Chen, L.; and Han, B. 2022b. Heterogeneous Information Crossing on Graphs for Session-based Recommender Systems. ACM Transactions on the Web.
A Diffusion-Based Framework for Multi-Class Anomaly Detection

Haoyang He1*, Jiangning Zhang1,2*, Hongxu Chen1, Xuhai Chen1, Zhishan Li1, Xu Chen2, Yabiao Wang2, Chengjie Wang2, Lei Xie1†
1College of Control Science and Engineering, Zhejiang University
2Youtu Lab, Tencent
{haoyanghe,186368,chenhongxu,22232044,zhishanli}@zju.edu.cn, {cxxuchen,caseywang,jasoncjwang}@tencent.com, [email protected]

Abstract
Reconstruction-based approaches have achieved remarkable outcomes in anomaly detection. The exceptional image reconstruction capabilities of recently popular diffusion models have sparked research efforts to utilize them for enhanced reconstruction of anomalous images. Nonetheless, these methods might face challenges related to the preservation of image categories and pixel-wise structural integrity in the more practical multi-class setting. To solve the above problems, we propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection, which consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor. Firstly, the SG network is proposed for reconstructing anomalous regions while preserving the original image's semantic information. Secondly, we introduce the Spatial-aware Feature Fusion (SFF) block to maximize reconstruction accuracy when dealing with extensively reconstructed areas. Thirdly, the input and reconstructed images are processed by a pre-trained feature extractor to generate anomaly maps based on features extracted at different scales. Experiments on the MVTec-AD and VisA datasets demonstrate the effectiveness of our approach, which surpasses state-of-the-art methods, e.g., achieving 96.8/52.6 and 97.2/99.0 (AUROC/AP) for localization and detection respectively on the multi-class MVTec-AD dataset. Code is available at https://lewandofskee.github.io/projects/diad.

Introduction
Anomaly detection is a crucial task in computer vision and industrial applications (Tao et al. 2022; Salehi et al. 2022; Liu et al. 2023). The goal of visual anomaly detection is to determine anomalous images and locate the regions of anomaly accurately. Existing anomaly detection models (Liznerski et al. 2021; Yi and Yoon 2020; Yu et al. 2021) mostly correspond to one class, which requires a large amount of storage space and training time as the number of classes increases. There is a critical requirement for a robust unsupervised multi-class anomaly detection model.

*These authors contributed equally.
†Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: An analysis of different diffusion models for multi-class anomaly detection. The image above shows various denoising network architectures, while the images below demonstrate the results reconstructed by different methods for the same input image. a) DDPM suffers from categorical errors. b) LDM exhibits semantic errors. c) Our approach effectively reconstructs the anomalous regions while preserving the semantic information of the original image.

The current mainstream unsupervised anomaly detection methods can be divided into three categories: synthesizing-based (Zavrtanik, Kristan, and Skočaj 2021a; Li et al. 2021), embedding-based (Defard et al. 2021; Roth et al. 2022; Xie et al. 2023), and reconstruction-based (Liu et al. 2022; Liang et al. 2023) methods.
The core of the reconstruction-based method is that, during training, the model learns only from normal images. During testing, the model reconstructs abnormal images into normal ones using the trained model. Therefore, by comparing the reconstructed image with the input image, we can determine the location of anomalies. Traditional reconstruction-based methods, including AEs (Zavrtanik, Kristan, and Skočaj 2021b), VAEs (Kingma and Welling 2022), and GANs (Liang et al. 2023; Yan et al. 2021), can learn the distribution of normal samples and reconstruct abnormal regions during the testing phase. However, these models have limited reconstruction capabilities, especially for large-scale defects or missing regions. Hence, models with stronger reconstruction capability are required to effectively tackle multi-class anomaly detection.

Recently, diffusion models (Ho, Jain, and Abbeel 2020; Rombach et al. 2022; Zhang and Agrawala 2023) have demonstrated powerful image-generation capability. However, directly using current mainstream diffusion models cannot effectively address multi-class anomaly detection problems. 1) The Denoising Diffusion Probabilistic Model (DDPM) (Ho, Jain, and Abbeel 2020) in Fig. 1(a) may, in the multi-class setting, encounter issues with misclassifying image categories, because after adding T timesteps of noise to the input image, the original class information is lost. During inference, denoising is performed based on this Gaussian-noise-like distribution, which may generate images belonging to different categories. 2) The Latent Diffusion Model (LDM) (Rombach et al. 2022) has an embedder as a class condition, as shown in Fig. 1(b), which overcomes the problem of misclassification in DDPM. However, LDM cannot address the issue of semantic loss in generated images: it cannot simultaneously preserve the semantic information of the input image while reconstructing the anomalous regions. For example, it may fail to maintain direction consistency with the input image for objects like screws and hazelnuts.

To address these problems, we propose DiAD for multi-class anomaly detection in Fig. 2, which comprises a pixel-space autoencoder, a latent-space denoising network, and a feature-space pre-trained model. To effectively maintain consistent semantic information with the original image while reconstructing the location of anomalous regions, we propose the Semantic-Guided (SG) network with a connection to the Stable Diffusion (SD) denoising network. To further enhance the capability of preserving fine details in the original image, we propose the Spatial-aware Feature Fusion (SFF) block to integrate features at different scales. Finally, features are extracted from the reconstructed and input images through a pre-trained model to compute anomaly scores.

We summarize our contributions as follows:
• We propose a novel diffusion-based framework, DiAD, for multi-class anomaly detection, which first tackles the problem of existing denoising networks of diffusion-based methods failing to correctly reconstruct anomalies.
• We construct an SG network connecting to the SD denoising network to maintain consistent semantic information and reconstruct the anomalies.
• We propose an SFF block to integrate features from different scales to further improve the reconstruction ability.
• Abundant experiments demonstrate the sufficient superiority of DiAD over SOTA methods.

Related Work
Diffusion Model.
The diffusion model has gained widespread attention owing to its remarkable reconstruction ability. It has demonstrated excellent performance in various applications such as image generation (Zhang and Agrawala 2023), video generation (Ho et al. 2022), object detection (Chen et al. 2022), image segmentation (Amit et al. 2022), etc. LDM (Rombach et al. 2022) introduces conditions through cross-attention to control generation.

Anomaly Detection. AD contains a variety of different settings, e.g., open-set (Ding, Pang, and Shen 2022), noisy learning (Tan et al. 2021; Yoon et al. 2022), zero-/few-shot (Huang et al. 2022; Jeong et al. 2023; Cao et al. 2023; Chen, Han, and Zhang 2023; Chen et al. 2023b; Zhang et al. 2023b), 3D AD (Wang et al. 2023; Chen et al. 2023a), etc. Unsupervised anomaly detection can primarily be categorized into three major methodologies:

1) Synthesizing-based methods synthesize anomalies on normal image samples. During the training phase, both normal images and synthetically generated abnormal images are input into the network for training, which aids anomaly detection and localization. DRAEM (Zavrtanik, Kristan, and Skočaj 2021a) consists of an end-to-end network composed of a reconstruction network and a discriminative sub-network, which synthesizes and generates just-out-of-distribution phenomena. However, due to the diversity and unpredictability of anomalies in real-world scenarios, it is impossible to synthesize all types of anomalies.

2) Embedding-based methods encode the original image's three-dimensional information into a multi-dimensional feature space (Roth et al. 2022; Cao et al. 2022; Gu et al. 2023). Most methods employ networks (He et al. 2016; Tan and Le 2019; Zhang et al. 2022, 2023c; Wu et al. 2023) pre-trained on ImageNet (Deng et al. 2009) for feature extraction. RD4AD (Deng and Li 2022) utilizes a WideResNet50 (Zagoruyko and Komodakis 2016) as the teacher model for feature extraction and employs a structurally identical network in reverse as the student model, computing the cosine similarity of corresponding features as anomaly scores. However, due to significant differences between industrial images and the data distribution of ImageNet, the extracted features might not be suitable for industrial anomaly detection purposes.

3) Reconstruction-based methods aim to train a model on a dataset without anomalies. The model learns to identify patterns and characteristics in the normal data. OCR-GAN (Liang et al. 2023) decouples images into different frequencies and uses a GAN for reconstruction. EdgRec (Liu et al. 2022) achieves good reconstruction results by first synthesizing anomalies and then extracting grayscale edge information from images, which is ultimately input into a reconstruction network. However, there are certain limitations in the reconstruction of large-area anomalies, and the accuracy of anomaly localization is also insufficient.

Recently, some studies have applied diffusion models to anomaly detection. AnoDDPM (Wyatt et al. 2022) is the first approach to employ a diffusion model for medical anomaly detection. DiffusionAD (Zhang et al. 2023a) utilizes an anomaly synthesis strategy to generate anomalous samples and labels, along with two sub-networks dedicated to the tasks of denoising and segmentation. DDAD (Mousakhan, Brox, and Tayyub 2023) employs a score-based pre-trained diffusion model to generate normal samples while fine-tuning the pre-trained feature extractor to achieve domain transfer.
However, these approaches only add a limited number of noise steps and perform few denoising steps, which makes them unable to reconstruct large-scale defects. To overcome the aforementioned problems, we propose a diffusion-based framework, DiAD, for multi-class anomaly detection, which first tackles the problem of existing diffusion-based methods failing to correctly reconstruct anomalies.

Figure 2: Framework of the proposed DiAD that contains three parts: 1) a pixel-space autoencoder $\{\mathcal{E}, \mathcal{D}\}$; 2) a latent-space Semantic-Guided (SG) network with a connection to the Stable Diffusion (SD) denoising network; and 3) a feature-space pre-trained feature extractor $\Psi$. During training, the input $x_0$ and the latent variable $z_T$ are inputted into the SG network and the SD denoising network, respectively. The MSE loss between the output noise and the input noise is calculated and gradient optimization is performed. During testing, $x_0$ and the reconstructed image $\hat{x}_0$ are inputted into the same pre-trained feature extraction network to obtain feature maps $\{f_1, f_2, f_3\}$ of different scales, and their anomaly scores $S$ are calculated.

Preliminaries

Denoising Diffusion Probabilistic Model. The Denoising Diffusion Probabilistic Model (DDPM) consists of two processes: the forward diffusion process and the reverse denoising process. During the forward process, a noisy sample $x_t$ is generated using a Markov chain that incrementally adds Gaussian-distributed noise to an initial data sample $x_0$. The forward diffusion process can be characterized as follows:

$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, I), \tag{1}$$

where $\alpha_t = 1 - \beta_t$, $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i = \prod_{i=1}^{t} (1 - \beta_i)$, and $\beta_i$ represents the noise schedule used to regulate the quantity of noise added at each timestep. In the reverse denoising process, $x_T$ is first sampled from Equation (1), and $x_{t-1}$ is reconstructed from $x_t$ and the model prediction $\epsilon_\theta(x_t, t)$ with the formulation:

$$x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right) + \sigma_t z, \tag{2}$$

where $z \sim \mathcal{N}(0, I)$, $\sigma_t$ is a fixed constant related to the variance schedule, $\epsilon_\theta(x_t, t)$ is a U-Net (Ronneberger, Fischer, and Brox 2015) network that predicts the noise, and $\theta$ is the learnable parameter, which can be optimized as:

$$\min_\theta\, \mathbb{E}_{x_0 \sim q(x_0),\, \epsilon \sim \mathcal{N}(0, I),\, t}\, \|\epsilon - \epsilon_\theta(x_t, t)\|_2^2. \tag{3}$$

Latent Diffusion Model. The Latent Diffusion Model (LDM) operates in a low-dimensional latent space with conditioning mechanisms. LDM consists of a pre-trained autoencoder model and a denoising U-Net-like attention-based network. The network compresses images using an encoder, conducts diffusion and denoising operations in the latent representation space, and subsequently reconstructs the images back to the original pixel space using a decoder. The training optimization objective is:

$$L_{LDM} = \mathbb{E}_{z_0, t, c,\, \epsilon \sim \mathcal{N}(0, 1)} \left[ \|\epsilon - \epsilon_\theta(z_t, t, c)\|_2^2 \right], \tag{4}$$

where $c$ represents the conditioning mechanism, which can consist of multimodal inputs such as text or images, connected to the model through a cross-attention mechanism, and $z_t$ represents the latent space variable.
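To ground the preliminaries above, here is a minimal PyTorch sketch of the forward process in Equation (1) and the noise-prediction objective in Equation (3). It is a hedged illustration rather than DiAD's actual training code: `eps_model` is a placeholder for a denoising U-Net, the linear beta schedule is one common choice, and image tensors of shape (B, C, H, W) are assumed.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # noise schedule beta_t (assumed linear)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product \bar{alpha}_t

def diffuse(x0: torch.Tensor, t: torch.Tensor):
    """Eq. (1): x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    eps = torch.randn_like(x0)
    ab = alphas_bar[t].view(-1, 1, 1, 1)         # broadcast over (B, C, H, W)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps, eps

def ddpm_loss(eps_model, x0: torch.Tensor) -> torch.Tensor:
    """Eq. (3): MSE between the true noise and the predicted noise."""
    t = torch.randint(0, T, (x0.shape[0],))      # random timestep per sample
    xt, eps = diffuse(x0, t)
    return ((eps - eps_model(xt, t)) ** 2).mean()
```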
First, the pre-trained encoder downsamples the input image into a latent-space representation. Then, noise is added to the latent representation, followed by the denoising process using the SD denoising network with a connection to the SG network. The denoising process is repeated for the same number of timesteps as the diffusion process. Finally, the reconstructed latent representation is restored to the original image level by the pre-trained decoder. For anomaly detection and localization, the input and reconstructed images are fed into the same pre-trained model to extract features at different scales, and the differences between these features are calculated.

Semantic-Guided Network

As discussed earlier, DDPM and LDM each have specific problems when addressing multi-class anomaly detection tasks. In response to these issues and to the multi-class task itself, we propose the SG network to address LDM's inability to effectively reconstruct anomalies while preserving the semantic information of the input image. Given an input image $x_0 \in \mathbb{R}^{3 \times H \times W}$ in pixel space, the pre-trained encoder $\mathcal{E}$ encodes $x_0$ into a latent-space representation $z = \mathcal{E}(x_0)$, where $z \in \mathbb{R}^{c \times h \times w}$. Analogously to Eq. (1), with the pixel-space variable $x$ replaced by the latent representation $z$, the forward diffusion process is now

$$z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, I). \qquad (5)$$

The perturbed representation $z_T$ and the input $x_0$ are simultaneously fed into the SD denoising network and the SG network, respectively. After $T$ steps of the reverse denoising process, the final variable $\hat{z}$ is restored to the reconstructed image $\hat{x}_0$ by the pre-trained decoder $\mathcal{D}$, giving $\hat{x}_0 = \mathcal{D}(\hat{z})$. The training objective of DiAD is

$$\mathcal{L}_{DiAD} = \mathbb{E}_{z_0, t, c_i,\, \epsilon \sim \mathcal{N}(0,1)} \left[ \|\epsilon - \epsilon_\theta(z_t, t, c_i)\|_2^2 \right]. \qquad (6)$$

The denoising network consists of a pre-trained SD denoising network and an SG network that replicates the SD parameters for initialization, as shown in Fig. 2. The pre-trained SD denoising network comprises four encoder blocks, one middle block, and four decoder blocks. Here, 'block' denotes a frequently utilized unit in the construction of neural network layers, e.g., a 'resnet' block, a transformer block, a multi-head cross-attention block, etc. The input image $x_0 \in \mathbb{R}^{3 \times H \times W}$ is transformed into $x \in \mathbb{R}^{d \times h \times w}$ by a set of 'conv-silu' layers $C$ in the SG network in order to match the dimension of the latent representation in SD Encoder Block 1 ($E_{SD_1}$). The sum of $x$ and $z$ is then fed into the SG Encoder Blocks (SGEBs). After continuous downsampling by the encoder $E_{SG}$, the result is added to the output of the SD middle block $M_{SD}$ in the SG middle block $M_{SG}$. Additionally, to address multi-class tasks across different scenarios and categories, the outputs of the SG Decoder Blocks (SGDBs) $D_{SG}$ are also added to the outputs of the SD decoder $D_{SD}$ through an SFF block, which is explained in the next section. The output $G$ of the denoising network is characterized as

$$G = D_{SD}\big(M_{SD}(E_{SD}(z_t)) + M_{SG}(E_{SG}(z + C(x_0)))\big) + D_{SG_j}\big(M_{SG}(E_{SG}(z + C(x_0)))\big), \qquad (7)$$

where $z$ represents the noise-perturbed latent representation, $x_0$ represents the input image, $C(\cdot)$ represents a set of 'conv-silu' layers in the SG network, $E_{SD}(\cdot)$ represents all the SD encoder blocks (SDEBs), $E_{SG}(\cdot)$ represents all the SGEBs, $M_{SG}(\cdot)$ and $M_{SD}(\cdot)$ represent the SG and SD middle blocks, respectively, $D_{SD}(\cdot)$ represents all the SDDBs, and $D_{SG_j}(\cdot)$ represents the $j$-th SGDB.
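The composition in Eq. (7) can be sketched schematically as follows. All module arguments are placeholders for the SD/SG blocks of Fig. 2, and the per-block SFF fusion on the decoder side is collapsed into a single addition here, so this is a structural sketch under stated assumptions rather than the authors' implementation.

```python
import torch.nn as nn

class DenoisingNetworkSketch(nn.Module):
    """Schematic of Eq. (7): frozen SD branch plus trainable SG branch."""
    def __init__(self, sd_enc, sd_mid, sd_dec, sg_conv, sg_enc, sg_mid, sg_dec):
        super().__init__()
        self.sd_enc, self.sd_mid, self.sd_dec = sd_enc, sd_mid, sd_dec       # frozen SD blocks
        self.sg_conv, self.sg_enc, self.sg_mid, self.sg_dec = sg_conv, sg_enc, sg_mid, sg_dec

    def forward(self, z_t, z, x0):
        # SG branch: project x0 with the 'conv-silu' layers C, add the latent z, then encode.
        sg_feats = self.sg_enc(z + self.sg_conv(x0))       # E_SG(z + C(x0))
        sg_mid = self.sg_mid(sg_feats)                     # M_SG(E_SG(z + C(x0)))
        # SD branch: encode the noisy latent and fuse the two branches at the middle block.
        fused_mid = self.sd_mid(self.sd_enc(z_t)) + sg_mid  # M_SD(E_SD(z_t)) + M_SG(...)
        # Decoders: SG decoder outputs are added to the SD decoder path (via SFF in DiAD).
        return self.sd_dec(fused_mid) + self.sg_dec(sg_mid)
```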
Spatial-Aware Feature Fusion Block

When adding several layers of decoder blocks from the SGEBs to the SDDBs in our experiments, as shown in Table 5, we found multi-class anomaly detection challenging to solve. This is because the dataset contains various types of data, such as objects and textures. For texture-related cases, the anomalies are generally smaller, so it is necessary to preserve the original textures. For object-related cases, on the other hand, the defects often cover larger areas, requiring stronger reconstruction capabilities. It is therefore extremely challenging to simultaneously preserve the normal information of the original samples and reconstruct the abnormal locations in different scenarios. Hence, we propose a Spatial-aware Feature Fusion (SFF) block with the aim of integrating high-scale semantic information into the low scale. This ultimately enables the model to both preserve the information of the original normal samples and reconstruct large-scale abnormal regions. The structure of the SFF block is shown in Fig. 3, where each 'Conv Block' consists of a 3×3 Conv2d, a normalization layer, and an activation.

Figure 3: Schematic diagram of the SFF block. Each layer in SGDB4 is obtained by adding the corresponding SGEB4 layer to every SGEB3 layer after a Conv Block is applied.

Each SGEB consists of three sub-layers. The SFF block therefore integrates the features of each layer in SGEB3 into each layer in SGEB4 and adds the fused features to the original features; the final output of each layer of SGDB4 is the corresponding SGEB4 feature plus the Conv-Block-transformed features of every SGEB3 layer. As Batch Normalization (BN) (Ioffe and Szegedy 2015) computes normalization statistics over all images within a batch, it loses the unique details of each sample. BN is suitable for relatively large mini-batches with similar data distributions; however, for multi-class anomaly detection, where data distributions differ significantly among categories, normalizing the entire batch is unsuitable. Since the results generated with SD depend mainly on the input image instance, using Instance Normalization (IN) (Ulyanov, Vedaldi, and Lempitsky 2017) not only accelerates model convergence but also maintains the independence of each image instance. In addition, for the activation function we use SiLU (Elfwing, Uchibe, and Doya 2018) instead of the commonly used ReLU (Hahnloser et al. 2000), which preserves more input information. Experimental results in Table 5 show that performance improves when IN and SiLU are used together instead of the combination of BN and ReLU.
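A minimal PyTorch sketch of this fusion pattern is given below, assuming each 'Conv Block' is a 3×3 convolution followed by IN and SiLU (per the ablation above); the average-pool downsampling used to align SGEB3 features with the SGEB4 resolution is our assumption.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    # 'Conv Block' in Fig. 3: 3x3 Conv2d + Instance Norm + SiLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.SiLU(),
    )

class SFFBlockSketch(nn.Module):
    """Fuses every SGEB3 feature into each SGEB4 feature (a sketch of Fig. 3)."""
    def __init__(self, c3, c4, n3=3, n4=3):
        super().__init__()
        # One conv block per (SGEB4 layer, SGEB3 layer) pair.
        self.fuse = nn.ModuleList([
            nn.ModuleList([conv_block(c3, c4) for _ in range(n3)]) for _ in range(n4)
        ])
        self.down = nn.AvgPool2d(2)  # assumed spatial alignment from SGEB3 to SGEB4 scale

    def forward(self, feats3, feats4):
        # feats3 / feats4: lists of the three sub-layer outputs of SGEB3 / SGEB4.
        out = []
        for i, f4 in enumerate(feats4):
            fused = f4
            for j, f3 in enumerate(feats3):
                fused = fused + self.fuse[i][j](self.down(f3))
            out.append(fused)  # these become the SGDB4 features added to the SD decoder path
        return out
```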
Anomaly Localization and Detection

During the inference stage, the reconstructed image is obtained through the diffusion and denoising process in the latent space. For anomaly localization and detection, we use the same ImageNet pre-trained feature extractor Ψ to extract features from both the input image $x_0$ and the reconstructed image $\hat{x}_0$, and calculate the anomaly map on feature maps $M_n$ of different scales using cosine similarity:

$$M_n(x_0, \hat{x}_0) = 1 - \frac{\Psi_n(x_0)^{T} \cdot \Psi_n(\hat{x}_0)}{\|\Psi_n(x_0)\|\, \|\Psi_n(\hat{x}_0)\|}, \qquad (8)$$

where $n$ denotes the $n$-th feature layer $f_n$. The anomaly score $S$ of an input pair for anomaly localization is

$$S = \sum_{n \in N} \sigma_n M_n(x_0, \hat{x}_0), \qquad (9)$$

where $\sigma_n$ denotes the upsampling operation that restores the map to the pixel-space resolution and $N$ denotes the set of feature layers used during inference.

Category | PaDiM | DRAEM | RD4AD | UniAD | DDPM | LDM | Ours
Bottle | 97.9 | 97.5/99.2/96.1 | 99.6/99.9/98.4 | 99.7/100./100. | 63.6/71.8/86.3 | 93.8/98.7/93.7 | 99.7/96.5/91.8
Cable | 70.9 | 57.8/74.0/76.3 | 84.1/89.5/82.5 | 95.2/95.9/88.0 | 55.6/69.7/76.0 | 55.7/74.8/77.7 | 94.8/98.8/95.2
Capsule | 73.4 | 65.3/92.5/90.4 | 94.1/96.9/96.9 | 86.9/97.8/94.4 | 52.9/82.0/90.5 | 60.5/81.4/90.5 | 89.0/97.5/95.5
Hazelnut | 85.5 | 93.7/97.5/92.3 | 60.8/69.8/86.4 | 99.8/100./99.3 | 87.0/90.4/88.1 | 93.0/95.8/89.8 | 99.5/99.7/97.3
Metal Nut | 88.0 | 72.8/95.0/92.0 | 100./100./99.5 | 99.2/99.9/99.5 | 60.0/74.4/89.4 | 53.0/80.1/89.4 | 99.1/96.0/91.6
Pill | 68.8 | 82.2/94.9/92.4 | 97.5/99.6/96.8 | 93.7/98.7/95.7 | 55.8/84.0/91.6 | 62.1/93.1/91.6 | 95.7/98.5/94.5
Screw | 56.9 | 92.0/95.7/89.9 | 97.7/99.3/95.8 | 87.5/96.5/89.0 | 53.6/71.9/85.9 | 58.7/81.9/85.6 | 90.7/99.7/97.9
Toothbrush | 95.3 | 90.6/96.8/90.0 | 97.2/99.0/94.7 | 94.2/97.4/95.2 | 57.5/68.0/83.3 | 78.6/83.9/83.3 | 99.7/99.9/99.2
Transistor | 86.6 | 74.8/77.4/71.1 | 94.2/95.2/90.0 | 99.8/98.0/93.8 | 57.8/44.6/57.1 | 61.0/57.8/59.1 | 99.8/99.6/97.4
Zipper | 79.7 | 98.8/99.9/99.2 | 99.5/99.9/99.2 | 95.8/99.5/97.1 | 64.9/77.4/88.1 | 73.6/89.5/90.6 | 95.1/99.1/94.4
Carpet | 93.8 | 98.0/99.1/96.7 | 98.5/99.6/97.2 | 99.8/99.9/99.4 | 95.5/98.7/91.0 | 99.4/99.8/99.4 | 99.4/99.9/98.3
Grid | 73.9 | 99.3/99.7/98.2 | 98.0/99.4/96.5 | 98.2/99.5/97.3 | 83.5/93.9/86.9 | 67.3/82.6/84.4 | 98.5/99.8/97.7
Leather | 99.9 | 98.7/99.3/95.0 | 100./100./100. | 100./100./100. | 98.4/99.5/96.3 | 97.4/99.0/96.3 | 99.8/99.7/97.6
Tile | 93.3 | 99.8/100./100. | 98.3/99.3/96.4 | 99.3/99.8/98.2 | 93.6/97.5/92.0 | 97.1/98.7/94.1 | 96.8/99.9/98.4
Wood | 98.4 | 99.8/100./100. | 99.2/99.8/98.3 | 98.6/99.6/96.6 | 98.6/99.6/97.5 | 97.8/99.4/95.9 | 99.7/100./100.
Mean | 84.2 | 88.1/94.7/92.0 | 94.6/96.5/95.2 | 96.5/98.8/96.2 | 71.9/81.6/86.6 | 76.6/87.8/88.1 | 97.2/99.0/96.5

Table 1: Image-level multi-class anomaly classification results with AUROC-cls/AP-cls/F1max-cls metrics on MVTec-AD (Bottle through Zipper are objects; Carpet through Wood are textures; PaDiM entries list a single value as extracted). PaDiM, DRAEM, RD4AD, and UniAD are non-diffusion methods; DDPM, LDM, and Ours are diffusion-based.

Metrics | DRAEM | UniAD | DDPM | LDM | Ours
AUROC-cls | 79.1 | 85.5 | 54.5 | 56.7 | 86.8
AP-cls | 81.9 | 85.5 | 57.9 | 61.4 | 88.3
F1max-cls | 78.9 | 84.4 | 72.3 | 73.1 | 85.1
AUROC-seg | 91.3 | 95.9 | 79.7 | 86.6 | 96.0
AP-seg | 23.5 | 21.0 | 2.2 | 6.0 | 26.1
F1max-seg | 29.5 | 27.0 | 4.5 | 9.9 | 33.0
PRO | 58.8 | 75.6 | 46.8 | 55.0 | 75.2

Table 2: Quantitative comparisons on the VisA dataset (DRAEM and UniAD are non-diffusion methods; DDPM, LDM, and Ours are diffusion-based).

Experiment

Datasets and Evaluation Metrics

MVTec-AD Dataset. The MVTec-AD dataset (Bergmann et al. 2019) simulates real-world industrial production scenarios, filling a gap in unsupervised anomaly detection. It consists of 5 types of textures and 10 types of objects, in 5,354 high-resolution images from different domains. The training set contains 3,629 images with only anomaly-free samples. The test set consists of 1,725 images, including both normal and abnormal samples. Pixel-level annotations are provided for evaluating anomaly localization.

VisA Dataset. The VisA dataset (Zou et al. 2022) consists of 10,821 high-resolution images in total, including 9,621 normal images and 1,200 anomalous images covering 78 types of anomalies. It comprises 12 subsets, each corresponding to a distinct object, which can be categorized into three object types: complex structure, multiple instances, and single instance.

Evaluation Metrics.
Following prior works, the Area Under the Receiver Operating Characteristic Curve (AUROC), Average Precision (AP), and F1-score-max (F1max) are used for both anomaly detection and anomaly localization, where the subscript cls denotes image-level anomaly detection and seg denotes pixel-level anomaly localization. Per-Region Overlap (PRO) is also used for anomaly localization.

Implementation Details

All images in MVTec-AD and VisA are resized to 256 × 256. For the denoising network, we adopt the 4th block of the SGDB for connection to the SDDB. In this experiment, we adopt ResNet50 as the feature extraction network and choose $n \in \{2, 3, 4\}$ as the feature layers used in calculating the anomaly localization. We utilize the KL-regularized autoencoder and fine-tune it before training the denoising network. We train for 1,000 epochs on a single NVIDIA Tesla V100 32GB with a batch size of 12. The Adam optimizer (Loshchilov and Hutter 2019) is used with a learning rate of 1e-5. A Gaussian filter with σ = 5 is used to smooth the anomaly localization score. For anomaly detection, the anomaly score of an image is the maximum value of the anomaly localization score after 8 rounds of 8 × 8 average pooling. During inference, the initial denoising timestep T is set to 1,000. We use DDIM (Song, Meng, and Ermon 2021) as the sampler with 10 steps by default.

Category | PaDiM | DRAEM | RD4AD | UniAD | DDPM | LDM | Ours
Bottle | 96.1 | 87.6/62.5/56.9 | 97.8/68.2/67.6 | 98.1/66.0/69.2 | 59.9/4.9/11.7 | 86.9/49.1/50.0 | 98.4/52.2/54.8
Cable | 81.0 | 71.3/14.7/17.8 | 85.1/26.3/33.6 | 97.3/39.9/45.2 | 66.5/6.7/10.6 | 89.3/18.5/26.2 | 96.8/50.1/57.8
Capsule | 96.9 | 50.5/6.0/10.0 | 98.8/43.4/50.0 | 98.5/42.7/46.5 | 63.1/6.2/9.7 | 90.0/7.9/27.3 | 97.1/42.0/45.3
Hazelnut | 96.3 | 96.9/70.0/60.5 | 97.9/36.2/51.6 | 98.1/55.2/56.8 | 91.2/24.1/28.3 | 95.1/51.2/53.5 | 98.3/79.2/80.4
Metal Nut | 84.8 | 62.2/31.1/21.0 | 93.8/62.3/65.4 | 94.8/55.5/66.4 | 62.7/14.6/29.2 | 70.5/19.3/30.7 | 97.3/30.0/38.3
Pill | 87.7 | 94.4/59.1/44.1 | 97.5/63.4/65.2 | 95.0/44.0/53.9 | 55.3/4.0/8.4 | 74.9/10.2/15.0 | 95.7/46.0/51.4
Screw | 94.1 | 95.5/33.8/40.6 | 99.4/40.2/44.6 | 98.3/28.7/37.6 | 91.1/1.8/3.8 | 91.7/2.2/4.6 | 97.9/60.6/59.6
Toothbrush | 95.6 | 97.7/55.2/55.8 | 99.0/53.6/58.8 | 98.4/34.9/45.7 | 76.9/4.0/7.7 | 93.7/20.4/9.8 | 99.0/78.7/72.8
Transistor | 92.3 | 64.5/23.6/15.1 | 85.9/42.3/45.2 | 97.9/59.5/64.6 | 53.2/5.8/11.4 | 85.5/25.0/30.7 | 95.1/15.6/31.7
Zipper | 94.8 | 98.3/74.3/69.3 | 98.5/53.9/60.3 | 96.8/40.1/49.9 | 67.4/3.5/7.6 | 66.9/5.3/7.4 | 96.2/60.7/60.0
Carpet | 97.6 | 98.6/78.7/73.1 | 99.0/58.5/60.4 | 98.5/49.9/51.1 | 89.2/18.8/44.3 | 99.1/70.6/66.0 | 98.6/42.2/46.4
Grid | 71.0 | 98.7/44.5/46.2 | 99.2/46.0/47.4 | 96.5/23.0/28.4 | 63.1/0.7/1.9 | 52.4/1.1/1.9 | 96.6/66.0/64.1
Leather | 84.8 | 97.3/60.3/57.4 | 99.3/38.0/45.1 | 98.8/32.9/34.4 | 97.3/38.9/43.2 | 99.0/45.9/44.0 | 98.8/56.1/62.3
Tile | 80.5 | 98.0/93.6/86.0 | 95.3/48.5/60.5 | 91.8/42.1/50.6 | 87.0/35.2/36.6 | 90.1/43.9/51.6 | 92.4/65.7/64.1
Wood | 89.1 | 96.0/81.4/74.6 | 95.3/47.8/51.0 | 93.2/37.2/41.5 | 84.7/30.9/37.3 | 92.3/44.1/46.6 | 93.3/43.3/43.5
Mean | 89.5 | 87.2/52.5/48.6 | 96.1/48.6/53.8 | 96.8/43.4/49.5 | 75.6/13.3/19.5 | 85.1/27.6/31.0 | 96.8/52.6/55.5

Table 3: Pixel-level multi-class anomaly segmentation results with AUROC-seg/AP-seg/F1max-seg metrics on MVTec-AD (Bottle through Zipper are objects; Carpet through Wood are textures; PaDiM entries list a single value as extracted).

Method | DRAEM | UniAD | DDPM | LDM | Ours
PRO | 71.1 | 90.4 | 49.0 | 66.3 | 90.7

Table 4: Multi-class anomaly segmentation results with the PRO metric on MVTec-AD.
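Combining Eqs. (8)-(9) with the implementation details above, anomaly scoring can be sketched as follows. Here `feats_in` and `feats_rec` stand for the multi-scale feature maps $\Psi_n(x_0)$ and $\Psi_n(\hat{x}_0)$ from the pre-trained ResNet50; the Gaussian smoothing with σ = 5 is omitted for brevity, and the stride-1 pooling is our assumption about the repeated 8 × 8 averaging.

```python
import torch
import torch.nn.functional as F

def anomaly_map(feats_in, feats_rec, out_size=256):
    """Eqs. (8)-(9): 1 - cosine similarity per scale, upsampled to pixel space and summed."""
    score = torch.zeros(feats_in[0].shape[0], 1, out_size, out_size)
    for f_in, f_rec in zip(feats_in, feats_rec):       # e.g. layers f2, f3, f4 of ResNet50
        m = 1.0 - F.cosine_similarity(f_in, f_rec, dim=1, eps=1e-8)   # (B, H, W)
        m = F.interpolate(m.unsqueeze(1), size=out_size, mode='bilinear',
                          align_corners=False)         # sigma_n: upsample to pixel resolution
        score = score + m
    return score

def image_score(pixel_map, rounds=8, k=8):
    """Image-level score: max of the repeatedly average-pooled localization map."""
    s = pixel_map
    for _ in range(rounds):                            # 8 rounds of 8x8 average pooling
        s = F.avg_pool2d(s, kernel_size=k, stride=1, padding=k // 2)
    return s.flatten(1).max(dim=1).values
```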
Comparison with SOTAs

We conduct and analyze a range of qualitative and quantitative comparison experiments on the MVTec-AD, VisA, MVTec3D, and medical datasets. We choose a synthesizing-based method DRAEM (Zavrtanik, Kristan, and Skočaj 2021a), two embedding-based methods PaDiM (Defard et al. 2021) and RD4AD (Deng and Li 2022), a reconstruction-based method EdgRec (Liu et al. 2022), the unified SOTA method UniAD (You et al. 2022), and the diffusion-based DDPM and LDM baselines. Specifically, we categorize the aforementioned methods into two types: non-diffusion and diffusion-based methods.

Qualitative Results. We conducted substantial qualitative experiments on the MVTec-AD and VisA datasets to visually demonstrate the superiority of our method in image reconstruction and the accuracy of anomaly localization. As shown in Figure 4, our method exhibits better reconstruction capabilities for anomalous regions compared to EdgRec on the MVTec-AD dataset. In comparison to UniAD, shown in Figure 5, our method exhibits more accurate anomaly localization on the VisA dataset.

Figure 4: Qualitative illustration on the MVTec-AD dataset.

Quantitative Results. As shown in Table 1 and Table 3, our method achieves SOTA AUROC/AP/F1max metrics of 97.2/99.0/96.5 image-wise and 96.8/52.6/55.5 pixel-wise in the multi-class setting on the MVTec-AD dataset. Among the diffusion-based methods, our approach significantly outperforms the existing DDPM and LDM methods, by 11.7↑ in AUROC and 25↑ in AP for anomaly localization. Among non-diffusion methods, our approach surpasses existing methods in both metrics, especially at the pixel level, where it exceeds UniAD by 9.2↑/6.0↑ in AP/F1max. Our method also demonstrates its superiority on the VisA dataset, as shown in Table 2: it improves over the LDM method by 30.1↑/9.4↑ in image-/pixel-level AUROC, and compares favorably with UniAD by 4.9↑/6.0↑ in pixel-level AP/F1max.

Ablation Studies

The Architecture Design of DiAD. We investigate the importance of each module in DiAD, as shown in Table 5. SD indicates only the diffusion model without connection to the SG network, which is the LDM architecture. MSG indicates only the middle block of the SG network being added to the middle of SD. SGEB3 and SGEB4 indicate direct skip-connections to the corresponding SDDB. When connecting SGDB3 and SGDB4 at the same time, more details of the original images are preserved in terms of texture, but the reconstruction ability for large anomaly areas decreases. Using the combination of IN+SiLU in the SFF block yields better results than using BN+ReLU.

Module | (1) | (2) | (3) | (4) | (5) | (6)
SD | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
MSG | | ✓ | ✓ | ✓ | ✓ | ✓
SGEB3 | | | ✓ | ✓ | ✓ | ✓
SGEB4 | | | | ✓ | |
BN+ReLU | | | | | ✓ |
IN+SiLU | | | | | | ✓
AUROC-cls | 79.3 | 95.1 | 95.3 | 93.8 | 96.7 | 97.2
AUROC-seg | 89.5 | 91.1 | 89.1 | 91.2 | 96.7 | 96.8

Table 5: Ablation studies on the design of DiAD with AUROC metrics.

Effect of Pre-trained Feature Extractors. Table 6 shows the quantitative comparison of different pre-trained feature extraction networks. ResNet50 achieved the best performance in anomaly classification metrics, while WideResNet101 excelled in anomaly segmentation.
Backbone | Variant | AUROC-cls | AUROC-seg | PRO
VGG | 16 | 91.8 | 92.1 | 80.1
VGG | 19 | 91.3 | 92.3 | 80.4
ResNet | 18 | 94.7 | 96.0 | 89.1
ResNet | 34 | 95.2 | 96.2 | 89.6
ResNet | 50 | 97.2 | 96.8 | 90.7
ResNet | 101 | 96.2 | 96.9 | 91.2
WideResNet | 50 | 95.9 | 96.4 | 89.3
WideResNet | 101 | 95.6 | 96.9 | 91.4
EfficientNet | b0 | 93.5 | 94.0 | 84.0
EfficientNet | b2 | 94.2 | 94.1 | 84.2
EfficientNet | b4 | 92.8 | 93.6 | 83.5

Table 6: Ablation studies on different feature extractors.

Effect of Feature Layers Used in Anomaly Score Calculation. After extracting feature maps of 5 different scales with a pre-trained backbone, the anomaly scores are calculated by computing the cosine similarity between feature maps from different layers. The experimental results, as shown in the Appendix, indicate that using feature maps from layers f2, f3, and f4 (with corresponding sizes of 64 × 64, 32 × 32, and 16 × 16) yields the best performance.

Effect of Forward Diffusion Timesteps. Increasing the number of diffusion steps in the forward process impacts the performance of image reconstruction. The experimental results, depicted in Figure 6, indicate that with an increasing number of forward diffusion steps, the image approaches pure Gaussian noise, while the anomaly reconstruction ability improves as well. Nevertheless, when the number of forward diffusion steps is less than 600, a significant decline in performance occurs because the number of steps is insufficient for anomaly reconstruction.

Figure 5: Qualitative results on the VisA dataset (columns: Input, Ours Rec., GT, UniAD Loc., Ours Loc.).

Figure 6: Ablation studies on different diffusion timesteps.

Conclusion

This paper proposes a diffusion-based framework, DiAD, to address the issue of category and semantic loss in the Stable Diffusion model for multi-class anomaly detection. We propose the Semantic-Guided network and the Spatial-aware Feature Fusion block to better reconstruct abnormal regions while maintaining the same semantic information as the input image. Our approach achieves state-of-the-art performance on the MVTec-AD and VisA datasets, significantly outperforming both non-diffusion and diffusion-based methods.

Limitation. Although our method has demonstrated exceptional performance in reconstructing anomalies, it can be susceptible to the influence of background impurities, resulting in errors in localization and classification. In the future, we will further explore diffusion models and enhance the background's anti-interference capability for multi-class anomaly detection. Additionally, we will incorporate multimodal assistance in our anomaly detection. Lastly, we will utilize larger models to enhance reconstruction performance.

Acknowledgments

This work was supported by the Jianbing Lingyan Foundation of Zhejiang Province, P.R. China (Grant No. 2023C01022).

References

Amit, T.; Shaharbany, T.; Nachmani, E.; and Wolf, L. 2022. SegDiff: Image Segmentation with Diffusion Probabilistic Models. arXiv:2112.00390.
Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. MVTec AD - A comprehensive real-world dataset for unsupervised anomaly detection. In CVPR, 9592-9600.
Cao, Y.; Wan, Q.; Shen, W.; and Gao, L. 2022. Informative knowledge distillation for image anomaly segmentation. Knowledge-Based Systems, 248: 108846.
Cao, Y.; Xu, X.; Sun, C.; Cheng, Y.; Du, Z.; Gao, L.; and Shen, W. 2023. Segment Any Anomaly without Training via Hybrid Prompt Regularization. arXiv preprint arXiv:2305.10724.
Chen, R.; Xie, G.; Liu, J.; Wang, J.; Luo, Z.; Wang, J.; and Zheng, F. 2023a. EasyNet: An easy network for 3D industrial anomaly detection. In ACM MM, 7038-7046.
Chen, S.; Sun, P.; Song, Y.; and Luo, P. 2022. DiffusionDet: Diffusion Model for Object Detection. arXiv:2211.09788.
Chen, X.; Han, Y.; and Zhang, J. 2023. A Zero-/Few-Shot Anomaly Classification and Segmentation Method for CVPR 2023 VAND Workshop Challenge Tracks 1&2: 1st Place on Zero-shot AD and 4th Place on Few-shot AD. arXiv preprint arXiv:2305.17382.
Chen, X.; Zhang, J.; Tian, G.; He, H.; Zhang, W.; Wang, Y.; Wang, C.; Wu, Y.; and Liu, Y. 2023b. CLIP-AD: A Language-Guided Staged Dual-Path Model for Zero-shot Anomaly Detection. arXiv preprint arXiv:2311.00453.
Defard, T.; Setkov, A.; Loesch, A.; and Audigier, R. 2021. PaDiM: a patch distribution modeling framework for anomaly detection and localization. In ICPR, 475-489. Springer.
Deng, H.; and Li, X. 2022. Anomaly detection via reverse distillation from one-class embedding. In CVPR, 9737-9746.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In CVPR, 248-255. IEEE.
Ding, C.; Pang, G.; and Shen, C. 2022. Catching both gray and black swans: Open-set supervised anomaly detection. In CVPR, 7388-7398.
Elfwing, S.; Uchibe, E.; and Doya, K. 2018. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107: 3-11.
Gu, Z.; Liu, L.; Chen, X.; Yi, R.; Zhang, J.; Wang, Y.; Wang, C.; Shu, A.; Jiang, G.; and Ma, L. 2023. Remembering Normality: Memory-guided Knowledge Distillation for Unsupervised Anomaly Detection. In ICCV, 16401-16409.
Hahnloser, R. H.; Sarpeshkar, R.; Mahowald, M. A.; Douglas, R. J.; and Seung, H. S. 2000. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789): 947-951.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770-778.
Ho, J.; Chan, W.; Saharia, C.; Whang, J.; Gao, R.; Gritsenko, A.; Kingma, D. P.; Poole, B.; Norouzi, M.; Fleet, D. J.; and Salimans, T. 2022. Imagen Video: High Definition Video Generation with Diffusion Models. arXiv:2210.02303.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. In NeurIPS, volume 33, 6840-6851.
Huang, C.; Guan, H.; Jiang, A.; Zhang, Y.; Spratling, M.; and Wang, Y.-F. 2022. Registration based few-shot anomaly detection. In ECCV, 303-319. Springer.
Ioffe, S.; and Szegedy, C. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Bach, F. R.; and Blei, D. M., eds., ICML, volume 37 of JMLR Workshop and Conference Proceedings, 448-456. JMLR.org.
Jeong, J.; Zou, Y.; Kim, T.; Zhang, D.; Ravichandran, A.; and Dabeer, O. 2023. WinCLIP: Zero-/few-shot anomaly classification and segmentation. In CVPR, 19606-19616.
Kingma, D. P.; and Welling, M. 2022. Auto-Encoding Variational Bayes. arXiv:1312.6114.
Li, C.-L.; Sohn, K.; Yoon, J.; and Pfister, T. 2021. CutPaste: Self-supervised learning for anomaly detection and localization. In CVPR, 9664-9674.
Liang, Y.; Zhang, J.; Zhao, S.; Wu, R.; Liu, Y.; and Pan, S. 2023. Omni-frequency channel-selection representations for unsupervised anomaly detection. IEEE Transactions on Image Processing.
Liu, J.; Xie, G.; Wang, J.; Li, S.; Wang, C.; Zheng, F.; and Jin, Y. 2023. Deep Industrial Image Anomaly Detection: A Survey. arXiv preprint arXiv:2301.11514, 2.
Liu, T.; Li, B.; Zhao, Z.; Du, X.; Jiang, B.; and Geng, L. 2022. Reconstruction from edge image combined with color and gradient difference for industrial surface anomaly detection. arXiv:2210.14485.
Liznerski, P.; Ruff, L.; Vandermeulen, R. A.; Franks, B. J.; Kloft, M.; and Müller, K. 2021. Explainable Deep One-Class Classification. In ICLR.
Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. arXiv:1711.05101.
Mousakhan, A.; Brox, T.; and Tayyub, J. 2023. Anomaly Detection with Conditioned Denoising Diffusion Models. arXiv:2305.15956.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-Resolution Image Synthesis with Latent Diffusion Models. arXiv:2112.10752.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 234-241. Springer.
Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; and Gehler, P. 2022. Towards total recall in industrial anomaly detection. In CVPR, 14318-14328.
Salehi, M.; Mirzaei, H.; Hendrycks, D.; Li, Y.; Rohban, M. H.; and Sabokrou, M. 2022. A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges. arXiv:2110.14051.
Salehi, M.; Sadjadi, N.; Baselizadeh, S.; Rohban, M. H.; and Rabiee, H. R. 2021. Multiresolution knowledge distillation for anomaly detection. In CVPR, 14902-14912.
Song, J.; Meng, C.; and Ermon, S. 2021. Denoising Diffusion Implicit Models. In ICLR. OpenReview.net.
Tan, D. S.; Chen, Y.-C.; Chen, T. P.-C.; and Chen, W.-C. 2021. TrustMAE: A noise-resilient defect classification framework using memory-augmented auto-encoders with trust regions. In WACV, 276-285.
Tan, M.; and Le, Q. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML, 6105-6114. PMLR.
Tao, X.; Gong, X.; Zhang, X.; Yan, S.; and Adak, C. 2022. Deep Learning for Unsupervised Anomaly Localization in Industrial Images: A Survey. IEEE Transactions on Instrumentation and Measurement, 71: 1-21.
Ulyanov, D.; Vedaldi, A.; and Lempitsky, V. 2017. Instance Normalization: The Missing Ingredient for Fast Stylization. arXiv:1607.08022.
Wang, Y.; Peng, J.; Zhang, J.; Yi, R.; Wang, Y.; and Wang, C. 2023. Multimodal Industrial Anomaly Detection via Hybrid Fusion. In CVPR, 8032-8041.
Wu, J.; Li, J.; Zhang, J.; Zhang, B.; Chi, M.; Wang, Y.; and Wang, C. 2023. PVG: Progressive Vision Graph for Vision Recognition. arXiv preprint arXiv:2308.00574.
Wyatt, J.; Leach, A.; Schmon, S. M.; and Willcocks, C. G. 2022. AnoDDPM: Anomaly Detection with Denoising Diffusion Probabilistic Models using Simplex Noise. In CVPR Workshops 2022, New Orleans, LA, USA, June 19-20, 2022, 649-655. IEEE.
Xie, G.; Wang, J.; Liu, J.; Jin, Y.; and Zheng, F. 2023. Pushing the Limits of Few-shot Anomaly Detection in Industry Vision: GraphCore. In ICLR.
Yan, X.; Zhang, H.; Xu, X.; Hu, X.; and Heng, P. 2021. Learning Semantic Context from Normal Samples for Unsupervised Anomaly Detection. In AAAI, 3110-3118.
Yi, J.; and Yoon, S. 2020. Patch SVDD: Patch-level SVDD for Anomaly Detection and Segmentation. In ACCV.
Yoon, J.; Sohn, K.; Li, C.-L.; Arik, S. O.; Lee, C.-Y.; and Pfister, T. 2022. Self-supervise, Refine, Repeat: Improving Unsupervised Anomaly Detection. Transactions on Machine Learning Research.
You, Z.; Cui, L.; Shen, Y.; Yang, K.; Lu, X.; Zheng, Y.; and Le, X. 2022. A Unified Model for Multi-class Anomaly Detection. In NeurIPS, volume 35, 4571-4584.
Yu, J.; Zheng, Y.; Wang, X.; Li, W.; Wu, Y.; Zhao, R.; and Wu, L. 2021. FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows. arXiv:2111.07677.
Zagoruyko, S.; and Komodakis, N. 2016. Wide Residual Networks. In BMVC. BMVA Press.
Zavrtanik, V.; Kristan, M.; and Skočaj, D. 2021a. DRAEM - A discriminatively trained reconstruction embedding for surface anomaly detection. In ICCV, 8330-8339.
Zavrtanik, V.; Kristan, M.; and Skočaj, D. 2021b. Reconstruction by inpainting for visual anomaly detection. Pattern Recognition, 112: 107706.
Zhang, H.; Wang, Z.; Wu, Z.; and Jiang, Y.-G. 2023a. DiffusionAD: Denoising Diffusion for Anomaly Detection. arXiv:2303.08730.
Zhang, J.; Chen, X.; Xue, Z.; Wang, Y.; Wang, C.; and Liu, Y. 2023b. Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection. arXiv preprint arXiv:2311.02612.
Zhang, J.; Li, X.; Li, J.; Liu, L.; Xue, Z.; Zhang, B.; Jiang, Z.; Huang, T.; Wang, Y.; and Wang, C. 2023c. Rethinking Mobile Block for Efficient Attention-based Models. In ICCV, 1389-1400.
Zhang, J.; Li, X.; Wang, Y.; Wang, C.; Yang, Y.; Liu, Y.; and Tao, D. 2022. EATFormer: Improving vision transformer inspired by evolutionary algorithm. arXiv preprint arXiv:2206.09325.
Zhang, L.; and Agrawala, M. 2023. Adding Conditional Control to Text-to-Image Diffusion Models. arXiv:2302.05543.
Zou, Y.; Jeong, J.; Pemula, L.; Zhang, D.; and Dabeer, O. 2022. Spot-the-difference self-supervised pre-training for anomaly detection and segmentation. In ECCV, 392-408. Springer.
2024
942
18,787
ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection
Junwei He1,2, Qianqian Xu1*, Yangbangyan Jiang2, Zitai Wang3,4, Qingming Huang1,2,5*
1Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China
2School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China
3Institute of Information Engineering, CAS, Beijing, China
4School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
5Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China
{hejunwei22s, xuqianqian}@ict.ac.cn, [email protected], [email protected], [email protected]
*Corresponding Authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Graph anomaly detection is crucial for identifying nodes that deviate from regular behavior within graphs, benefiting various domains such as fraud detection and social networks. Although existing reconstruction-based methods have achieved considerable success, they may face the Anomaly Overfitting and Homophily Trap problems caused by abnormal patterns in the graph, breaking the assumption that normal nodes are often better reconstructed than abnormal ones. Our observations indicate that models trained on graphs with fewer anomalies exhibit higher detection performance. Based on this insight, we introduce a novel two-stage framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD). In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels. We pretrain graph autoencoders on these augmented graphs at multiple levels, which enables the graph autoencoders to capture normal patterns. In the next stage, the decoders are retrained for detection on the original graph, benefiting from the multi-level representations learned in the previous stage. Meanwhile, we propose the node anomaly distribution regularization to further alleviate Anomaly Overfitting. We validate the effectiveness of our approach through extensive experiments on both synthetic and real-world datasets.

Introduction

The goal of unsupervised graph anomaly detection (GAD) is to identify rare patterns that deviate from the majority patterns in a graph, which has been extensively applied in diverse domains, such as fraud detection (Abdallah, Maarof, and Zainal 2016; Cheng et al. 2020; Dou et al. 2020) and social networks (Fan, Zhang, and Li 2020; Duan et al. 2023). Recently, reconstruction-based Graph Neural Network (GNN) methods have achieved great success and have become the mainstream approach. The common assumption is that normal nodes are easier to reconstruct than abnormal nodes. On this basis, such methods usually train a graph autoencoder and determine anomalies according to the magnitude of the reconstruction errors. However, the anomalous patterns in the graph might hinder the performance of reconstruction-based methods in two
(a) Previous Reconstruction-based GAD Framework. (b) The Proposed Two-stage GAD Framework.

Figure 1: Workflow comparison. Previous reconstruction-based methods are trained on the contaminated graph. In contrast, our framework involves pretraining on anomaly-denoised graphs to reduce the impact of anomalous nodes.

ways. (1) Anomaly Overfitting: graphs in the real world are highly sparse, and powerful GNNs tend to overfit to anomalous features, leading to small reconstruction errors even for anomalies, which in turn can cause the model to fail. (2) Homophily Trap: most GNNs operate under the homophily assumption (Kipf and Welling 2016a), which suggests that connected nodes share similar features. The presence of anomalous nodes may therefore hinder the reconstruction of nearby normal nodes, such that the correspondingly magnified reconstruction errors bias the detection results. These phenomena are illustrated in Figure 1a: the reconstructed features of normal nodes 3 and 6 are influenced by their anomalous neighbors 4 and 5 due to Homophily Trap, while, owing to Anomaly Overfitting, nodes 4 and 5 are well-reconstructed, far from what we expect.

We conduct a simple experiment to verify the negative effects of the anomalous patterns. Specifically, the popular DOMINANT baseline (Ding et al. 2019) is trained on the Cora and CiteSeer datasets (Sen et al. 2008) under three settings: on the original graph containing no abnormal nodes, on the partially-contaminated graph with n/2 injected anomalies, and on the graph with fully-injected n anomalies. During the testing phase, all the models are tested on graphs with n anomalies. As shown in Figure 2, the model trained on the clean graph consistently outperforms the others under different numbers of injected anomalies. Moreover, even converting only half of the anomalies into clean data for training can improve performance. Namely, the less the training data is contaminated, the better the detection performance is.

Figure 2: Detection performance of the DOMINANT model on the Cora and CiteSeer datasets. The x-axis denotes the number of injected anomalies (n), while the y-axis shows the test results for models trained on graphs with clean (no anomalies), half-injected (n/2), or fully-injected (n) anomalies, all evaluated on graphs containing n injected anomalies. We see that the less the training data is contaminated, the better the performance is.

Figure 3: Real cases of Anomaly Overfitting and Homophily Trap on the Disney and Books datasets. Compared with DOMINANT, our ADA-GAD can effectively mitigate these issues.

Motivated by this, we hope to train the model on a graph as clean as possible. Since the ground-truth clean graph is not available, we need to find a way to reduce the anomaly level of the graph for training and effectively leverage the graph for detection. To this aim, we present a two-stage framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD), as illustrated in Figure 1(b). (1) Stage 1: We develop a learning-free augmentation method to obtain cleaner graphs, whose anomaly degree is quantified by a spectral-property-based metric. This anomaly-denoised augmentation technique generates three levels of augmented graphs by masking: node-level, edge-level, and subgraph-level. The corresponding anomaly-denoised autoencoders are pretrained on these augmented graphs using masking pretraining strategies, forcing the model to discover normal patterns. (2) Stage 2: We freeze the pretrained encoders and retrain the decoders from scratch to reconstruct the original graph for detection. We utilize an attention mechanism to aggregate the frozen multi-level representations, and introduce the node anomaly distribution regularization, which sharpens the anomaly score distribution of nodes to prevent Anomaly Overfitting. Subsequently, we identify anomalous nodes based on the magnitude of the reconstruction error. The efficacy of our ADA-GAD framework is visualized in Figure 3: in comparison with the previous method, ADA-GAD exhibits a significant reduction in the issues of Anomaly Overfitting and Homophily Trap.

The contributions of this paper are three-fold:
• To alleviate the phenomena of Anomaly Overfitting and Homophily Trap, we propose a two-stage graph anomaly detection framework, ADA-GAD, that first reduces the anomaly level of the graph for pretraining and then retrains the decoder for detection.
• In the pretraining stage, we present a metric to quantify the degree of anomaly in a graph. An anomaly-denoised augmentation strategy is then introduced to generate augmented graphs with lower anomaly degrees for multi-level masking pretraining.
• In the retraining stage, we design a regularization term to make the distribution of each node's anomaly score sharper, specifically to overcome the Anomaly Overfitting issue.
Extensive experiments on two synthetic and five real-world anomaly datasets demonstrate the superiority of the proposed method.
Related Work

Graph Neural Networks

Graph neural networks (GNNs) are widely used in various deep learning tasks, as they can process graph-structured data and learn both the structural and attributive information of graphs (Kipf and Welling 2016a; Veličković et al. 2017; Gupta, Matta, and Pant 2021), and they have achieved remarkable results in tasks such as social networks, recommendation systems, and bioinformatics (Zhou et al. 2020; Waikhom and Patgiri 2021). GNNs can be divided into two types: spectral-based and spatial-based (Zhu et al. 2021a). Spectral-based GNNs use spectral graph theory and rely on the Laplacian matrix of the graph, while spatial-based GNNs use the spatial information of the nodes and rely on message-passing mechanisms (Kipf and Welling 2016a; Xu et al. 2018). Typical spectral-based models include ChebNet (Defferrard, Bresson, and Vandergheynst 2016) and GCN (Kipf and Welling 2016a), while classical spatial-based GNNs include GAT (Veličković et al. 2017), GraphSAGE (Hamilton, Ying, and Leskovec 2017), GIN (Xu et al. 2018), and GraphSNN (Wijesinghe and Wang 2022).

Anomaly Detection on Static Attributed Graphs

Graph anomaly detection (Duan et al. 2023) aims to identify nodes that differ from the majority of nodes. Some progress has been made in anomaly detection on static attributed graphs. Non-deep-learning methods (Li et al. 2017; Peng et al. 2018) proposed techniques for detecting anomalous nodes in graphs based on matrix decomposition under the homophily assumption, which states that connected nodes have similar features. Moreover, the exploration of deep learning (Peng et al. 2018; Li et al. 2019; Bandyopadhyay et al. 2020) for graph anomaly detection is steadily increasing. DOMINANT (Ding et al. 2019) introduces GCN as a graph autoencoder to process both network structure and node attribute information. AnomalyDAE (Fan, Zhang, and Li 2020) uses GAT to encode network structure information. AEGIS (Ding et al. 2021) introduces an unsupervised inductive anomaly detection method that can be applied to new nodes. Chen et al. (2020) proposed using generative adversarial networks (GANs) (Goodfellow et al. 2019) to generate anomalous nodes to support anomaly detection, while (Liu et al. 2021; Xu et al. 2022; Huang et al. 2023) presented contrastive learning techniques for graph anomaly detection.

Graph Self-Supervised Learning

Graph self-supervised learning (GSSL) (Lee, Lee, and Park 2022) is an unsupervised approach that learns meaningful representations from graph data by constructing pretext tasks (Liu et al. 2022c). Three types of GSSL methods can be distinguished based on the pretext tasks: contrastive, generative, and predictive. Contrastive methods generate multiple views for each graph instance and learn graph representations by contrasting the similarities and differences between views (You et al. 2020; Sun et al. 2019; Zhu et al. 2021c; Zeng and Xie 2021). Generative methods employ autoencoders to reconstruct parts of the input graph (Zhu, Du, and Yan 2020; Manessi and Rozza 2021; He et al. 2022; Hou et al. 2022). Predictive methods (Wu et al. 2021; Jin et al. 2020; Peng et al. 2020) use statistical analysis or expert knowledge to generate pseudo-labels for graph data and then design prediction-based proxy tasks based on these pseudo-labels to learn graph representations.

Problem Definition

The primary focus of this work is to address the task of GAD in attributed networks.
Following previous studies (Ding et al. 2019; Liu et al. 2022b), we consider the unsupervised setting in this paper, i.e., learning without access to either node category labels or anomaly labels. An attributed network can be represented as $G = (V, E, X)$, where $V = \{v_1, \ldots, v_n\}$ is the set of $n$ nodes, $E$ is the set of $m$ edges, and $X \in \mathbb{R}^{n \times d}$ is the attribute matrix. The structural information can also be represented by a binary adjacency matrix $A \in \mathbb{R}^{n \times n}$: $A_{ij} = 1$ if there exists a connection between nodes $v_i$ and $v_j$, and $A_{ij} = 0$ otherwise. The graph Laplacian matrix is defined as $L = D - A$, where $D$ is the degree matrix. Given this attributed network, the aim of GAD is to identify nodes that deviate significantly from the majority in terms of both structural and attribute features. We attempt to formulate an anomaly function (Liu et al. 2022b) that assigns an anomaly score to each node $v_i$; nodes that exceed the predefined anomaly threshold $\lambda$ are classified as anomalous, while the others are considered normal. The anomalous nodes in an attributed graph can be categorized into two types (Ma et al. 2021):
• Structural anomalies refer to densely connected nodes or other connection patterns that deviate from the sparsely connected regular nodes.
• Contextual anomalies are nodes whose attributes exhibit significant differences compared to their neighboring nodes.

Methodology

Previous reconstruction-based GAD models usually consist of a graph encoder and two graph decoders. Specifically, the attribute decoder reconstructs the node attributes, and the structural decoder reconstructs the adjacency matrix. The resulting reconstruction errors from both decoders are combined to calculate anomaly scores for the nodes. These anomaly scores are then ranked, and nodes with higher scores are identified as anomalies. As discussed in the Introduction, directly reconstructing the original graph containing mixed anomalies will suffer from Anomaly Overfitting and Homophily Trap, degrading the detection performance. Ideally, training the graph autoencoders on a graph with fewer anomalies and utilizing it for detection is the best way to address this issue. However, this is infeasible in the unsupervised detection setting due to the absence of ground-truth anomaly information. Instead, we resort to an anomaly-denoised pretraining process which reduces the anomaly rate of the graph by augmentation and pretrains via reconstruction on the anomaly-denoised graph. After mitigating the negative impact of anomalies on the encoder, we freeze it and only retrain the decoder on the original graph before proceeding with the subsequent detection. This forms the two-stage framework in Figure 4.

Stage 1: Anomaly-Denoised Pretraining

This stage generates the anomaly-denoised graphs and pretrains the graph autoencoders on them so that the autoencoders can focus on the normal patterns. It paves the way for the subsequent anomaly detection stage by increasing the reconstruction error of anomalous nodes. For the anomaly-denoised augmentation, we need to ensure that the anomaly level of the augmented graph is lower than that of the original graph. A key question then arises: how do we quantify the anomaly level of a graph? Prior research has shown that the anomaly level of a signal $y$ on the graph $G$ relates to spectral statistics such as the High-frequency Area Energy $E_{high}$ (Tang et al. 2022; Gao et al. 2023):

$$E_{high}(G, y) = \frac{y^{T} L y}{y^{T} y}. \qquad (1)$$
As the anomaly ratio of the signal $y$ in the graph $G$ increases, $E_{high}(G, y)$ also increases, a phenomenon known as the 'right-shift' (Tang et al. 2022). Inspired by this, we define the anomaly degree at the attribute and structure levels based on the corresponding characteristics as follows.

Figure 4: Overall framework of ADA-GAD. In Stage 1, we pretrain the graph autoencoders at three levels using the anomaly-denoised augmentation to mitigate the negative impact of anomalous patterns in the graph. In Stage 2, we retrain decoders based on multi-level embeddings obtained from fixed encoders, together with a regularization to sharpen the anomaly score's distribution.

Definition 1 (Attribute Anomaly Magnitude) The attribute anomaly magnitude on the graph $G$ is defined as

$$A_{ano}(G) = E_{high}(G, X) = \frac{X^{T} L X}{X^{T} X}. \qquad (2)$$

Definition 2 (Structural Anomaly Magnitude) The structural anomaly magnitude on the graph $G$ is defined as

$$S_{ano}(G) = E_{high}(G, D) = \frac{D^{T} L D}{D^{T} D}. \qquad (3)$$

A larger $A_{ano}$ for $G$ indicates more pronounced variations in its attributes, implying a higher ratio of contextual anomalies; $S_{ano}$ similarly characterizes the ratio of structural anomalies.

Definition 3 (Graph Anomaly Magnitude) The anomaly magnitude on a graph $G$ is defined as the sum of the attribute anomaly magnitude and the structural anomaly magnitude: $G_{ano}(G) = A_{ano}(G) + S_{ano}(G)$.

Apparently, as the proportion of anomalous nodes within the graph increases, the graph anomaly magnitude also tends to rise. This provides a good measure for the anomaly-denoised augmentation process, whose objective is to minimize the anomaly level of the augmented graph $G' = (X', A')$ within predefined augmentation budgets:

$$\begin{aligned} \underset{A', X'}{\text{minimize}} \quad & G_{ano}(G'), \\ \text{subject to} \quad & \alpha \le G_{ano}(G) - G_{ano}(G') \le \beta, \\ & \|X - X'\|_F^2 \le \epsilon_1, \qquad \|A - A'\|_F^2 \le \epsilon_2, \end{aligned} \qquad (4)$$

where $\alpha > 0$ and $\beta > 0$ set the acceptable bounds for the anomaly rate reduction, and $\|\cdot\|_F$ denotes the Frobenius norm. Additionally, $\epsilon_1$ and $\epsilon_2$ are both small and denote the augmentation budgets for the attributes and the structure, respectively. However, because the graph's adjacency matrix is discrete, directly solving this optimization problem is very challenging (Zhu et al. 2021b). We therefore design a learning-free augmentation approach to achieve our goal: a simplified graph masking strategy that generates the augmented graph, with denoising pretraining conducted at three levels, namely node-level, edge-level, and subgraph-level.

Node-level denoising pretraining. For node-level anomaly denoising, we randomly select a subset of nodes $V_n \subset V$ for replacement-based masking with a probability of $p_r$. The masked node features are adjusted as follows:

$$\tilde{x}_i = \begin{cases} x_j, & v_i \in V_n \\ x_i, & v_i \notin V_n, \end{cases} \qquad (5)$$

where we randomly choose another node (denoted as $j$) and replace the original feature $x_i$ with $x_j$ if $v_i \in V_n$.
Additionally, we introduce a probabilistic mechanism in which the feature of each node $v_i \in V_n$ is randomly set to zero with a probability of $p_z$. After the augmentation, we calculate the corresponding graph anomaly magnitude and check whether it satisfies the constraint in Problem (4); the augmented graph is valid if the condition holds. We repeat the above augmentation steps multiple times, generating a collection of valid augmented graphs of length $l_n$, denoted as $C_n = \{G_1^n, G_2^n, \ldots, G_{l_n}^n\}$, where each $G_k^n = (V, E, X_k^n)$ satisfies $G_{ano}(G_k^n) \le \theta$, $X_k^n$ is the masked attribute matrix generated each time, and $\theta = G_{ano}(G) - \alpha$ is the anomaly degree threshold. For each $G_k^n$, we feed it into the node-level graph autoencoder consisting of the GNN encoder $Enc_1$ and the attribute decoder $Dec_1^{att}$ (Ding et al. 2019; Fan, Zhang, and Li 2020), and obtain the reconstructed features

$$\hat{X}_k^n = Dec_1^{att}(Enc_1(G_k^n)). \qquad (6)$$

In pretraining, the autoencoder should especially learn to reconstruct the feature $x_i$ of each masked node $v_i \in V_n$. The node-level reconstruction loss $\mathcal{L}_n$ is then the sum of reconstruction losses over $C_n$:

$$\mathcal{L}_n = \sum_{k=1}^{l_n} \|X_k^n - \hat{X}_k^n\|_F^2. \qquad (7)$$

Edge-level denoising pretraining. Similarly to the node-level pretraining, we randomly select a subset of edges $E^e$ from $E$ and apply masking with a probability of $q$, setting the corresponding entries of the adjacency matrix to zero. After multiple rounds of augmentation, we obtain a collection of $l_e$ edge-masked graphs, denoted as $C_e = \{G_1^e, G_2^e, \ldots, G_{l_e}^e\}$, where each $G_k^e = (V, E \setminus E_k^e, X)$ also fulfills the condition $G_{ano}(G_k^e) \le \theta$, with $E_k^e$ the edge subset generated each time. The edge-level autoencoder takes the edge-masked graph $G_k^e$ as input and aims to reconstruct the denoised graph structure $E \setminus E_k^e$. The reconstructed adjacency matrix for each augmented graph is

$$\hat{A}_k^e = Dec_2^{str}(Enc_2(G_k^e)), \qquad (8)$$

where $Enc_2$ and $Dec_2^{str}$ denote the GNN encoder and the structural decoder, respectively. The loss function of the edge-level autoencoder is

$$\mathcal{L}_e = \sum_{k=1}^{l_e} \|A_k^e - \hat{A}_k^e\|_F^2. \qquad (9)$$

Subgraph-level denoising pretraining. In addition, we propose a novel pretext task of subgraph masking pretraining. We employ random-walk-based subgraph sampling for masking, adopting node and edge masking strategies similar to the above. In the resulting augmented graph set $C_s = \{G_1^s, G_2^s, \ldots, G_{l_s}^s\}$, each $G_k^s = (V, E \setminus E_k^s, X_k^s)$ satisfies $G_{ano}(G_k^s) \le \theta$, where $E_k^s$ is the edge subset and $X_k^s$ the attribute matrix generated each time. Subgraph-level masking can be viewed as a specific combination of node- and edge-level masking. The reconstructed features and structure are

$$\hat{X}_k^s = Dec_3^{att}(Enc_3(G_k^s)), \qquad \hat{A}_k^s = Dec_3^{str}(Enc_3(G_k^s)), \qquad (10)$$

where $Enc_3$ is a GNN encoder, and $Dec_3^{att}$ and $Dec_3^{str}$ denote the attribute and structure decoders, respectively. The corresponding reconstruction loss is

$$\mathcal{L}_s = \mathcal{L}_{sn} + \mathcal{L}_{se}, \qquad (11)$$

where $\mathcal{L}_{sn}$ and $\mathcal{L}_{se}$ are computed by substituting $\hat{X}_k^s$ and $\hat{A}_k^s$ into Eq. (7) and Eq. (9), respectively. In Stage 1, all three procedures above are adopted simultaneously to pretrain the corresponding autoencoders. The various levels of denoising pretraining help the model discover the underlying normal node patterns.

Stage 2: Retraining for Detection

In this stage, the graph is no longer masked for learning.
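To make the accept/reject logic of Stage 1 concrete, here is a minimal NumPy sketch of the edge-level case. Treating the matrix quadratic forms in Eqs. (2)-(3) via traces, and the masking probability `p_edge`, are our assumptions; node- and subgraph-level masking follow the same pattern.

```python
import numpy as np

def high_freq_energy(L, y):
    """E_high(G, y) = (y^T L y) / (y^T y), Eq. (1); for matrix-valued signals we
    take traces, which is one reading of the paper's matrix notation."""
    return np.trace(y.T @ L @ y) / np.trace(y.T @ y)

def graph_anomaly_magnitude(A, X):
    """G_ano(G) = A_ano(G) + S_ano(G) (Definitions 1-3)."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                              # graph Laplacian L = D - A
    a_ano = high_freq_energy(L, X)                    # attribute anomaly magnitude, Eq. (2)
    s_ano = high_freq_energy(L, deg[:, None])         # structural anomaly magnitude, Eq. (3)
    return a_ano + s_ano

def edge_denoised_augmentations(A, X, theta, n_out=10, p_edge=0.05,
                                max_tries=1000, seed=0):
    """Keep random edge-masked graphs whose anomaly magnitude is at most theta."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(max_tries):
        if len(kept) == n_out:
            break
        drop = np.triu(rng.random(A.shape) < p_edge, 1)
        drop = drop | drop.T                          # symmetric edge mask
        A_aug = A * (~drop)                           # zero out the masked edges
        if graph_anomaly_magnitude(A_aug, X) <= theta:
            kept.append(A_aug)                        # accept only cleaner graphs
    return kept
```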
We fix the pretrained graph encoders, discard the pretrained decoders, and retrain two unified decoders (one for attributes and one for structure) from scratch to detect anomalous information in the graph.

Multi-level embedding aggregation. The pretrained encoders produce embeddings at three levels. They are first passed through a fully connected layer and then aggregated using an attention mechanism, yielding an aggregated multi-level embedding $h$ for each node.

Graph reconstruction for anomaly detection. The aggregated multi-level embeddings $h$ are fed into an attribute decoder $Dec_f^{att}$ and a structure decoder $Dec_f^{str}$ for reconstruction. Specifically, we reconstruct the adjacency matrix $A$ and the attribute matrix $X$ of the original graph as $\hat{A}$ and $\hat{X}$, respectively. The corresponding graph reconstruction loss is

$$\mathcal{L}_{rec} = (1-\gamma)\mathcal{L}_{rs} + \gamma\mathcal{L}_{ra} = (1-\gamma)\|A - \hat{A}\|_F^2 + \gamma\|X - \hat{X}\|_F^2, \qquad (12)$$

where $\gamma \in [0, 1]$ is a balance hyperparameter. The anomaly score $s_i$ of the $i$-th node is defined as

$$s_i = (1-\gamma)\,\|a_i - \hat{a}_i\|_2 + \gamma\,\|x_i - \hat{x}_i\|_2, \qquad (13)$$

where $\hat{a}_i$ and $a_i$ represent the reconstructed and original structure vectors of node $v_i$, and $\hat{x}_i$ and $x_i$ are the $i$-th reconstructed and original attribute vectors, respectively.

Node Anomaly Distribution Regularization

We propose a novel node anomaly distribution loss that enforces sparsity on the anomaly distribution, further mitigating Anomaly Overfitting. Considering that overfitting occurs when all nodes are reconstructed very well, we intentionally introduce some non-uniformity in the anomaly distribution around nodes to increase the difficulty of reconstruction. Therefore, we require the following anomaly distribution $A_i$ of node $v_i$, i.e., the anomaly scores of a node and its neighbors, to be sharper:

$$A_i = \frac{s_i^{-\tau}}{\sum_{j \in N_i} s_j^{-\tau}}, \qquad (14)$$

where $N_i$ represents the neighborhood of node $v_i$ and $\tau \in (0, 1)$ is a temperature coefficient. The corresponding entropy term $S_i$ is then

$$S_i = -A_i \log A_i = \frac{s_i^{-\tau}}{\sum_{j \in N_i} s_j^{-\tau}} \Big( \log \sum_{j \in N_i} s_j^{-\tau} + \tau \log s_i \Big). \qquad (15)$$

In fact, $S_i$ represents the smoothness level of the anomaly distribution around node $i$; a higher value of $S_i$ indicates a sharper anomaly distribution. Accordingly, the node anomaly distribution regularization term is defined as

$$\mathcal{L}_{reg} = -\sum_{v_i \in V} S_i. \qquad (16)$$
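A compact PyTorch sketch of the scoring and the regularizer defined in Eqs. (13)-(16) follows. Whether the neighborhood $N_i$ includes the node itself is left open in the text, so here it is determined by the (optionally self-looped) adjacency matrix passed in; the epsilon terms are numerical-stability assumptions.

```python
import torch

def anomaly_scores(A, A_hat, X, X_hat, gamma=0.5):
    """Eq. (13): per-node reconstruction error for structure and attributes."""
    s_str = torch.norm(A - A_hat, dim=1)        # ||a_i - a_hat_i||_2
    s_att = torch.norm(X - X_hat, dim=1)        # ||x_i - x_hat_i||_2
    return (1 - gamma) * s_str + gamma * s_att

def node_anomaly_distribution_reg(s, A, tau=0.5, eps=1e-8):
    """Eqs. (14)-(16): entropy of each node's neighborhood anomaly distribution,
    negated and summed; add self-loops to A beforehand if N_i should include v_i."""
    w = (s + eps).pow(-tau)                     # s_i^{-tau}
    neigh_sum = A @ w                           # sum over neighbors j in N_i
    Ai = w / (neigh_sum + eps)                  # Eq. (14)
    Si = -Ai * torch.log(Ai + eps)              # Eq. (15), per-node entropy term
    return -Si.sum()                            # Eq. (16)
```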
2013), Books (S´anchez et al. 2013), and Enron (S´anchez et al. 2013). We injected contextual anomalies into datasets with no labeled anomalies by swapping node attributes, and structural anomalies by altering node connections within the graph, maintaining an equal number for each type in alignment with prior research (Ding et al. 2019; Liu et al. 2022a). The statistics of all the datasets are recorded in Table 1. Competitors We adopt three non-deep learning methods for graph anomaly detection comparison: SCAN (Xu et al. 2007), Radar (Li et al. 2017), ANOMALOUS (Peng et al. 2018). Additionally, we have also selected the following deep learning-based competitors: GCNAE (Kipf and Welling 2016b), DOMINANT (Ding et al. 2019), DONE (Bandyopadhyay et al. 2020), AdONE (Bandyopadhyay et al. 2020), AnomalyDAE (Sakurada and Yairi 2015), GAAN (Chen et al. 2020), CoLA (Liu et al. 2021), OCGNN (Wang et al. 2021), CONAD (Xu et al. 2022). Meanwhile, we also implement four variants of the proposed ADA-GAD method to verify the effectiveness of the anomaly-denoised augmentation: ADA-GADrand refers to using random augmentation; ADA-GADnode, ADAGADedge, and ADA-GADsubgraph utilize a single level of anomaly-denoised augmentation, denoising pretrained at only the node, edge, and subgraph level, respectively. Implementation Details We implement all the competitors with the PyGOD toolbox (Liu et al. 2022a). We set the number of epochs/dropout rate/weight decay to 100/0.1/0.01, respectively. The embedding dimension d is set to 12 for the Disney, Books, and Enron datasets, and 64 for the others. Our ADA-GAD method utilizes GCN as the encoders and decoders, except for the Enron and Weibo datasets, where we adopt GAT as the encoders and GCN as decoders. For the real-world datasets Disney, Books, and Enron, the encoder depth is set to 2 and the decoder depth is 1. For the other datasets, encoder and decoder depths are set to 1. During augmentation, the number of masks for nodes and edges is set within the range of 1 to 20, respectively. The number of random walks and walk length for the subgraph mask are both set to 2. ln, le, and ls are all set to 10, with θ is assigned to the smallest Gano among N aug random augmentations. In the experiments, N aug is set to 30. The pretraining epoch and the retain epoch are both set to 20. AUC (Area under the ROC Curve) (Bradley 1997) is used as the performance metric. We repeat all experiments 10 times using 10 different seeds. Performance Comparison All the experimental results are reported in Table 2 reports all the experimental results. From the results, we have the following observations: (1) ADA-GAD consistently exhibits better AUC performance than other competitors, which validates the effectiveness of the proposed method. (2) A single anomaly-denoised pretraining branch is a little inferior to the combination of three-level anomaly-denoised pretraining branches but outperforms the random one. This phenomenon indicates that our anomaly-denoised training strategy successfully utilizes the information at the node, edge, and subgraph levels for anomaly detection. (3) On four datasets with relatively small feature dimensions (i.e., Reddit, Disney, Books, and Enron), some competitors might achieve poor AUC performances, which is consistent with the empirical results in the benchmark (Liu et al. 2022b). In contrast, our ADA-GAD demonstrates consistent improvement over the competitors, which again validates our motivation. 
(4) The non-deep learning methods, Radar and ANOMALOUS, outperform the other deep learning competitors on the Weibo and Reddit datasets. This counterintuitive result indicates that these deep learning methods might suffer from severe over-fitting. Ablation Study Studies on Aggregation Strategies We explored different aggregation strategies: non-learnable linear, learnable linear, and our attention aggregation. The non-learnable method uses fixed weights, whereas the learnable method optimizes weights through gradient descent. Figure 6 demonstrates the superior performance of our attention aggregation, highlighting its enhanced efficacy. Studies on the Model Depth and Node Anomaly Distribution Regularization To investigate the effectiveness of The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8486 Algorithm Cora Amazon Weibo Reddit Disney Books Enron SCAN 64.95±0.00 65.85±0.00 70.63±0.00 49.67±0.00 50.85±0.00 52.42±0.00 53.70±0.00 Radar 53.28±0.00 58.93±0.00 98.27±0.00 56.64±0.00 50.14±0.00 56.21±0.00 64.10±0.00 Non-Deep ANOMALOUS 35.13±1.07 71.49±1.71 98.27±0.00 51.58±8.66 50.14±0.00 52.51±0.00 63.65±0.36 MLPAE 70.91±0.07 74.20±0.00 90.01±0.42 49.74±1.70 48.02±0.00 51.28±5.36 41.55±2.59 GCNAE 70.90±0.00 74.20±0.00 88.98±0.31 50.70±0.46 47.34±1.31 54.81±1.59 66.86±0.54 DOMINANT 76.71±0.07 74.20±0.00 92.17±0.41 56.20±0.06 52.91±3.04 40.14±2.66 54.93±0.66 DONE 83.60±1.45 73.38±4.37 86.86±0.38 51.40±2.26 48.42±4.23 54.05±1.64 61.07±3.15 AdONE 82.12±0.71 79.31±2.60 82.98±0.63 51.53±1.38 50.93±2.34 54.13±1.60 58.36±7.34 AnomalyDAE 80.99±0.07 77.39±0.01 92.99±0.44 52.21±2.03 48.29±4.17 59.86±4.82 45.85±13.16 GAAN 68.32±1.38 77.70±0.34 92.53±0.01 51.23±1.19 48.02±0.00 53.38±2.13 56.55±11.69 CoLA 56.88±1.63 61.00±1.09 22.18±3.36 53.21±1.10 54.46±7.67 49.69±4.20 58.53±9.38 OCGNN 50.02±0.14 49.99±0.04 79.68±5.76 48.76±3.57 68.19±1.45 57.33±4.14 54.39±6.11 CONAD 84.34±0.03 82.62±0.26 90.87±0.59 56.02±0.01 45.38±4.64 40.82±1.18 54.67±0.48 ADA-GADrand 81.61±0.01 76.36±0.12 90.74±0.65 56.03±0.38 68.56±2.94 61.75±2.20 66.12±4.87 ADA-GADnode 84.13±0.02 77.38±0.02 96.39±0.74 56.33±0.16 68.05±3.70 62.77±2.31 71.55±2.27 ADA-GADedge 84.10±0.01 81.85±0.03 94.52±0.70 56.37±0.10 68.42±3.19 62.71±2.17 72.34±1.42 ADA-GADsubgraph 84.42±0.01 81.79±0.04 96.69±0.59 55.58±0.36 68.59±2.62 62.70±2.13 72.86±0.88 Deep ADA-GAD 84.73±0.01 83.25±0.03 98.44±0.33 56.89±0.01 70.04±3.08 65.24±3.17 72.89±0.86 Table 2: AUC (%) results (mean ± std). The best result is shown in bold, while the second best is marked with underline. Figure 5: Effect of the encoder depth and weight of node anomaly distribution regularization on four organic datasets. our model at different network depths, we evaluate the performance of the encoder with the number of layers ranging from 1 to 9 under different weights (0, 0.01, 0.001) of the node anomaly distribution regularization. As shown in Figure 5, we can find that (1) the optimal number of encoder layers varies across datasets, with the Weibo dataset having an optimal number of 1 and the other three realworld datasets having an optimal number of 3 or 4. This suggests that the Weibo dataset is more prone to overfitting, Figure 6: Performance comparison using different aggregation strategies. consistent with our previous experimental findings. (2) After reaching the optimal number of layers, increasing the depth fails to improve performance. 
Fortunately, the node anomaly distribution regularization can alleviate this issue, as a larger weight within a small range can induce better performance.

Conclusion
In this paper, we introduced ADA-GAD, a novel two-stage framework for graph anomaly detection. Through anomaly-denoised augmentation and a two-stage training framework, ADA-GAD effectively captures the normal patterns and enhances anomaly detection performance. Additionally, we introduce a node anomaly distribution regularization term to mitigate model overfitting by constraining the anomaly distribution near nodes. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on multiple benchmarks.

Acknowledgments
This work was supported in part by the National Key R&D Program of China under Grant 2018AAA0102000, in part by the National Natural Science Foundation of China: 62236008, U21B2038, U23B2051, 61931008, 62122075 and 61976202, in part by the Fundamental Research Funds for the Central Universities, in part by the Youth Innovation Promotion Association CAS, in part by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB28000000, in part by the Innovation Funding of ICT, CAS under Grant No. E000000, in part by the Postdoctoral Fellowship Program of CPSF under Grant GZB20230732, and in part by the China Postdoctoral Science Foundation under Grant 2023M743441.

References
Abdallah, A.; Maarof, M. A.; and Zainal, A. 2016. Fraud detection system: A survey. Journal of Network and Computer Applications, 68: 90–113.
Bandyopadhyay, S.; N, L.; Vivek, S. V.; and Murty, M. N. 2020. Outlier Resistant Unsupervised Deep Architectures for Attributed Network Embedding. In Proceedings of the 13th International Conference on Web Search and Data Mining.
Bradley, A. P. 1997. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30(7): 1145–1159.
Chen, Z.; Liu, B.; Wang, M.; Dai, P.; Lv, J.; and Bo, L. 2020. Generative Adversarial Attributed Network Anomaly Detection. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management.
Cheng, D.; Wang, X.; Zhang, Y.; and Zhang, L. 2020. Graph neural network for fraud detection via spatial-temporal attention. IEEE Transactions on Knowledge and Data Engineering, 34(8): 3800–3813.
Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems, 29.
Ding, K.; Li, J.; Agarwal, N.; and Liu, H. 2021. Inductive anomaly detection on attributed networks. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 1288–1294.
Ding, K.; Li, J.; Bhanushali, R.; and Liu, H. 2019. Deep Anomaly Detection on Attributed Networks, 594–602.
Dou, Y.; Liu, Z.; Sun, L.; Deng, Y.; Peng, H.; and Yu, P. S. 2020. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 315–324.
Duan, J.; Wang, S.; Zhang, P.; Zhu, E.; Hu, J.; Jin, H.; Liu, Y.; and Dong, Z. 2023. Graph anomaly detection via multi-scale contrastive learning networks with augmented view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 7459–7467.
Fan, H.; Zhang, F.; and Li, Z. 2020. AnomalyDAE: Dual Autoencoder for Anomaly Detection on Attributed Networks. International Conference on Acoustics, Speech, and Signal Processing.
Gao, Y.; Wang, X.; He, X.; Liu, Z.; Feng, H.; and Zhang, Y. 2023. Addressing heterophily in graph anomaly detection: A perspective of graph spectrum. In Proceedings of the ACM Web Conference 2023, 1528–1538.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2019. Generative Adversarial Nets. Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, 177–177.
Gupta, A.; Matta, P.; and Pant, B. 2021. Graph neural network: Current state of art, challenges and applications. Materials Today: Proceedings, 46: 10927–10932.
Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30.
He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16000–16009.
Hou, Z.; Liu, X.; Cen, Y.; Dong, Y.; Yang, H.; Wang, C.; and Tang, J. 2022. GraphMAE: Self-supervised masked graph autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 594–604.
Huang, Y.; Wang, L.; Zhang, F.; and Lin, X. 2023. Unsupervised Graph Outlier Detection: Problem Revisit, New Insight, and Superior Method. In 2023 IEEE 39th International Conference on Data Engineering, 2565–2578. IEEE.
Jin, W.; Derr, T.; Liu, H.; Wang, Y.; Wang, S.; Liu, Z.; and Tang, J. 2020. Self-supervised learning on graphs: Deep insights and new direction. arXiv preprint arXiv:2006.10141.
Kipf, T. N.; and Welling, M. 2016a. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Kipf, T. N.; and Welling, M. 2016b. Variational graph autoencoders. arXiv preprint arXiv:1611.07308.
Kumar, S.; Zhang, X.; and Leskovec, J. 2019. Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Lee, N.; Lee, J.; and Park, C. 2022. Augmentation-free self-supervised learning on graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 7372–7380.
Li, J.; Dani, H.; Hu, X.; and Liu, H. 2017. Radar: residual analysis for anomaly detection in attributed networks. International Joint Conference on Artificial Intelligence.
Li, Y.; Huang, X.; Li, J.; Du, M.; and Zou, N. 2019. SpecAE: Spectral AutoEncoder for Anomaly Detection in Attributed Networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management.
Liu, K.; Dou, Y.; Zhao, Y.; Ding, X.; Hu, X.; Zhang, R.; Ding, K.; Chen, C.; Peng, H.; Shu, K.; Chen, G. H.; Jia, Z.; and Yu, P. S. 2022a. PyGOD: A Python Library for Graph Outlier Detection. arXiv preprint arXiv:2204.12095.
Liu, K.; Dou, Y.; Zhao, Y.; Ding, X.; Hu, X.; Zhang, R.; Ding, K.; Chen, C.; Peng, H.; Shu, K.; et al. 2022b. BOND: Benchmarking unsupervised outlier node detection on static attributed graphs. Advances in Neural Information Processing Systems, 35: 27021–27035.
Liu, Y.; Jin, M.; Pan, S.; Zhou, C.; Zheng, Y.; Xia, F.; and Yu, P. 2022c. Graph self-supervised learning: A survey. IEEE Transactions on Knowledge and Data Engineering.
Liu, Y.; Li, Z.; Pan, S.; Gong, C.; Zhou, C.; and Karypis, G. 2021. Anomaly detection on attributed networks via contrastive self-supervised learning. IEEE Transactions on Neural Networks and Learning Systems, 33(6): 2378–2392.
Ma, X.; Wu, J.; Xue, S.; Yang, J.; Zhou, C.; Sheng, Q. Z.; Xiong, H.; and Akoglu, L. 2021. A comprehensive survey on graph anomaly detection with deep learning. IEEE Transactions on Knowledge and Data Engineering.
Manessi, F.; and Rozza, A. 2021. Graph-based neural network models with multiple self-supervised auxiliary tasks. Pattern Recognition Letters, 148: 15–21.
Müller, E.; Sánchez, P.; Mulle, Y.; and Böhm, K. 2013. Ranking outlier nodes in subspaces of attributed graphs. International Conference on Data Engineering.
Peng, Z.; Dong, Y.; Luo, M.; Wu, X.-M.; and Zheng, Q. 2020. Self-supervised graph representation learning via global context prediction. arXiv preprint arXiv:2003.01604.
Peng, Z.; Luo, M.; Li, J.; Liu, H.; and Zheng, Q. 2018. ANOMALOUS: A Joint Modeling Approach for Anomaly Detection on Attributed Networks. International Joint Conference on Artificial Intelligence.
Sakurada, M.; and Yairi, T. 2015. Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction. In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis.
Sánchez, P.; Müller, E.; Laforet, F.; Keller, F.; and Böhm, K. 2013. Statistical Selection of Congruent Subspaces for Mining Attributed Graphs. International Conference on Data Mining.
Sen, P.; Namata, G.; Bilgic, M.; Getoor, L.; Galligher, B.; and Eliassi-Rad, T. 2008. Collective classification in network data. AI Magazine, 29(3): 93–93.
Shchur, O.; Mumme, M.; Bojchevski, A.; and Günnemann, S. 2018. Pitfalls of Graph Neural Network Evaluation.
Sun, F.-Y.; Hoffmann, J.; Verma, V.; and Tang, J. 2019. InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv preprint arXiv:1908.01000.
Tang, J.; Li, J.; Gao, Z.; and Li, J. 2022. Rethinking graph neural networks for anomaly detection. In International Conference on Machine Learning, 21076–21089. PMLR.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y.; et al. 2017. Graph attention networks. stat, 1050(20): 10–48550.
Waikhom, L.; and Patgiri, R. 2021. Graph neural networks: Methods, applications, and opportunities. arXiv preprint arXiv:2108.10733.
Wang, X.; Jin, B.; Du, Y.; Cui, P.; Tan, Y.; and Yang, Y. 2021. One-class graph neural networks for anomaly detection in attributed networks. Neural Computing and Applications, 33: 12073–12085.
Wijesinghe, A.; and Wang, Q. 2022. A new perspective on "how graph neural networks go beyond Weisfeiler-Lehman?". In International Conference on Learning Representations.
Wu, L.; Lin, H.; Tan, C.; Gao, Z.; and Li, S. Z. 2021. Self-supervised learning on graphs: Contrastive, generative, or predictive. IEEE Transactions on Knowledge and Data Engineering.
Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826.
Xu, X.; Yuruk, N.; Feng, Z.; and Schweiger, T. 2007. SCAN. Knowledge Discovery and Data Mining.
Xu, Z.; Huang, X.; Zhao, Y.; Dong, Y.; and Li, J. 2022. Contrastive attributed network anomaly detection with data augmentation. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 444–457. Springer.
You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; and Shen, Y. 2020. Graph Contrastive Learning with Augmentations. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M. F.; and Lin, H., eds., Advances in Neural Information Processing Systems, volume 33, 5812–5823. Curran Associates, Inc.
Zeng, J.; and Xie, P. 2021. Contrastive self-supervised learning for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 10824–10832.
Zhao, T.; Deng, C.; Yu, K.; Jiang, T.; Wang, D.; and Jiang, M. 2020. Error-Bounded Graph Anomaly Loss for GNNs. Conference on Information and Knowledge Management.
Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; and Sun, M. 2020. Graph neural networks: A review of methods and applications. AI Open, 1: 57–81.
Zhu, M.; Wang, X.; Shi, C.; Ji, H.; and Cui, P. 2021a. Interpreting and unifying graph neural networks with an optimization framework. In Proceedings of the Web Conference 2021, 1215–1226.
Zhu, Q.; Du, B.; and Yan, P. 2020. Self-supervised training of graph convolutional networks. arXiv preprint arXiv:2006.02380.
Zhu, Y.; Xu, W.; Zhang, J.; Du, Y.; Zhang, J.; Liu, Q.; Yang, C.; and Wu, S. 2021b. A survey on graph structure learning: Progress and opportunities. arXiv preprint arXiv:2103.03036.
Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2021c. Graph contrastive learning with adaptive augmentation. In Proceedings of the Web Conference 2021, 2069–2080.
2024
943
18,788
ViSTec: Video Modeling for Sports Technique Recognition and Tactical Analysis
Yuchen He*, Zeqing Yuan*, Yihong Wu, Liqi Cheng, Dazhen Deng†, Yingcai Wu†
Zhejiang University
{heyuchen, leoyuan, wuyihong, lycheecheng, dengdazhen, ycwu}@zju.edu.cn
*Equal contribution; alphabetically ordered; each reserves the right to be listed first. †Corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
The immense popularity of racket sports has fueled substantial demand for tactical analysis with broadcast videos. However, existing manual methods require laborious annotation, and recent attempts leveraging video perception models are limited to low-level annotations like ball trajectories, overlooking tactics that necessitate an understanding of stroke techniques. State-of-the-art action segmentation models also struggle with technique recognition due to frequent occlusions and motion-induced blurring in racket sports videos. To address these challenges, we propose ViSTec, a Video-based Sports Technique recognition model inspired by human cognition that synergizes sparse visual data with rich contextual insights. Our approach integrates a graph to explicitly model strategic knowledge in stroke sequences and enhance technique recognition with contextual inductive bias. A two-stage action perception model is jointly trained to align with the contextual knowledge in the graph. Experiments demonstrate that our method outperforms existing models by a significant margin. Case studies with experts from the Chinese national table tennis team validate our model's capacity to automate the analysis of technical actions and tactical strategies. More details are available at: https://ViSTec2024.github.io/.

Introduction
Racket sports, including tennis, badminton, and table tennis, are distinguished by their highly strategic nature, drawing millions of players and fans to explore in-depth tactical analysis. A game in racket sports consists of rallies, which are sequences of strokes executed by players alternately from both sides. Here, a stroke refers to the action of hitting the ball with a racket, and each stroke can employ a certain technique, such as "topspin" and "push." A tactic is characterized by a series of consecutive stroke techniques. In a match, it is of paramount importance to analyze the interplay of tactics and players' characteristics in technical actions. Both aspects necessitate an understanding of the techniques used in each stroke. Every year, numerous tournaments are held and broadcast, generating a vast amount of video data. Therefore, modeling broadcast videos at the technique level is a promising direction for facilitating and democratizing racket sports analysis.

Previous methods for racket sports analysis (Wu et al. 2018; Polk et al. 2014; Chu et al. 2022) heavily depend on fine-grained data and suffer from low scalability due to the demand for labor-intensive annotation from domain experts. On the other hand, recent attempts utilize video perception models for automatic annotation, yet are hindered by the sparsity of visual information in broadcast videos, particularly challenges such as motion-induced blurring, subtle movement amplitude, and frequent occlusions of wrists and rackets. Moreover, they concentrate only on low-level objects and coarse-grained events, such as ball trajectories (Huang et al. 2019) and stroke timestamps (Voeikov, Falaleev, and Baikulov 2020).
Techniques, in contrast, involve high-level semantics and contextual knowledge; they cannot be treated as mere actions, and state-of-the-art action segmentation models (Bian et al. 2022) fall short in recognizing racket sports techniques.

In this paper, we aim to recognize and analyze fine-grained stroke techniques from low-quality broadcast videos, bridging the gap between professional expertise and automated analysis. We propose ViSTec, which incorporates domain knowledge as an inductive prior. We select table tennis as a representative racket sport for our study, considering it the most challenging for stroke recognition and well known for being highly strategic. We collaborated closely with two senior data analysts from the Chinese national table tennis team when developing and evaluating our methods.

ViSTec is composed of an action perception module and a domain knowledge module. The action perception module operates in a two-stage manner, leveraging visual information. It first segments each stroke clip from raw video and then classifies the specific stroke techniques, working collaboratively with the domain knowledge module to enhance accuracy. The domain knowledge module models contextual knowledge, with a focus on technique sequence dependencies. It adopts the form of a graph to explicitly represent the transition relations between techniques, thus integrating this relational understanding as prior knowledge. The two modules are thoughtfully aggregated and jointly trained to achieve better synergy.

To demonstrate the effectiveness of our framework, we perform comparative experiments with state-of-the-art action segmentation models and conduct an ablation study to examine individual components. Furthermore, we conduct case studies on the 2022 Table Tennis World Cup to analyze players' playing styles and optimal strategies under different circumstances. The results demonstrate that our model exhibits a proficient understanding of stroke techniques, enabling automatic tactical analysis.

The contributions of this paper are as follows:
• We address the problem of video-based technique recognition in racket sports, facilitating automatic tactical analysis.
• We propose a novel framework that leverages both sparse visual information and contextual domain knowledge for video understanding, achieving state-of-the-art performance in sports technique recognition.
• We conduct experiments and case studies to demonstrate the usefulness of our model and obtain valuable insights validated by professional analysts.

Related Work
Sports Action Recognition and Segmentation
Action recognition aims to identify the categories of human actions in videos. Existing studies (Karpathy et al. 2014; Tran et al. 2015; Carreira and Zisserman 2017; Wang et al. 2019) propose a series of neural network architectures to learn action representations from raw video or optical flow. For example, Carreira and Zisserman (2017) proposed a two-stream inflated 3D convolutional network architecture as a backbone video model. However, action recognition focuses on video-level classification, thereby failing to analyze long videos containing multiple actions. Researchers further delve into action segmentation, a task that involves not only recognizing actions but also localizing the time intervals in which they occur.
There are several methods to obtain time segments, such as sliding windows (Kim, Kang, and Kim 2022), proposal generation (Lin et al. 2018, 2019), and per-frame labeling (Shou et al. 2017). For sports analysis, datasets are key. Liu et al. (2022) proposed a dataset that covers a series of actions across different sport types and evaluated BMN (Lin et al. 2019), DBG (Lin et al. 2020), and G-TAD (Xu et al. 2020). FineGym (Shao et al. 2020) is a comprehensive gymnastics dataset with 530 action types. P2A (Bian et al. 2022) proposes a fine-grained table tennis dataset and assesses a series of localization and recognition models. These datasets open up a research direction of recognizing complex and dynamic sports actions, but state-of-the-art methods fail in racket sports scenarios with challenges such as frequent occlusions and subtle movements.

Racket Sports Data Mining and Analysis
The popularity of racket sports has garnered the interest of data mining. For tennis analysis, a series of studies analyze and visualize the scoring outcome (Polk et al. 2014) and ball trajectories (Polk et al. 2020). Some research attempts have been devoted to synthesizing or reconstructing player actions (Zhang et al. 2021, 2023) from broadcast videos. For badminton, researchers have employed AR/MR technologies to visualize and analyze 3D shuttle trajectories (Ye et al. 2021; Chu et al. 2022; Lin et al. 2023). For table tennis, a series of visual analytics systems, such as iTTVis (Wu et al. 2018) and Tac-Simur (Wang et al. 2020), have been developed to analyze the attributes of consecutive strokes. Tac-Valuer (Wang et al. 2021a) combines deep learning and abductive learning (Dai et al. 2019) to incorporate sequence dependency into stroke classification. However, these methods rely on fine-grained attributes that are labeled manually by domain experts. To improve data accessibility, EventAnchor (Deng et al. 2021) combines computer vision models and human-computer interaction techniques to improve data annotation efficiency. To further enable tactical analysis, we develop a method to recognize high-level stroke techniques from broadcast videos. The results can be directly used to analyze player tactics without additional data annotation, democratizing large-scale analysis of tactics.

Data Descriptions and Notations
Racket Sports Data
In racket sports, tactical analysts usually focus on fine-grained stroke techniques. Stroke techniques refer to specific types of action used to wave the racket and hit the ball. Different stroke techniques result in various ball spins and moving directions, which consequently affect the ball trajectory to a great extent. Taking table tennis as an example, strokes can be categorized into eight techniques, such as serve, topspin, short, and block. Additional attributes can be incorporated to provide a more fine-grained classification, such as forehand and backhand. In tennis, stroke techniques can also be categorized into eight types (Zhang et al. 2023). In addition, in racket sports analysis, three consecutive strokes are usually considered a minimal unit for tactics (Wang et al. 2021b). Professional analysts normally conduct analysis at the tactic or stroke level (Wang et al. 2020).

Notations
We first introduce the problem and notations used in this study. In our scenario, the input is a video $V = \{v_1, v_2, \ldots, v_T\}$ with T frames, and the output is the sequence $S = \{(s_1, t_1), (s_2, t_2), \ldots, (s_N, t_N)\}$ with N strokes.
The value $s_i$ represents the stroke technique, and $t_i$ is the timestamp of the stroke. In table tennis, for example, $s_i$ can be a technique such as topspin, serve, short, block, push, flick, or smash. Note that, compared to ordinary action segmentation tasks where $t_i$ is an interval lasting for frames or seconds, in racket sports the interval of the action is usually ambiguous. Therefore, researchers represent the stroke action as an instantaneous event and only record the moment when the ball hits the racket as the time of the stroke event (Voeikov, Falaleev, and Baikulov 2020).

ViSTec
In this section, we introduce our two-stage framework for stroke recognition. The framework is demonstrated in Fig. 1.

Figure 1: The framework of ViSTec. (A) is the stroke segmentation module. (B) is the cls module, with segmented stroke features as input and probability distributions for each segment as output, as shown in (C). (D) is the grh module for domain knowledge modeling. (E) shows the detail of the video feature extractor.

Video Feature Modeling
We first model the spatial and temporal features of videos. Given that a stroke is fast-paced and lasts for only several frames, it is necessary to extract frame-wise features (Chen et al. 2022). We employ a transformer-based structure for frame-wise feature extraction. Because of the success of vision transformers in video modeling tasks, we select VideoMAE (Tong et al. 2022) as our backbone. VideoMAE first divides a long video clip into slices and extracts features slice by slice with a slice length of 16 frames. However, in our scenario, every frame counts because of the short duration of table tennis strokes. Therefore, we use a slice length of 2 frames. A slice is of shape H × W × 2 (H = 224, W = 224), and each slice is forwarded into a 3-dimensional convolutional layer with a filter shape of H/16 × W/16 × 2 and a stride shape of H/16 × W/16 × 2. As a result, a slice is converted into 16 × 16 patches of features. Each patch represents the spatial feature of a region in the original slice. All patches of features are then concatenated together and forwarded into a transformer-based network. The network can model the relations between feature patches and construct a spatial feature of the video slice. We further model the temporal feature of the whole video based on the spatial feature of each video slice. After the spatial modeling, a feature of shape T/2 × 196 × 1280 is obtained. The feature is average-pooled over the patch dimension and transformed into the shape of T/2 × 1280. The feature is then forwarded into a multi-layer transformer encoder to model the temporal features. The temporal features are then forwarded into a fully connected layer to predict frame-wise stroke attributes, which are represented as multi-class distributions of size T/2 × C.

Stroke Segmentation
With the feature extracted from the backbone model, we perform stroke segmentation seg with a fully connected network, predicting the probability at each timestamp:

$seg(V) = \hat{P}(t_1, t_2, \ldots, t_T \mid V),$  (1)

where $\hat{P}$ is the predicted probability of a stroke given the video V. Compared to other segmentation models (Lin et al. 2019; Xu et al. 2020) that focus on actions with relatively longer durations (lasting for seconds or minutes), we model strokes as instantaneous events. Therefore, seg predicts a series of signals of the stroke probability at each timestamp.
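As a concrete illustration of this stage, the following is a minimal PyTorch sketch of such a per-timestamp stroke-probability head on top of the pooled backbone features. The module and variable names (StrokeSegHead, feat_dim) are our own illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class StrokeSegHead(nn.Module):
    """Per-timestamp stroke probability head (a sketch of the seg module, Eq. 1)."""

    def __init__(self, feat_dim: int = 1280):
        super().__init__()
        # A small fully connected network mapping each temporal feature
        # (one per 2-frame slice) to a single stroke probability.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T/2, feat_dim) pooled backbone features for one rally clip.
        # Returns: (T/2,) stroke probabilities, one per timestamp.
        return torch.sigmoid(self.mlp(feats)).squeeze(-1)

# Usage on random features standing in for VideoMAE outputs:
probs = StrokeSegHead()(torch.randn(120, 1280))  # a 240-frame clip -> 120 slices
```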
During training, we convert stroke events into a series of cosine signals, where the peak is the moment of ball hitting (Voeikov, Falaleev, and Baikulov 2020):

$P(t_i) = \begin{cases} \cos\frac{(t_i - t_s)\pi}{\sigma}, & \text{if } |t_i - t_s| \le \frac{\sigma}{2} \\ 0, & \text{otherwise} \end{cases}$  (2)

where $t_s$ is the stroke timestamp closest to the target timestamp $t_i$. In this study, we use σ = 8 because the strokes have an average length of about eight frames. During training, we evaluate the difference between the predicted probability $\hat{P}$ and the target probability P using the binary cross-entropy loss:

$l_{BCE}(\hat{P}, P) = -\frac{1}{T}\sum_{i=1}^{T}\left(p_i \log \hat{p}_i + (1-p_i)\log(1-\hat{p}_i)\right),$  (3)

where $\hat{p}_i = \hat{P}(t_i \mid V)$ and $p_i = P(t_i)$. For a video clip, the differences at all timestamps are computed and then reduced with the mean value. With the results from the stroke segmentation module seg, we further filter the timestamps with high probabilities as the stroke segments. To avoid ragged segments, we merge the strokes between which the temporal distance is smaller than σ/2 frames. As a result, the predicted stroke timestamps $\hat{t}_i$ (1 ≤ i ≤ N) are intervals.

Stroke Classification with Contextual Knowledge
After segmentation, we further classify the strokes into fine-grained techniques using a classification module cls and a graph module grh. The cls is a fully connected network that takes the segmented stroke features as input and predicts the technique types:

$\hat{s}_i = cls(f_i) = P(tec \mid f_i),$  (4)

where $f_i$ is the aggregated feature of stroke $\hat{s}_i$ over the time interval $\hat{t}_i$. The result $P(tec \mid f_i)$ is the distribution over the different techniques. Predicting stroke techniques solely from visual features might fail to distinguish strokes that are visually similar to each other, and naively selecting the best-predicted technique for each stroke might result in invalid stroke sequences.

Contextual Knowledge Learning. Considering the intricacies of table tennis stroke techniques, which require contextual inference for proper interpretation, we introduce the grh module, a graph-based data structure that models the contextual information of table tennis game videos. As illustrated in Figure 1, grh represents a directed graph comprised of several nodes and edges, denoted as

$G_{grh} = (V_T, E),$  (5)

where the node set is defined as $V_T = \{v_{tec_0}, \ldots, v_{tec_{m-1}}\}$, with each node $v_{tec_k}$ symbolizing the classification label $tec_k$ of a distinct stroke technique. Notably, a designated "null" node is utilized to represent an empty label, which serves as the initial label when generating sequences. The edge set $E = \{e_0, \ldots, e_{m \times (m-1) - 1}\}$ encompasses all possible directed edges connecting pairs of nodes except edges from any node to the "null" node, and each edge from $v_{tec_A}$ to $v_{tec_B}$ carries a weight representing the transition from stroke technique $tec_A$ to $tec_B$.

Joint Training of cls and grh. To incorporate contextual information, our model departs from the approach in Eq. 4, where the aggregated features $f_i$ were the sole input. Instead, we introduce the preceding stroke's label along with the aggregated features of the current stroke as input. When the preceding stroke is absent, as in the case of the first stroke, the label "null" is employed to denote the preceding stroke. Given the knowledge of the previous stroke's label, denoted as $tec_p$, we leverage this label to locate the corresponding node within grh.
Subsequently, by querying the directed edges emanating from this node, we derive a weight vector representing transitions from label $tec_p$ to the next possible labels. The length of this vector equals the number of distinct technique labels, and its shape aligns with the output of the classification model. In the subsequent discourse, we refer to such weight vectors as $W_{tec_p}$.

We update the parameters of the classification model as follows. Initially, for the model's output that has not undergone normalization, we conduct min-max normalization to ensure all values within the vector are confined to the [0, 1] range. Subsequently, we obtain the transition weight vector from grh, similarly ensuring that all values within the vector fall within the [0, 1] range. We then combine the two vectors in a specific proportion, compute the cross-entropy loss, and execute backpropagation for parameter updates:

$l_{CE}(P_c, \hat{P}_c) = -\sum_i p_{c_i} \log \hat{p}_{c_i},$  (6)

$\hat{P}_c = \mathrm{Softmax}(\mathrm{MinMax}(cls(f_i)) + \alpha W_{tec_p}),$  (7)

where $\hat{P}_c$ and $P_c$ are the predicted and target classification probabilities, respectively, $\hat{p}_{c_i} = \hat{P}_c(tec_i \mid f)$, $p_{c_i} = P_c(tec_i)$, and α is a hyperparameter for the combination.

When updating the weights of the cls module, it is equally essential to update the weights within the graph grh. In ViSTec, these weights are updated with an adaptive stride. It is noteworthy that at the commencement of training, the graph is initialized using all known technique sequences from the training set. This initialization enables the weights to roughly reflect the transition probabilities between various pairs of techniques observed in the training data. Following a single forward pass through the model, we obtain $cls(f_i)$, allowing us to compute $\hat{P}_c$ as indicated in Eq. 7. This estimate is then used to update the edge weights $W_{tec_p}$ originating from the node corresponding to the previous stroke label $tec_p$ within grh. Should $\hat{P}_c$ deduce the correct label, i.e., if the label with the highest confidence in $\hat{P}_c$ aligns with the ground-truth label, there is no need to update the edge weights of grh. However, if discrepancies arise, we adopt the following strategy for updates. Let the predicted label inferred from $\hat{P}_c$ be denoted as $tec_{pred}$, and the ground-truth label as $tec_{gt}$. Our objective is to diminish the transition $tec_p \rightarrow tec_{pred}$ within $W_{tec_p}$ while reinforcing the transition $tec_p \rightarrow tec_{gt}$.

Algorithm 1: Updating $W_{tec_p}$
Input: Weight vector $W_{tec_p}$, predicted label of the current segment $tec_{pred}$, and ground-truth label $tec_{gt}$.
Output: Updated weight vector $W'_{tec_p}$.
1: Initialize: $W'_{tec_p} \leftarrow W_{tec_p}$
2: $W'_{tec_p}[tec_{pred}] \leftarrow (1 - \beta U(cls(f_i))) \cdot W'_{tec_p}[tec_{pred}]$
3: $W'_{tec_p}[tec_{gt}] \leftarrow (1 + \beta U(cls(f_i))) \cdot W'_{tec_p}[tec_{gt}]$
4: Normalization: $W'_{tec_p} \leftarrow W'_{tec_p} / \max(W'_{tec_p})$

The hyperparameter β controls the stride of the update. $U(cls(f_i))$ constitutes a crucial element of the adaptive update stride and effectively incorporates the uncertainty of the classification confidence of cls concerning the input features. Here, we employ entropy to quantify the uncertainty in the classification confidence provided by cls. $U(cls(f_i))$ is computed as follows:

$U(cls(f_i)) = 1 - \frac{\mathrm{Entropy}(\mathrm{Softmax}(cls(f_i)))}{\mu},$  (8)

where μ represents the maximum entropy of a probability distribution over the given number of classes. It can be precomputed and regarded as a constant during training.
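The following is a small self-contained sketch of this adaptive update (Eq. 8 and Algorithm 1); the function names and the use of NumPy are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def uncertainty(logits: np.ndarray) -> float:
    """Eq. 8: U = 1 - Entropy(softmax(logits)) / mu, with mu = log(num_classes)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return 1.0 - entropy / np.log(len(logits))

def update_transition_weights(w, logits, pred, gt, beta=0.1):
    """Algorithm 1: shrink the wrong transition, reinforce the correct one."""
    if pred == gt:                   # correct prediction: leave grh untouched
        return w
    u = uncertainty(logits)
    w = w.copy()
    w[pred] *= (1 - beta * u)        # diminish tec_p -> tec_pred
    w[gt] *= (1 + beta * u)          # reinforce tec_p -> tec_gt
    return w / w.max()               # renormalize back into [0, 1]

# Toy usage: 7 techniques; a confidently wrong prediction (low entropy on the
# wrong class) yields a larger corrective step on the graph weights.
logits = np.array([4.0, 0.1, 0.2, 0.1, 0.3, 0.2, 0.1])
w = update_transition_weights(np.full(7, 0.5), logits, pred=0, gt=2)
```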
The significance of determining the update stride in this manner lies in the fact that when cls assigns an incorrect label with low uncertainty, it indicates a more serious model error at that moment. Consequently, there is a greater need to rectify the edge weights of grh to assist the overall model in making accurate classifications. On the other hand, during the early stages of training, the model's classification uncertainty is relatively high. Consequently, the update stride is smaller, which serves to prevent rapid disruption of the initial graph weights. This approach enhances the model's robustness during the initial training phase.

Inference of ViSTec
As depicted in Figure 1, ViSTec's inference process contains two stages. First, using the seg module, we acquire several time intervals $\hat{t}_i$ (1 ≤ i ≤ N) representing different strokes from the input match video. Centered around each of these intervals, we extend the respective time spans to form segments, each spanning no more than 40 frames. When adjacent extended intervals overlap, we designate the midpoint between them as the separator for creating distinct segments. Through this segmentation, we ensure that every segment encompasses only a single stroke and that each frame belongs to a single segment, excluding redundant frames such as those capturing player preparation before a serve. Second, we sequentially iterate over each segment and employ the features of the current segment along with the predicted label of the previous segment (or a "null" label if there is no previous segment) as inputs. Utilizing Eq. 7, we compute the predictive probability distribution and choose the label with the highest confidence as the prediction. Upon completing this inference iteration, we obtain the sequence of stroke techniques for the given match video.

Experiments
Experiment Dataset
All experiments are performed on a dataset constructed from broadcast videos of World Table Tennis (WTT) games. We use table tennis as the experimental scenario because it is the most challenging racket sport for video analysis, considering the frequent occlusion, minimal movement amplitude, and blurring caused by the rapid pace. We collected 4000 rally clips segmented from 18 games by recognizing scoreboard changes (Deng et al. 2021). Each clip includes a series of strokes. We labeled the timestamps of each stroke to train the stroke segmentation module. However, labeling the stroke techniques requires professional table tennis knowledge and experience. Therefore, we consulted with professional athletes who have been members of provincial teams or national reserve teams.

Models            F1@{10, 25, 50}     Acc.  Edit
C2F-TCN           50.8, 45.3, 32.0    61.1  45.0
ASFormer          75.2, 73.3, 69.7    77.5  73.7
UVAST             75.2, 74.3, 71.3    76.1  74.1
SSTDA             76.0, 73.2, 67.0    76.5  72.5
MS-TCN            76.8, 74.8, 71.1    78.2  73.9
ViSTec w/o grh    76.3, 76.2, 75.3    82.0  74.3
ViSTec w/o U      77.9, 77.7, 77.0    82.2  74.8
ViSTec            79.3, 79.2, 78.5    83.5  76.3
Table 1: Experiment results including ablation studies of the proposed method and baselines.

Comparative Study
We compare ViSTec with state-of-the-art action segmentation models. Specifically, we train C2F-TCN (Singhania, Rahaman, and Yao 2021), ASFormer (Yi, Wen, and Jiang 2021), UVAST (Behrmann et al. 2022), SSTDA (Chen et al. 2020), and MS-TCN (Farha and Gall 2019) on the table tennis dataset. Note that these models rely on visual features extracted with backbone models, such as I3D (Carreira and Zisserman 2017).
However, I3D performs pooling along the temporal dimension, which makes it inappropriate for processing the table tennis dataset, where a stroke action lasts for only several frames. Therefore, to ensure a fair comparison, we extract frame-wise features using the same backbone as ViSTec for the baseline models. Moreover, we generate frame-wise labels from the ground-truth annotations, which consist of sequences of timestamp-technique pairs. Centered around each timestamp, we assign the corresponding technique labels to frames within a range of no more than 40 frames. In cases where adjacent intervals overlap, we separate them at the midpoint between them. Frames that remain unassigned are considered background frames. We adopted evaluation metrics commonly employed in the field of action segmentation (Lea et al. 2017). As illustrated in Table 1, "Acc." corresponds to the frame-wise accuracy, "Edit" denotes the segmental edit score, and "F1@{10, 25, 50}" signifies the segmental F1 score at overlapping thresholds of 10%, 25%, and 50%, respectively. Benefiting from our two-stage design, we attain notable segmentation results in the first stage, while leveraging domain knowledge to enhance classification accuracy in the second stage. Our proposed approach excels across these evaluation metrics, achieving state-of-the-art results.

Ablation Study
We evaluate the effectiveness of the different modules by removing them. Initially, we eliminated the grh module to investigate its impact on the model's performance. Specifically, in the second stage, we solely employed the $P(tec \mid f_i)$ calculated from the output $cls(f_i)$ in Eq. 4 to train the classification model with the cross-entropy loss. As indicated in Table 1, the performance metrics of ViSTec without the grh module were consistently inferior to those of the complete ViSTec, indicating that the grh module contributes positively to the model's performance. In another experiment, we excluded the uncertainty term from the grh update stride, which corresponds to the $U(cls(f_i))$ term defined in Eq. 8. This means the update stride of grh is fixed. As shown in Table 1, the performance metrics of ViSTec without the U term were lower than those of the full ViSTec, yet higher than those of ViSTec without grh. This observation underlines that introducing the uncertainty term for dynamically updating the graph's weights enhances the model's performance.

Qualitative Evaluation
Figure 2 presents the segmentation results for a sample, comparing the baseline (UVAST), ViSTec without grh, ViSTec without U, ViSTec, and the ground truth. Notably, our method yields superior results in terms of both segmentation and classification. First, the start and end points predicted by the baseline model do not align well with the ground truth. The baseline model tends to segment labels into equal-length segments, except for "Serve". In contrast, our proposed method detects the stroke event first and then performs the segmentation, thus allowing it to detect actions of varying duration. Second, our method demonstrates commendable classification performance as well. The use of uncertainty and the graph module can effectively introduce domain knowledge learned from historical data to fix incorrect predictions. Furthermore, offline tests on a single A100 GPU show ViSTec achieving an inference speed of 39.3 frames per second, which exceeds the typical frame rate of broadcast match videos, enabling real-time processing.
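As an aside, the frame-wise label generation for the baselines described above is straightforward to express in code. The following is a minimal sketch under our own naming assumptions (label 0 is taken to be the background class):

```python
def frame_labels(events, num_frames, half_window=20, background=0):
    """Expand (frame_index, technique_id) annotations into per-frame labels.

    Each stroke claims up to `half_window` frames on either side of its
    timestamp (a 40-frame span); overlapping spans are split at the midpoint.
    """
    labels = [background] * num_frames
    events = sorted(events)
    for k, (t, tec) in enumerate(events):
        lo = max(0, t - half_window)
        hi = min(num_frames - 1, t + half_window)
        if k > 0:  # split the overlap with the previous stroke at the midpoint
            lo = max(lo, (events[k - 1][0] + t) // 2 + 1)
        if k + 1 < len(events):
            hi = min(hi, (t + events[k + 1][0]) // 2)
        for f in range(lo, hi + 1):
            labels[f] = tec
    return labels

# Toy usage: two strokes at frames 30 and 55 in a 100-frame clip.
labels = frame_labels([(30, 3), (55, 1)], num_frames=100)
```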
Figure 2: Illustration of the segmentation result for a sample from our dataset with the ground-truth sequence "Serve, Short, Short, Topspin, Block".

Evaluation with Case Studies
To validate the effectiveness of ViSTec, we present two case studies conducted with senior analysts from the Chinese table tennis team. The two cases address the most critical and complex facets of sports analysis that require domain knowledge: technique analysis and tactical analysis. Technique analysis demands meticulous observation of each technical action of different players, while tactical analysis necessitates careful attention to temporal correlations across various scales. In the first case, we analyze the players' technical actions based on visual features extracted from the video, uncovering personalized characteristics and correlations. In the second, we perform analysis based on sequences of stroke techniques obtained by ViSTec from video, identifying tactics with a high scoring rate.

Case 1: Analyzing Personalized Characteristics of Technical Actions
Players have unique features in their technical actions, and understanding the relations among technical actions is key to comprehending a player's characteristics. The conventional approach requires domain experts to review long videos and summarize the various techniques manually. However, with the visual features extracted by our model, this process can now be accomplished automatically, streamlining the analysis and reducing the reliance on manual expertise. After extracting features for each stroke from broadcast video with ViSTec, we employ t-SNE for dimensionality reduction, projecting them onto a two-dimensional plane shown in a scatter plot in Figure 3 (A). Notably, the stroke features form clusters based on technique categories, implying that ViSTec attends well to technical detail and context. This is especially impressive considering that mere actions are often not informative enough to classify owing to occlusion and limited movement amplitude, making contextual details such as the ball trajectory necessary.

Figure 3: Case 1: (A) displays visual features of the strokes from two Japanese players with t-SNE. (B) highlights techniques that share noticeable similarities. (C) shows the technique actions highlighted in (B).

As shown in Figure 3 (B), for the Japanese players, the techniques "Block" and "Topspin" exhibit a striking analogy, as do "Push" and "Short". These similarities within deep visual features "reveal the high consistency and deceptive nature of their certain techniques," as noted by the experts. This observation furnishes valuable insights, allowing opponents to enhance their preparation and anticipation of the players' moves in specific techniques, a critical factor in the fast-paced world of racket sports. Similar analysis can be transferred to other players in real time, enabling the understanding of the unique characteristics of an opponent's actions.

Case 2: Discovering Optimal Tactical Choices
In racket sports, tactics hold paramount importance and are often meticulously selected, taking into account the opponents' characteristics, the current status of the game, and the individual's strengths. A tactic in table tennis refers to the techniques employed in consecutive strokes. Analyzing these tactics presents a complex challenge, given the degree of freedom in the temporal dimension and the multitude of possibilities. Traditionally, this analysis has required domain experts to identify and analyze the techniques used in each round, a task that can be demanding and time-consuming. Our proposed model, however, represents a significant advancement in this domain, as it is capable of accurately recognizing sequences of techniques directly from raw video data, thereby unlocking new potential in tactical analysis.

We take 18 match videos from WTT to analyze tactical patterns with high scoring rates. We begin by extracting sequences of stroke techniques from the match videos using our model, subsequently conducting an analysis to discern the correlation between tactics and scoring rates over sets of three consecutive strokes. As demonstrated in Figure 4(B), the sequence "Serve, Short, Topspin" exhibits the highest scoring rate. This suggests that when serving on our side and the subsequent opponent's stroke involves a "Short" technique, the optimal choice in terms of scoring rate is to respond with a "Topspin" stroke in the next play. Moving on to Figure 4(C), it becomes evident that following two strokes of the "Serve" and "Short" techniques, persisting with another "Short" stroke or responding with "Others" leads to a sudden drop in the scoring rate to around 0.43. This underscores that taking the initiative early in the game and launching an offensive increases our likelihood of winning. This observation is corroborated in other sequences with high scoring rates, where, in the majority of these sequences, the winning athlete initiates offensive techniques earlier than his opponent. The discovered insights were confirmed by professional analysts collaborating with the Chinese national table tennis team.

Figure 4: Case 2: (A) shows the structure of a table tennis tactic, consisting of three consecutive strokes. (B) illustrates the scoring rate of three consecutive strokes. (C) illustrates the scoring rate using different techniques after "Serve, Short".
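To make this scoring-rate computation concrete, here is a small self-contained sketch of the three-stroke tactic analysis. The rally record format (a recognized technique sequence plus a flag for whether the serving side won the point) and all names are our own illustrative assumptions rather than the paper's code.

```python
from collections import defaultdict

def tactic_scoring_rates(rallies):
    """Scoring rate of each 3-stroke tactic.

    rallies: list of (techniques, server_won) pairs, where `techniques` is the
    recognized stroke sequence of a rally and `server_won` indicates whether
    the side that played stroke 1 (the serve) won the point.
    """
    wins, totals = defaultdict(int), defaultdict(int)
    for techniques, server_won in rallies:
        for k in range(len(techniques) - 2):
            tactic = tuple(techniques[k:k + 3])
            totals[tactic] += 1
            # Strokes alternate sides, so the tactic starting at stroke k
            # (0-indexed) belongs to the server iff k is even.
            if server_won == (k % 2 == 0):
                wins[tactic] += 1
    return {t: wins[t] / totals[t] for t in totals}

# Toy usage on two recognized rallies.
rates = tactic_scoring_rates([
    (["Serve", "Short", "Topspin", "Block"], True),
    (["Serve", "Short", "Short", "Topspin"], False),
])
```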
Traditionally, this analysis has required domain experts to identify and analyze the techniques used in each round, a task that can be demanding and time-consuming. Our proposed model, however, represents a significant advancement in this domain, capable of accurately recognizing sequences of techniques directly from raw video data, thereby unlocking new potentials in tactical analysis. We take 18 match videos from WTT to analyze tactical patterns of high scoring rate. We begin by extracting sequences of stroke techniques from match videos using our model, subsequently conducting an analysis to discern the correlation between tactics and scoring rates in sets of three consecutive strokes. As demonstrated in Figure 4(B), the sequence “Serve Short Topspin” exhibits the highest scoring rate. This suggests that when serving on our side and the subsequent opponent’s stroke involves a “Short” technique, the optimal choice in terms of scoring rate is to respond with a “Topspin” stroke in the next play. Moving on to Figure 4(C), it becomes evident that following two strokes of the “Serve” and “Short” techniques, persisting with another “Short” stroke or responding with “Others” leads to a sudden drop in scoring rate to around 0.43. This underscores that taking the initiative early in the game and launching an offensive increases our likelihood of winning. This observation is corroborated in other sequences with high scoring rate, where, in the majority of these sequences, the athlete who wins initiates offensive techniques earlier than his opponents. The discovered insights were confirmed by professional analysts collaborating with the Chinese national table tennis team. Figure 4: Case 2: (A) shows the structure of a table tennis tactic, consisting of consecutive three strokes. (B) illustrates the scoring rate of consecutive three strokes. (C) illustrates the scoring rate using different techniques after “Serve, Short”. Such analysis can be further applied to specific phases of a match and individual opponents, enabling the discovery of optimal tactical choices for each segment of the game against a particular player. This refined approach offers tangible benefits to both players and coaches in their tactical preparation, fostering a more nuanced understanding of the game’s dynamics and enhancing competitive edge. Conclusion In this work, we propose a model, ViSTec, to recognize and analyze stroke techniques in racket sports videos, facilitating automated tactical analysis. The fundamental insight lies in integrating sparse visual information with contextual domain knowledge to enhance high-level video understanding. The efficacy of the proposed model is substantiated through a series of comparative experiments, ablation studies, and two case studies validated by analysts from the Chinese national table tennis team. In the future, we envision extending this work further in two aspects. First, we plan to incorporate more nuanced context into the domain knowledge module, such as ball placement and player position. This enhancement aims to uncover optimal tactics tailored to specific contexts, adding another layer of sophistication to our analysis. Second, we intend to utilize the current technique-transition graph to unearth personalized tactical features, a step that promises to further enrich our discovery of insightful patterns. 
Acknowledgements
The work was supported by NSF of China (U22A2032), Key "Pioneer" R&D Projects of Zhejiang Province (2023C01120), and the Collaborative Innovation Center of Artificial Intelligence by MOE and Zhejiang Provincial Government (ZJU).

References
Behrmann, N.; Golestaneh, S. A.; Kolter, Z.; Gall, J.; and Noroozi, M. 2022. Unified Fully and Timestamp Supervised Temporal Action Segmentation via Sequence to Sequence Translation. In Avidan, S.; Brostow, G. J.; Cissé, M.; Farinella, G. M.; and Hassner, T., eds., Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXV, volume 13695 of Lecture Notes in Computer Science, 52–68. Springer.
Bian, J.; Wang, Q.; Xiong, H.; Huang, J.; Liu, C.; Li, X.; Cheng, J.; Zhao, J.; Lu, F.; and Dou, D. 2022. P2A: A Dataset and Benchmark for Dense Action Detection from Table Tennis Match Broadcasting Videos. CoRR, abs/2207.12730.
Carreira, J.; and Zisserman, A. 2017. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 4724–4733. IEEE Computer Society.
Chen, M.; Li, B.; Bao, Y.; AlRegib, G.; and Kira, Z. 2020. Action Segmentation With Joint Self-Supervised Temporal Domain Adaptation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 9451–9460. Computer Vision Foundation / IEEE.
Chen, M.; Wei, F.; Li, C.; and Cai, D. 2022. Frame-wise Action Representations for Long Videos via Sequence Contrastive Learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 13791–13800. IEEE.
Chu, X.; Xie, X.; Ye, S.; Lu, H.; Xiao, H.; Yuan, Z.; Chen, Z.; Zhang, H.; and Wu, Y. 2022. TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations. IEEE Transactions on Visualization and Computer Graphics, 28(1): 118–128.
Dai, W.; Xu, Q.; Yu, Y.; and Zhou, Z. 2019. Bridging Machine Learning and Logical Reasoning by Abductive Learning. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, 2811–2822.
Deng, D.; Wu, J.; Wang, J.; Wu, Y.; Xie, X.; Zhou, Z.; Zhang, H.; Zhang, X. L.; and Wu, Y. 2021. EventAnchor: Reducing Human Interactions in Event Annotation of Racket Sports Videos. In Kitamura, Y.; Quigley, A.; Isbister, K.; Igarashi, T.; Bjørn, P.; and Drucker, S. M., eds., CHI '21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, 73:1–73:13. ACM.
Farha, Y. A.; and Gall, J. 2019. MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, 3575–3584. Computer Vision Foundation / IEEE.
Huang, Y.; Liao, I.; Chen, C.; Ik, T.; and Peng, W. 2019. TrackNet: A Deep Learning Network for Tracking High-speed and Tiny Objects in Sports Applications. In 16th IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS 2019, Taipei, Taiwan, September 18-21, 2019, 1–8. IEEE.
Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; and Fei-Fei, L. 2014. Large-Scale Video Classification with Convolutional Neural Networks. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, 1725–1732. IEEE Computer Society.
Kim, Y. H.; Kang, H.; and Kim, S. J. 2022. A Sliding Window Scheme for Online Temporal Action Localization. In Avidan, S.; Brostow, G. J.; Cissé, M.; Farinella, G. M.; and Hassner, T., eds., Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIV, volume 13694 of Lecture Notes in Computer Science, 653–669. Springer.
Lea, C.; Flynn, M. D.; Vidal, R.; Reiter, A.; and Hager, G. D. 2017. Temporal Convolutional Networks for Action Segmentation and Detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 1003–1012. IEEE Computer Society.
Lin, C.; Li, J.; Wang, Y.; Tai, Y.; Luo, D.; Cui, Z.; Wang, C.; Li, J.; Huang, F.; and Ji, R. 2020. Fast learning of temporal action proposal via dense boundary generator. In Proceedings of the AAAI Conference on Artificial Intelligence, 11499–11506.
Lin, T.; Aouididi, A.; Chen, Z.; Beyer, J.; Pfister, H.; and Wang, J.-H. 2023. VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching. arXiv:2307.12539.
Lin, T.; Liu, X.; Li, X.; Ding, E.; and Wen, S. 2019. BMN: Boundary-Matching Network for Temporal Action Proposal Generation. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 3888–3897. IEEE.
Lin, T.; Zhao, X.; Su, H.; Wang, C.; and Yang, M. 2018. BSN: Boundary Sensitive Network for Temporal Action Proposal Generation. In Ferrari, V.; Hebert, M.; Sminchisescu, C.; and Weiss, Y., eds., Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part IV, volume 11208 of Lecture Notes in Computer Science, 3–21. Springer.
Liu, Y.; Wang, L.; Wang, Y.; Ma, X.; and Qiao, Y. 2022. FineAction: A Fine-Grained Video Dataset for Temporal Action Localization. IEEE Transactions on Image Processing, 31: 6937–6950.
Polk, T.; Jäckle, D.; Häußler, J.; and Yang, J. 2020. CourtTime: Generating Actionable Insights into Tennis Matches Using Visual Analytics. IEEE Transactions on Visualization and Computer Graphics, 26(1): 397–406.
Polk, T.; Yang, J.; Hu, Y.; and Zhao, Y. 2014. TenniVis: Visualization for Tennis Match Analysis. IEEE Transactions on Visualization and Computer Graphics, 20(12): 2339–2348.
Shao, D.; Zhao, Y.; Dai, B.; and Lin, D. 2020. FineGym: A Hierarchical Video Dataset for Fine-Grained Action Understanding. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 2613–2622. Computer Vision Foundation / IEEE.
Shou, Z.; Chan, J.; Zareian, A.; Miyazawa, K.; and Chang, S. 2017. CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 1417–1426. IEEE Computer Society.
Singhania, D.; Rahaman, R.; and Yao, A. 2021. Coarse to Fine Multi-Resolution Temporal Convolutional Network. arXiv:2105.10859.
VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training. CoRR, abs/2203.12602. Tran, D.; Bourdev, L. D.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning Spatiotemporal Features with 3D Convolutional Networks. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, 4489–4497. IEEE Computer Society. Voeikov, R.; Falaleev, N.; and Baikulov, R. 2020. TTNet: Real-time temporal and spatial video analysis of table tennis. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2020, Seattle, WA, USA, June 14-19, 2020, 3866–3874. Computer Vision Foundation / IEEE. Wang, J.; Deng, D.; Xie, X.; Shu, X.; Huang, Y.; Cai, L.; Zhang, H.; Zhang, M.; Zhou, Z.; and Wu, Y. 2021a. TacValuer: Knowledge-based Stroke Evaluation in Table Tennis. In Zhu, F.; Ooi, B. C.; and Miao, C., eds., KDD ’21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 1418, 2021, 3688–3696. ACM. Wang, J.; Wu, J.; Cao, A.; Zhou, Z.; Zhang, H.; and Wu, Y. 2021b. Tac-Miner: Visual Tactic Mining for Multiple Table Tennis Matches. IEEE Transactions on Visualization and Computer Graphics, 27(6): 2770–2782. Wang, J.; Zhao, K.; Deng, D.; Cao, A.; Xie, X.; Zhou, Z.; Zhang, H.; and Wu, Y. 2020. Tac-Simur: Tactic-based Simulative Visual Analytics of Table Tennis. IEEE Trans. Vis. Comput. Graph., 26(1): 407–417. Wang, L.; Xiong, Y.; Wang, Z.; Qiao, Y.; Lin, D.; Tang, X.; and Gool, L. V. 2019. Temporal Segment Networks for Action Recognition in Videos. IEEE Trans. Pattern Anal. Mach. Intell., 41(11): 2740–2755. Wu, Y.; Lan, J.; Shu, X.; Ji, C.; Zhao, K.; Wang, J.; and Zhang, H. 2018. iTTVis: Interactive Visualization of Table Tennis Data. IEEE Trans. Vis. Comput. Graph., 24(1): 709–718. Xu, M.; Zhao, C.; Rojas, D. S.; Thabet, A.; and Ghanem, B. 2020. G-tad: Sub-graph localization for temporal action detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10156–10165. Ye, S.; Chen, Z.; Chu, X.; Wang, Y.; Fu, S.; Shen, L.; Zhou, K.; and Wu, Y. 2021. ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization. IEEE Transactions on Visualization and Computer Graphics, 27(2): 860–869. Yi, F.; Wen, H.; and Jiang, T. 2021. ASFormer: Transformer for Action Segmentation. In 32nd British Machine Vision Conference 2021, BMVC 2021, Online, November 22-25, 2021, 236. BMVA Press. Zhang, H.; Sciutto, C.; Agrawala, M.; and Fatahalian, K. 2021. Vid2player: Controllable video sprites that behave and appear like professional tennis players. ACM Transactions on Graphics (TOG), 40(3): 1–16. Zhang, H.; Yuan, Y.; Makoviychuk, V.; Guo, Y.; Fidler, S.; Peng, X. B.; and Fatahalian, K. 2023. Learning Physically Simulated Tennis Skills from Broadcast Videos. ACM Trans. Graph., 42(4). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8498
Label Attentive Distillation for GNN-Based Graph Classification
Xiaobin Hong1, Wenzhong Li1*, Chaoqun Wang2, Mingkai Lin1, Sanglu Lu1
1State Key Laboratory for Novel Software Technology, Nanjing University
2The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China
{xiaobinhong, mingkai}@smail.nju.edu.cn, [email protected], {lwz, sanglu}@nju.edu.cn

Abstract
Graph Neural Networks (GNNs) have emerged as a powerful tool for modeling graph-structured data, exhibiting remarkable potential in applications such as social networks, recommendation systems, and molecular structures. However, conventional GNNs perform node-level feature aggregation from neighbors without considering graph-label information, which leads to a misaligned embedding problem that may have a detrimental effect on graph-level tasks such as graph classification. In this paper, we propose a novel label-attentive distillation method called LAD-GNN for graph representation learning to solve this problem. It alternately trains a teacher model and a student GNN with a distillation-based approach. In the teacher model, a label-attentive encoder is proposed to encode the label information, fusing it with the node features to generate an ideal embedding. In the student model, the ideal embedding is used as intermediate supervision to urge the student GNN to learn class-friendly node embeddings that facilitate graph-level tasks. Generally, LAD-GNN is an enhanced GNN training approach that can be incorporated with an arbitrary GNN backbone to improve performance without a significant increase in computational cost. Extensive experiments with 7 GNN backbones on 10 benchmark datasets show that LAD-GNN improves the SOTA GNNs in graph classification accuracy. The source codes of LAD-GNN are publicly available at https://github.com/XiaobinHong/LAD-GNN.

Introduction
Graph Neural Networks (GNNs) (Kipf and Welling 2016, 2017) are adept at converting unstructured data into low-dimensional representations by effectively capturing both node features and topological dependencies. GNN learning tasks broadly focus on node classification, link prediction, and graph classification (You et al. 2021; Wang et al. 2021), which have demonstrated their effectiveness in various fields such as protein-to-protein interaction (Nouranizadeh et al. 2021), molecular medicine (Li et al. 2022b), information retrieval (Chen et al. 2022), etc. This paper focuses on graph classification, a graph-level task that aims to learn a graph representation with GNNs to predict the graph labels.
*Corresponding author.
Figure 1: (a) The pipeline of conventional GNN-based graph classification. (b) Illustration of misaligned node embeddings and label embeddings for GraphSAGE on the MUTAG dataset, where different colors denote different graph classes. (c) The violin plots of correlation statistics of node/label embeddings on the MUTAG dataset.
It shows that our method has fewer negative correlations and much higher positive correlation scores compared to the other GNNs.
The conventional pipeline of GNN-based graph classification is shown in Fig. 1 (a), which consists of the following processes: (1) The input graph is fed into a GNN backbone to generate node embeddings by aggregating neighbors' information via message passing; (2) A graph-level representation is formed by applying a readout function (e.g., graph pooling (Duvenaud et al. 2015)) on the node embeddings; (3) The graph representation is fed to a classifier, which is trained with supervised labels and then applied for graph classification.
However, a major issue of the conventional GNN-based pipeline for graph classification lies in that it computes node embeddings without considering graph-level information. As a result, the readout function pools the diverse local node embeddings to form a global graph representation, which suffers from an embedding misalignment problem that can jeopardize the accuracy of graph-level tasks. For example, Fig. 1 (b) shows the label embeddings and node embeddings of the popular GNN named GraphSAGE (Hamilton 2017) on the MUTAG dataset, where different colors represent different classes. According to the figure, the node embeddings are not in line with their corresponding label embeddings, and a large number of blue and red node embeddings are mixed up in the feature space. We call this the "embedding misalignment" phenomenon, where the diverse unaligned node embeddings generated by GNNs lead to less discriminative representations among graph classes. For further explanation, we show in Fig. 1(c) the statistics of correlation coefficients between node embeddings and the corresponding label embeddings for four widely used GNN backbones, i.e., GCN (Kipf and Welling 2017), GAT (Veličković et al. 2018), GIN (Xu et al. 2019), and GraphSAGE, on the MUTAG dataset. It shows that in all GNN backbones a large ratio of node embeddings is negatively correlated with the ground-truth labels.
To address this issue, we propose a label-attentive design for GNNs to integrate global graph information into node embeddings to improve graph-level tasks. As shown in Fig. 1(c), our approach (OURS) yields a much higher proportion of positive correlations to the ground truth, which is more favorable for an ideal graph representation. Specifically, this paper proposes a novel Label Attentive Distillation method named LAD-GNN for graph representation learning to overcome the embedding misalignment problem. The pipeline of the proposed LAD-GNN is illustrated in Fig. 2, which consists of a two-phase training process and an inference step described as follows. (1) Label-attentive teacher training: we propose an auxiliary neural network called the label-attentive encoder that encodes the ground-truth label into a label embedding, which is attentively combined with the node embedding generated by the GNN backbone to form an ideal embedding. (2) Distillation-based student learning: we train a student GNN to generate class-friendly node embeddings by distilling knowledge from the teacher. The intuition is that the teacher model trained with augmented labels can generate an informative feature map to provide effective supervision for the student model.
We adopt a multi-task training paradigm to train the student GNN to minimize a classification loss and an auxiliary distillation loss, which allows it to inherit the class-specific knowledge from the teacher model and encourages the student to generate class-friendly node embeddings that facilitate graph-level tasks. At deployment time, only the student model is used and no graph labels are required, which ensures no label information leakage.
We empirically evaluate our method on graph classification tasks on 10 benchmark datasets with 7 commonly used GNN backbones, and conduct performance comparisons with 9 other GNN training methods (such as manual/automated graph augmentation methods and graph distillation methods). The experimental results demonstrate that LAD-GNN significantly outperforms the original GNN backbones: it achieves up to a 16.8% accuracy improvement (on the NCI1 dataset with the GraphSAGE backbone). We also perform extensive experiments on parameter sensitivity and visualization for detailed analysis and assessment.
Our major contributions are summarized as follows.
• We propose a novel label-attentive distillation method called LAD-GNN for graph representation learning. It introduces a teacher model trained with an auxiliary label encoder to generate an ideal embedding, and proposes a distillation-like approach to train a student GNN with intermediate supervision to learn class-friendly node embeddings. Generally, LAD-GNN is an enhanced GNN training approach that can be incorporated with arbitrary GNNs on graph-level tasks without a significant increase in computational cost.
• We introduce a novel label-attentive encoding architecture to encode the graph labels into the latent space, fused with the node embeddings, and propose an auxiliary distillation supervision method to solve the embedding misalignment problem, which can enhance the GNN backbone to capture informative class-specific information that facilitates graph-level tasks.
• We perform extensive experiments using 7 GNN backbones on 10 benchmark datasets to validate graph classification performance, and conduct comparisons with state-of-the-art GNN training methods. Experimental results justify the superiority and effectiveness of the proposed method.

Related Works
Graph Neural Networks
Graph Neural Networks (GNNs) have received tremendous attention due to their superiority in a wide variety of graph learning tasks (Park et al. 2020; Hong et al. 2021; Wang et al. 2022). The pioneering work on GNNs was the Graph Convolutional Network (GCN) (Kipf and Welling 2017), which for the first time introduced spectral convolution to graph data and employed Chebyshev polynomials to accelerate its training. Based on GCN, graph attention networks (Veličković et al. 2018) dynamically learn the weights (attention scores) on the edges when performing message passing, introducing the attention mechanism into neighborhood aggregation. The Graph Isomorphism Network (GIN) (Xu et al. 2019) used an injective multiset function for neighbor aggregation, which has been shown to be as powerful as the 1-WL test in distinguishing graph structures. With the prevalence of the Transformer framework in CV and NLP tasks, some studies (Rong et al. 2020a; Wu et al. 2022; Müller et al. 2023) applied Transformers to GNNs, incorporating the correlations between nodes into neighborhood aggregation. Customized GNNs for graph-level tasks mainly focus on refining graph readout strategies, which can be categorized into pooling methods (Bianchi, Grattarola, and Alippi 2020; Liu et al.
2022), attention mechanism methods (Nouranizadeh, Matinkia, and Rahmati 2021; Chen et al. 2023), and information-theoretic methods (Gao et al. 2021; Han et al. 2022).

Label Enhancement Methods
Labels are commonly used as supervision in computing the loss function at the output. There have been plenty of works that used label-enhanced techniques to boost model training (Bengio, Weston, and Grangier 2010; Sun et al. 2017; Yang et al. 2021; Peng et al. 2022). The usage of label-enhanced GNNs can be categorized into label-enhanced node embedding and graph structure optimization. In label-enhanced node embedding (Wang 2021; Shi et al. 2020; Li et al. 2022a), the labels were encoded and then concatenated with or summed with the node attributes to enhance feature representation. Within the context of label-enhanced graph structure optimization (Chen et al. 2019; Yang et al. 2021), labels played a crucial role in refining the adjacency matrix to facilitate the unimpeded propagation of topological information among nodes that share a common label. While existing label-enhancement GNNs mostly focused on improving node-level tasks, our work for the first time introduces a label-attentive approach to enhance GNN learning for graph-level tasks.

Knowledge Distillation
Our method shares some common principles with knowledge distillation (KD) (Hinton et al. 2015; Zhang et al. 2020b). Originally, KD aims to reduce the model size when deployed on devices with limited computational resources. In KD, a large teacher model is trained first, and its predictions are used as soft labels to supervise the training of a small student model. There have been a few works that adopted KD for GNNs (Guo et al. 2023). Yang et al. proposed a graph knowledge distillation framework that can inject the knowledge of an arbitrarily learned GNN model into a well-designed student model (Yang, Liu, and Shi 2021). Jing et al. trained a multi-talented student GNN that amalgamates knowledge from a couple of teacher GNNs with heterogeneous architectures to handle distinct tasks (Jing et al. 2021). Different from the existing GNN knowledge distillation works, our proposed distillation-like method introduces a novel label-attentive encoder to generate an ideal embedding, which is used as intermediate supervision to train the student GNN to generate task-friendly graph representations.

Methodology
Problem Formulation
Given a graph dataset with known labels $\mathcal{D} = (\mathcal{G}, \mathcal{Y}) = \{(G_i, y_i)\}_{i=1}^{N}$, where $G_i \in \mathcal{G}$ is the $i$-th graph in the dataset. Denote $G_i = (A_i, X_i)$, where $A_i \in \mathbb{R}^{n_i \times n_i}$ is the adjacency matrix describing the link relationships of the node set $V_i$; $X_i \in \mathbb{R}^{n_i \times d}$ is the node feature matrix; and $n_i$, $d$ are the number of nodes and the feature dimension of the $i$-th graph $G_i$, respectively. The goal of graph representation learning is to learn a low-dimensional embedding for each graph to predict its label. Let $\mathcal{Y} = \{y_i\}_{i=1}^{N}$ be the label set, where $y_i$ denotes the class label of graph $G_i$ and $N$ is the dataset size. Without loss of generality, we omit the index $i$ hereafter to simplify the description.

Overall Framework
The overall framework of LAD-GNN is shown in Fig. 2. It contains a two-phase process that trains a teacher model and a student model iteratively. Firstly, it proposes a label-attentive teacher training method to train a teacher GNN to generate label-augmented node embeddings.
Specifically, it introduces an auxiliary neural network called the label-attentive encoder that encodes the ground-truth label into a label embedding, and then combines the label embedding with the node embeddings generated by the GNN backbone using an attention mechanism to form an ideal embedding, which is fed into the readout function and classification head to predict the graph label. The label-attentive encoder is jointly trained with the GNN backbone to minimize the classification loss. Secondly, it applies a distillation-based student learning method to train a student GNN model. In this phase, the ideal embedding from the teacher model serves as intermediate supervision to distill the student. As shown in the figure, the student model shares a classification head with the teacher model, and it trains the student GNN to minimize both the classification loss and the distillation loss, which lets it inherit the knowledge from the teacher model and generate class-friendly node embeddings that facilitate graph-level tasks.
Before getting into the details of label-attentive distillation, we first introduce the following general components for GNN-based graph classification.
GNN Backbone. It is used to extract the node-level features $H = \{H_v \mid v \in V\}$, where $H_v \in \mathbb{R}^{d'}$ denotes node $v$'s embedding, which synthesizes the topology and the initial node attributes. A GNN layer can be formalized in simplified form as:
$$H_v^{(l+1)} = \mathrm{UPT}\big(H_v^{(l)}, \mathrm{AGG}(\{H_u^{(l)} \mid u \in \mathcal{N}_v\})\big), \quad \forall v \in V \tag{1}$$
where $H_v^{(l+1)}$ denotes node $v$'s latent representation at the $(l+1)$-th layer; $H^{(0)} = X$ is initialized by the node feature matrix; $\mathcal{N}_v$ denotes the neighbors of node $v$; and AGG and UPT are the aggregation and update functions, respectively. The node embeddings $H$ are aggregated by the GNN backbone as:
$$H = f(A, X; \theta_f). \tag{2}$$
Readout Function. It is used to form a graph-level representation from the node embeddings, which can be regarded as a graph pooling operator:
$$Z_G = \mathrm{POOL}(\{H_v \mid v \in V\}), \tag{3}$$
where $Z_G \in \mathbb{R}^{d'}$ denotes the representation of graph $G$, and $d'$ is the feature dimension. Average pooling (Duvenaud et al. 2015) and max pooling (Xu et al. 2019) are the common pooling operations, which treat all nodes equally. Considering the importance of different nodes, there are also some customized pooling operations, such as subgraph selectors (Wu et al. 2020; Li et al. 2020) and attention mechanisms (Lee, Lee, and Kang 2019).
Classification Head. It is used to predict graph labels based on the pooled graph representations: $\hat{Y} = g(Z; \phi_g)$, where $\hat{Y} \in \mathbb{R}^{N \times c}$ is the output one-hot prediction with $c$ classes, $\phi_g$ denotes the classification head parameters, and a Multi-Layer Perceptron (MLP) is commonly employed.
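To make these generic components concrete, the following is a minimal PyTorch Geometric sketch of the backbone f (Eqs. 1-2), the readout POOL (Eq. 3), and the classification head g. The GCNConv layers (mirroring the 2-layer aggregation setting reported for GCN/GAT/GraphSAGE in the experiments), the hidden width, and the mean-pooling readout are illustrative assumptions rather than the exact released configuration.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool


class GraphClassifier(nn.Module):
    """Generic pipeline of Eqs. 1-3: backbone f -> readout POOL -> head g."""

    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        # Two message-passing layers instantiate Eq. 1 and yield H = f(A, X; theta_f) (Eq. 2).
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
        # Classification head g(.; phi_g): an MLP, as the paper notes is common.
        self.head = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, num_classes)
        )

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)       # node embeddings H
        z = global_mean_pool(h, batch)      # Z_G = POOL({H_v}) (Eq. 3), average pooling
        return self.head(z), h              # logits for Y-hat, plus H for later distillation
```

Returning the node embeddings alongside the logits is a convenience for the distillation phase described below, where the student's H must be compared against the teacher's ideal embedding.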
Label-Attentive Teacher Training
As illustrated in Fig. 2, the teacher model consists of a GNN backbone and a label-attentive encoder. Given a sample $(G, y)$, we feed the graph $G$ to the GNN backbone to generate node representations. To align the node embedding with the global label, we introduce a label-attentive encoder architecture as follows.
Figure 2: The overall framework of LAD-GNN. The pipeline consists of three steps: (1) Label-attentive training using known labels to train a teacher GNN; (2) Distillation-based learning to train a student GNN; (3) Inference using the student GNN for graph classification (without label input).
Label-Attentive Encoder. The label-attentive encoder consists of a label encoder and several layers of attention mechanisms and operations to form an ideal embedding. Taking the ground-truth label $y_G$ as input, the label encoder $h(\cdot)$ encodes the input into a latent embedding. In practice, we propose to use a Multi-Layer Perceptron (MLP):
$$H_l = h(y_G \mid G \in \mathcal{G}), \tag{4}$$
where $H_l$ is the label embedding of graph $G$. The attention mechanisms work similarly to the popular Transformer (Vaswani et al. 2017) architecture. The label embedding and the node embedding from the teacher GNN go through a layer normalization (LN) to alleviate covariate shift, and are then fed into a scaled dot-product attention layer for feature fusion. The add & normalization operation is used to alleviate internal covariate shift and enhance the independence between features. The feed-forward network (FFN) processes the output of the attention layer to form a higher-level latent representation. This architecture enables the model to capture intricate relationships between label and node embeddings, while enhancing model expressiveness by adding nonlinearity. The overall operations can be formulated as:
$$H'_v = \mathrm{Attention}(H_v, H_l) = \mathrm{Softmax}\left(\frac{QK^{T}}{\sqrt{d_k}} \cdot \tau\right)V,$$
$$H_v^{(T)} = \mathrm{FFN}(\mathrm{LN}(H'_v + H_v)) + H'_v, \tag{5}$$
where $H_l$ is the label embedding generated by the label encoder; $H_v$ is the node embedding generated by the GNN backbone; $Q = H_l W_Q$ is the label embedding projection; $K = H_v W_K$ and $V = H_v W_V$ are the node embedding projections; and $\tau$ is the attention temperature coefficient (Zhang et al. 2021).
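A minimal PyTorch sketch of the label-attentive encoder in Eqs. 4-5 follows. It assumes the single label query attends over all node keys of a graph and that the fused vector is broadcast back to every node before the add & norm and FFN; this broadcasting, the MLP shapes, and the LN placement are one plausible reading of the equations, not the authors' verified implementation.

```python
import torch
from torch import nn


class LabelAttentiveEncoder(nn.Module):
    """Fuse the label embedding H_l (Eq. 4) with node embeddings H_v via Eq. 5."""

    def __init__(self, num_classes: int, dim: int, tau: float = 0.1):
        super().__init__()
        # Label encoder h(.): an MLP over the one-hot graph label (Eq. 4).
        self.label_encoder = nn.Sequential(
            nn.Linear(num_classes, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.W_q = nn.Linear(dim, dim, bias=False)  # Q = H_l W_Q
        self.W_k = nn.Linear(dim, dim, bias=False)  # K = H_v W_K
        self.W_v = nn.Linear(dim, dim, bias=False)  # V = H_v W_V
        self.ln = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.tau = tau                              # attention temperature in Eq. 5

    def forward(self, h_v: torch.Tensor, y_onehot: torch.Tensor) -> torch.Tensor:
        # h_v: (n, dim) node embeddings of one graph; y_onehot: (1, num_classes).
        h_l = self.label_encoder(y_onehot)          # (1, dim) label embedding H_l
        q, k, v = self.W_q(h_l), self.W_k(h_v), self.W_v(h_v)
        scores = q @ k.t() / (k.size(-1) ** 0.5) * self.tau
        h_fused = torch.softmax(scores, dim=-1) @ v   # (1, dim) attended summary
        h_att = h_fused.expand_as(h_v)                # broadcast H'_v to all nodes
        return self.ffn(self.ln(h_att + h_v)) + h_att  # ideal embedding H_v^{(T)}
```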
Teacher Model Training. The teacher model employs the GNN backbone and the label-attentive encoder to generate the ideal embedding, which flows into the readout function and the shared classification head to output the prediction $\hat{Y}$. The objective function of teacher model training is the cross entropy for graph classification:
$$\mathcal{L}_{cls} = \frac{1}{N}\sum_{i=1}^{N} -\big(y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\big). \tag{6}$$

Distillation-based Student Learning
As shown in Fig. 2, the student model learns from the teacher model with knowledge distillation, and it shares the classification head with the teacher model. During inference, a graph $G$ is input into the student GNN to generate node representations, which are fed into the readout function and the classification head to predict its corresponding label $\hat{y}$.
Specifically, after the teacher model is trained to convergence, we use the teacher model's output, i.e., the ideal embedding, as intermediate supervision to guide the student's GNN backbone to learn enhanced node embeddings with a distillation-like method. We use $H^{(T)}$ and $H^{(S)}$ to denote the node embeddings extracted from the teacher GNN and the student GNN, respectively. To distill the class-dependent knowledge from the teacher to the student model, we minimize the following distillation loss, expressed as a Mean Square Error (MSE):
$$\mathcal{L}_{dis} = \frac{1}{N}\sum_{i=1}^{N} \big\|H_i^{(T)} - H_i^{(S)}\big\|_2^2. \tag{7}$$
The training of the student model is a multi-task paradigm that urges the student GNN to generate the ideal embedding and pushes the classifier to optimize graph-level tasks. Therefore, the student model is trained with the same set of training samples as the teacher's to minimize the following comprehensive objective function:
$$\mathcal{L} = \mathcal{L}_{cls} + \lambda \cdot \mathcal{L}_{dis}, \tag{8}$$
where $\mathcal{L}_{cls}$ is the classification loss defined in Eq. 6; $\mathcal{L}_{dis}$ is the distillation loss defined in Eq. 7; and $\lambda$ is the hyperparameter balancing the classification and distillation losses.

Algorithm 1: LAD-GNN Algorithm
Input: graph dataset $\mathcal{D}$, labels $\mathcal{Y}$, hyperparameters $\lambda$, $\tau$.
Output: optimized model parameters, prediction of the graph labels $\hat{Y}$.
1: Randomly initialize the model parameters and split the dataset into train/val/test.
2: for teacher training epochs do
3:   for batches do
4:     Label encoding: $H_l = h(y_G \mid G \in \mathcal{G})$;
5:     Neighbor aggregation: $H^{(T)} = f_T(A, X; \theta_{f_T})$;
6:     $H^{(T)} = \mathrm{Attention}(H^{(T)}, H_l)$ as in Eq. 5;
7:     $\hat{Y} = g(\mathrm{POOL}(\{H_v^{(T)} \mid v \in V\}); \phi_g)$;
8:     Cross-entropy loss computation as in Eq. 6;
9:     Save the optimized model parameters.
10:   end for
11: end for
12: for student training epochs do
13:   for batches do
14:     Neighbor aggregation: $H^{(S)} = f_S(A, X; \theta_{f_S})$;
15:     Load the teacher model for the ideal embedding $H^{(T)}$;
16:     $\mathcal{L}_{dis} = \frac{1}{N}\sum_{i=1}^{N} \|H_i^{(T)} - H_i^{(S)}\|_2^2$;
17:     $\hat{Y} = g(\mathrm{POOL}(\{H_v^{(S)} \mid v \in V\}); \phi_g)$;
18:     $\mathcal{L} = \mathcal{L}_{cls} + \lambda \cdot \mathcal{L}_{dis}$.
19:   end for
20: end for
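Under the same assumptions as the sketches above, one student update (Algorithm 1, lines 14-18) might look as follows. It reuses the GraphClassifier sketch, assumes teacher_embed is the ideal embedding $H^{(T)}$ precomputed with the frozen teacher under torch.no_grad(), and uses multi-class cross-entropy as a generalization of the binary form in Eq. 6.

```python
import torch.nn.functional as F


def student_step(student, batch, teacher_embed, optimizer, lam: float = 1.0):
    """One student update: minimize L = L_cls + lam * L_dis (Eq. 8)."""
    optimizer.zero_grad()
    logits, h_student = student(batch.x, batch.edge_index, batch.batch)
    loss_cls = F.cross_entropy(logits, batch.y)       # classification loss (Eq. 6)
    loss_dis = F.mse_loss(h_student, teacher_embed)   # MSE to the ideal embedding (Eq. 7)
    loss = loss_cls + lam * loss_dis                  # comprehensive objective (Eq. 8)
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```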
Complexity Analysis
The proposed LAD-GNN algorithm is summarized in Algorithm 1. LAD-GNN consists of a two-phase training process for the teacher and student models. The teacher jointly trains the GNN backbone with the label-attentive encoder under a classification loss $\mathcal{L}_{cls}$. The student shares a similar model architecture with the teacher, except for the label-attentive encoder. LAD-GNN conducts a distillation-like training for the student with an objective function combining the distillation loss $\mathcal{L}_{dis}$ and the classification loss $\mathcal{L}_{cls}$. Note that compared with conventional GNN learning methods, the additional time complexity of LAD-GNN mainly lies in the distillation loss computation in the student training phase, and the time complexity of model inference is the same as that of the GNN backbones. As an example, the time complexity of a SOTA GNN backbone named MEWISPool (Nouranizadeh et al. 2021) is $O(|V|(kd + |E|)) + O(|V|^3)$, where $k$ is the maximum degree of the graph and $d$ is the dimension of the node features. By applying the proposed LAD-GNN training approach to the MEWISPool backbone, training the teacher model consumes $O(|V|(kd+|E|)) + O(|V|^3) + O(Ld)$, where $L$ is the number of layers of the label-attentive encoder; training the student consumes $O(|V|(kd + |E|)) + O(|V|^3) + O(Ld) + O(|V|^2)$. In summary, the increase in computational complexity lies in the term $O(Ld)$, which is negligible since $L \ll |V|$ and $d \ll |V|$ hold in practice.

Experiments
In this section, we assess the performance of LAD-GNN in comparison with 7 GNN backbones on 10 open graph datasets, and then evaluate LAD-GNN on graph classification tasks against 9 other GNN training strategies. We further discuss the sensitivity of the hyperparameters and visualize the graph representations of different approaches. We implement LAD-GNN in PyTorch v1.12, and the experiments are conducted on a GPU-equipped PC with an NVIDIA GeForce RTX 3090Ti.
We assess graph classification performance on 10 open datasets, which include the chemical molecule datasets MUTAG, PTC (Debnath et al. 1991), and NCI1 (Wale, Watson, and Karypis 2008), the bioinformatics graph datasets PROTEINS and ENZYMES (Borgwardt et al. 2005), and the social network datasets COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-5K (Yanardag and Vishwanathan 2015). These datasets contain non-attributed graphs, and we use the node degree as the initial node feature following the literature (Duong et al. 2019). For detailed statistics of these datasets, please refer to the supplementary materials.

Performance Comparison with GNN Backbones
We conduct experiments based on 7 GNN backbones, which include 4 commonly used message-passing GNNs (i.e., GCN (Kipf and Welling 2017), GAT (Veličković et al. 2018), GraphSAGE (Hamilton 2017), and GIN (Xu et al. 2019)) and 3 state-of-the-art graph classification models (i.e., DGCNN (Zhang et al. 2018), SAGPool (Lee, Lee, and Kang 2019), and MEWISPool (Nouranizadeh et al. 2021)). LAD-GNN is integrated with each GNN backbone to boost graph classification performance, and the results are reported in Table 1, where each row pair reports the classification accuracy of the original GNN backbone and the results after applying LAD-GNN, and each column reports the performance on one dataset. For GCN, GAT, and GraphSAGE, since the original models were proposed for the node classification task, we reproduce them based on the PyG (PyTorch Geometric) library (Fey and Lenssen 2019) and apply them to graph classification, where the number of node aggregation layers is set to 2 for all of them. For GIN, DGCNN, SAGPool, and MEWISPool, whose models were proposed for the graph classification task in the original papers, we run their open-source codes on the 10 datasets. To remove unwanted bias towards the training data, for all experiments on the datasets, we evaluate model performance with a ten-fold cross-validation setting.
For a fair comparison, all datasets are randomly split into train/validation/test sets following the 0.8/0.1/0.1 protocol in each model.

Models MUTAG COLLAB ENZYMES IMDB-B IMDB-M NCI1 PROTEINS PTC REDDIT-B REDDIT-M
GCN 0.747±0.08 0.810±0.01 0.765±0.11 0.573±0.04 0.402±0.05 0.490±0.03 0.643±0.05 0.662±0.06 0.813±0.06 0.425±0.02
+LAD-GNN 0.837±0.07 0.841±0.06 0.772±0.12 0.633±0.04 0.406±0.03 0.584±0.06 0.716±0.03 0.740±0.08 0.825±0.03 0.491±0.07
GAT 0.737±0.12 0.693±0.03 0.778±0.08 0.546±0.07 0.377±0.05 0.727±0.04 0.692±0.05 0.691±0.08 0.727±0.02 0.487±0.01
+LAD-GNN 0.805±0.05 0.712±0.02 0.813±0.08 0.576±0.04 0.414±0.04 0.731±0.04 0.703±0.04 0.706±0.09 0.745±0.04 0.488±0.02
GraphSAGE 0.816±0.09 0.791±0.05 0.742±0.09 0.589±0.06 0.420±0.05 0.699±0.04 0.706±0.04 0.709±0.07 0.831±0.05 0.466±0.02
+LAD-GNN 0.863±0.10 0.793±0.05 0.792±0.10 0.755±0.08 0.420±0.06 0.867±0.07 0.733±0.05 0.814±0.10 0.925±0.02 0.517±0.04
GIN 0.842±0.06 0.659±0.03 0.733±0.18 0.701±0.03 0.434±0.03 0.729±0.04 0.721±0.02 0.620±0.06 0.717±0.02 0.410±0.02
+LAD-GNN 0.854±0.05 0.678±0.04 0.765±0.16 0.714±0.03 0.441±0.02 0.747±0.03 0.730±0.03 0.657±0.05 0.725±0.02 0.450±0.03
DGCNN 0.913±0.01 0.749±0.01 0.501±0.02 0.728±0.02 0.464±0.01 0.701±0.01 0.713±0.02 0.617±0.03 0.743±0.01 0.457±0.03
+LAD-GNN 0.944±0.02 0.751±0.02 0.516±0.01 0.748±0.02 0.482±0.01 0.705±0.02 0.721±0.03 0.649±0.02 0.774±0.02 0.529±0.04
SAGPool 0.763±0.08 0.725±0.02 0.426±0.06 0.589±0.05 0.419±0.06 0.694±0.03 0.588±0.05 0.614±0.06 0.837±0.02 0.484±0.02
+LAD-GNN 0.784±0.08 0.733±0.02 0.428±0.07 0.593±0.04 0.434±0.04 0.722±0.02 0.601±0.03 0.649±0.07 0.858±0.03 0.508±0.03
MEWISPool 0.926±0.03 0.745±0.01 0.383±0.01 0.772±0.01 0.474±0.04 0.711±0.05 0.699±0.03 0.714±0.02
+LAD-GNN 0.947±0.01 0.771±0.02 0.433±0.02 0.779±0.03 0.511±0.03 0.723±0.02 0.746±0.01 0.746±0.03
Table 1: The results on 10 graph classification datasets compared with 7 SOTA GNN backbones. In each row pair, the upper row reports the original GNN model's performance, and the lower row (+LAD-GNN) reports the results with the proposed LAD-GNN.

Methods PROTEINS IMDB-BINARY COLLAB MUTAG NCI109 NCI1 PTC
DropEdge 0.707±0.002 0.733±0.012 0.812±0.003 0.779±0.005 0.762±0.007 0.780±0.002
M-Mixup 0.706±0.003 0.736±0.004 0.811±0.005 0.798±0.015 0.788±0.005 0.803±0.003
G-Mixup 0.715±0.006 0.748±0.004 0.811±0.009 0.805±0.002 0.654±0.043 0.686±0.037
JOAOv2 0.700±0.003 0.707±0.008 0.688±0.003 0.775±0.016 0.675±0.003 0.670±0.006
AD-GCL 0.699±0.008 0.712±0.008 0.670±0.008 0.837±0.010 0.634±0.003 0.641±0.004
AutoGCL 0.684±0.008 0.707±0.007 0.745±0.002 0.783±0.022 0.705±0.003 0.737±0.002
KD 0.763±0.035 0.808±0.026 0.812±0.017 0.878±0.121 0.756±0.053
GFKD 0.633±0.077 0.623±0.052 0.633±0.023 0.677±0.129 0.625±0.059
DFAD-GNN 0.690±0.061 0.675±0.049 0.689±0.011 0.765±0.073 0.669±0.037
IGSD 0.744±0.060 0.704±0.110 0.902±0.070 0.754±0.030 0.614±0.170
LAD-GNN 0.765±0.056 0.811±0.042 0.827±0.054 0.899±0.113 0.882±0.065 0.896±0.059 0.791±0.062
Table 2: Comparison of LAD-GNN with 9 other GNN training methods on the graph classification task. The first 3 rows are manual graph augmentation methods, the next 3 rows are graph auto-augmentation methods, and the following 4 rows are graph distillation methods.
We report the average and standard deviation of test accuracy across the ten folds of the cross-validation.
As shown in Table 1, the proposed LAD-GNN (the +LAD-GNN rows) achieves a clear performance improvement over the original models in graph classification accuracy: it outperforms the GNN backbones with a 3.3% average improvement in absolute accuracy. In particular, LAD-GNN achieves 16.6% and 16.8% improvements on the IMDB-BINARY and NCI1 datasets, respectively, with GraphSAGE as the backbone. In summary, the experimental results show that LAD-GNN effectively improves graph-level performance compared with the state-of-the-art GNN models across a range of datasets.
In addition, we plot the ROC curves of LAD-GNN and the other backbones (GCN, GAT, GIN, and GraphSAGE) on the MUTAG and PTC datasets, and calculate the AUC value (area under the ROC curve) of each model for performance comparison. As shown in Fig. 3, the ROC curves of our model (the red line) are generally above those of all backbones. Furthermore, the AUC values of LAD-GNN are also greater than those of the comparison models, which proves the superiority of our proposed method.
Figure 3: The ROC curves on the MUTAG and PTC datasets of GCN, GAT, GIN, SAGE, and the proposed LAD-GNN. (a) MUTAG: AUC = 0.75 (GCN), 0.77 (GAT), 0.79 (GIN), 0.76 (SAGE), 0.86 (LAD-GNN); (b) PTC: AUC = 0.71 (GCN), 0.70 (GAT), 0.74 (GIN), 0.71 (SAGE), 0.80 (LAD-GNN).
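For reference, the ROC/AUC computation behind Fig. 3 can be reproduced with scikit-learn as sketched below; pooling the positive-class probabilities over the ten test folds is our assumption, since the paper does not state its exact aggregation.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve


def roc_auc_from_scores(y_true, y_score):
    """ROC curve and AUC from positive-class probabilities (binary datasets)."""
    fpr, tpr, _ = roc_curve(np.asarray(y_true), np.asarray(y_score))
    return fpr, tpr, auc(fpr, tpr)
```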
Figure 4: The t-SNE visualization of graph representations for LAD-GNN and the original GNN backbones. Panels: (a) GCN (NCI1); (b) LAD-GNN (NCI1); (c) GAT (PROTEINS); (d) LAD-GNN (PROTEINS); (e) GIN (PTC); (f) LAD-GNN (PTC); (g) SAGE (COLLAB); (h) LAD-GNN (COLLAB).

Performance Comparison with Other GNN Training Strategies
In order to assess the effectiveness of the proposed training strategy, we compare LAD-GNN with 9 other GNN training methods on the graph classification task. The compared methods include the manual graph augmentation methods (i.e., DropEdge (Rong et al. 2020b), M-Mixup (Wang et al. 2021), and G-Mixup (Han et al. 2022)), the graph auto-augmentation methods (i.e., JOAOv2 (You et al. 2021), AD-GCL (Suresh et al. 2021), and AutoGCL (Yin et al. 2022)), and the graph distillation methods (i.e., KD, GFKD (Deng and Zhang 2021), DFAD-GNN (Zhuang et al. 2022), and IGSD (Zhang et al. 2020a)). The results are reported in Table 2, where the results of the graph augmentation methods (the upper 6 rows) follow the literature of AutoGCL (Yin et al. 2022), the three distillation methods (i.e., KD, GFKD, and DFAD-GNN) follow DFAD-GNN (Zhuang et al. 2022), and IGSD follows the literature (Zhang et al. 2020a). It is shown that our LAD-GNN outperforms the manual/auto graph augmentation and distillation methods, achieving the best performance on all datasets except MUTAG, where it is outperformed by the self-distillation method IGSD (though the difference is small). In summary, LAD-GNN achieves a 5.6% accuracy improvement on the NCI1 dataset compared with the second-place training method, and outperforms the other methods everywhere except for being slightly worse on MUTAG.

Hyperparameter Analysis
We further discuss the sensitivity of the hyperparameters λ in Eq. 8 and τ in Eq. 5. We tune the value of λ from 0.001 to 1000 and τ from 0.1 to 1.0, and test the graph classification performance on 4 datasets (i.e., PTC, NCI109, PROTEINS, and IMDB-BINARY). The results are presented in Fig. 5. They indicate that different values of λ may lead to different optimal convergence values, and the best λ for a given dataset can be determined using the validation set. τ is less sensitive, with a low impact on graph classification accuracy; to ensure uniformity of hyperparameters across different datasets, we carefully considered and experimented with various values and ultimately selected τ = 0.1 as the standard value for our experiments.
Figure 5: Hyperparameter sensitivity of (a) λ and (b) τ on the 4 datasets.

Visualization
In order to intuitively understand the quality of the global graph representations learned by different methods, we visualize the graph embeddings of different GNN backbones, with the results shown in Fig. 4. The graph embedding is extracted by the readout function and projected into a two-dimensional space with t-SNE for visualization. Compared with the original backbones, LAD-GNN gathers graphs of the same class more closely and provides more obvious boundaries between graphs of different classes. We also visualize the label attention score heatmaps for two random graphs of the ENZYMES dataset in Fig. 6. In Fig. 6 (a), the attention scores of the nodes focus more on label 1 (the ground truth). A similar result can be found in Fig. 6 (b), which shows that the node embeddings of the proposed method align with the ground truth.
Figure 6: Visualization of attention scores on ENZYMES. (a) graph #117 (yG = 1); (b) graph #594 (yG = 4).
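The 2-D projection behind Fig. 4 can be sketched with scikit-learn's t-SNE as below; the helper name, default perplexity, and styling are incidental choices of ours.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE


def plot_graph_embeddings(z, labels, title=""):
    """Project pooled graph representations Z_G to 2-D and color by class."""
    xy = TSNE(n_components=2).fit_transform(z)  # z: (num_graphs, d') array
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=8)
    plt.title(title)
    plt.show()
```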
Conclusion
In this paper, we focused on the performance issue of Graph Neural Networks (GNNs) in graph-level classification tasks. We found that conventional GNNs' node-level information aggregation approach forms misaligned embeddings that can jeopardize graph-level tasks. To address this issue, we proposed a novel label-attentive distillation method called LAD-GNN for graph classification. LAD-GNN introduces a teacher model with a label-attentive encoder architecture to encode the ground-truth labels into the latent space, fusing them with the node embeddings to form an ideal embedding, and encourages the student GNN to learn class-friendly node embeddings that facilitate graph-level tasks with a self-distilled intermediate supervision method. Extensive experimental results justified the superiority and effectiveness of the proposed method.

Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61972196, 61832008, 61832005), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the Sino-German Institutes of Social Computing.

References
Bengio, S.; Weston, J.; and Grangier, D. 2010. Label embedding trees for large multi-class tasks. Advances in Neural Information Processing Systems, 23.
Bianchi, F. M.; Grattarola, D.; and Alippi, C. 2020. Spectral clustering with graph neural networks for graph pooling. In International Conference on Machine Learning, 874–883. PMLR.
Borgwardt, K. M.; Ong, C. S.; Schönauer, S.; Vishwanathan, S.; Smola, A. J.; and Kriegel, H.-P. 2005. Protein function prediction via graph kernels. Bioinformatics, 21(suppl 1): i47–i56.
Chen, D.; Liu, X.; Lin, Y.; Li, P.; Zhou, J.; Su, Q.; and Sun, X. 2019. Highwaygraph: Modelling long-distance node relations for improving general graph neural network. arXiv preprint arXiv:1911.03904.
Chen, F.; Wang, J.; Wei, Y.; Zheng, H.-T.; and Shao, J. 2022. Breaking Isolation: Multimodal Graph Fusion for Multimedia Recommendation by Edge-wise Modulation. In Proceedings of the 30th ACM International Conference on Multimedia, 385–394.
Chen, J.; Xiong, H.; Zheng, H.; Zhang, D.; Zhang, J.; Jia, M.; and Liu, Y. 2023. EGC2: Enhanced graph classification with easy graph compression. Information Sciences, 629: 376–397.
Debnath, A. K.; Lopez de Compadre, R. L.; Debnath, G.; Shusterman, A. J.; and Hansch, C. 1991. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2): 786–797.
Deng, X.; and Zhang, Z. 2021. Graph-free knowledge distillation for graph neural networks. arXiv preprint arXiv:2105.07519.
Duong, C. T.; Hoang, T. D.; Dang, H. T. H.; Nguyen, Q. V. H.; and Aberer, K. 2019. On node features for graph neural networks. arXiv preprint arXiv:1911.08795.
Duvenaud, D. K.; Maclaurin, D.; Iparraguirre, J.; Bombarell, R.; Hirzel, T.; Aspuru-Guzik, A.; and Adams, R. P. 2015. Convolutional networks on graphs for learning molecular fingerprints. Advances in Neural Information Processing Systems, 28.
Fey, M.; and Lenssen, J. E. 2019. Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
Gao, X.; Dai, W.; Li, C.; Xiong, H.; and Frossard, P. 2021. iPool–Information-Based Pooling in Hierarchical Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems.
Guo, Z.; Zhang, C.; Fan, Y.; Tian, Y.; Zhang, C.; and Chawla, N. V. 2023. Boosting graph neural networks via adaptive knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 7793–7801.
Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30.
Han, X.; Jiang, Z.; Liu, N.; and Hu, X.
2022. G-Mixup: Graph Data Augmentation for Graph Classification. arXiv preprint arXiv:2202.07179.
Hinton, G.; Vinyals, O.; Dean, J.; et al. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7).
Hong, X.; Zhang, T.; Cui, Z.; Huang, Y.; Shen, P.; Li, S.; and Yang, J. 2021. Graph game embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 7711–7720.
Jing, Y.; Yang, Y.; Wang, X.; Song, M.; and Tao, D. 2021. Amalgamating knowledge from heterogeneous graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15709–15718.
Kipf, T. N.; and Welling, M. 2016. Variational graph autoencoders. arXiv preprint arXiv:1611.07308.
Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR).
Lee, J.; Lee, I.; and Kang, J. 2019. Self-attention graph pooling. In International Conference on Machine Learning, 3734–3743. PMLR.
Li, M.; Chen, S.; Zhang, Y.; and Tsang, I. 2020. Graph cross networks with vertex infomax pooling. Advances in Neural Information Processing Systems, 33: 14093–14105.
Li, W.; Chen, J.; Gao, P.; and Huang, Z. 2022a. Label enhancement with label-specific feature learning. International Journal of Machine Learning and Cybernetics, 13(10): 2857–2867.
Li, Z.; Wu, Q.; Nie, F.; and Yan, J. 2022b. GraphDE: A Generative Framework for Debiased Learning and Out-of-Distribution Detection on Graphs. In Advances in Neural Information Processing Systems.
Liu, C.; Zhan, Y.; Wu, J.; Li, C.; Du, B.; Hu, W.; Liu, T.; and Tao, D. 2022. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321.
Müller, L.; Galkin, M.; Morris, C.; and Rampášek, L. 2023. Attending to graph transformers. arXiv preprint arXiv:2302.04181.
Nouranizadeh, A.; Matinkia, M.; and Rahmati, M. 2021. Topology-Aware Graph Signal Sampling for Pooling in Graph Neural Networks. In 2021 26th International Computer Conference, Computer Society of Iran (CSICC), 1–7. IEEE.
Nouranizadeh, A.; Matinkia, M.; Rahmati, M.; and Safabakhsh, R. 2021. Maximum Entropy Weighted Independent Set Pooling for Graph Neural Networks. arXiv preprint arXiv:2107.01410.
Park, T.; Efros, A. A.; Zhang, R.; and Zhu, J.-Y. 2020. Contrastive learning for unpaired image-to-image translation. In European Conference on Computer Vision, 319–345. Springer.
Peng, J.; Wang, H.; Yue, S.; and Zhang, Z. 2022. Context-aware co-supervision for accurate object detection. Pattern Recognition, 121: 108199.
Rong, Y.; Bian, Y.; Xu, T.; Xie, W.; Wei, Y.; Huang, W.; and Huang, J. 2020a. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33: 12559–12571.
Rong, Y.; Huang, W.; Xu, T.; and Huang, J. 2020b. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification. In International Conference on Learning Representations.
Shi, Y.; Huang, Z.; Feng, S.; Zhong, H.; Wang, W.; and Sun, Y. 2020. Masked label prediction: Unified message passing model for semi-supervised classification. arXiv preprint arXiv:2009.03509.
Sun, X.; Wei, B.; Ren, X.; and Ma, S. 2017. Label embedding network: Learning label representation for soft training of deep networks. arXiv preprint arXiv:1710.10393.
Suresh, S.; Li, P.; Hao, C.; and Neville, J. 2021.
Adversarial graph augmentation to improve graph contrastive learning. Advances in Neural Information Processing Systems, 34: 15920–15933.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph Attention Networks. International Conference on Learning Representations.
Wale, N.; Watson, I. A.; and Karypis, G. 2008. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3): 347–375.
Wang, Y. 2021. Bag of tricks of semi-supervised classification with graph neural networks. arXiv preprint arXiv:2103.13355.
Wang, Y.; Wang, W.; Liang, Y.; Cai, Y.; and Hooi, B. 2021. Mixup for node and graph classification. In Proceedings of the Web Conference, 3663–3674.
Wang, Z.; Liu, M.; Luo, Y.; Xu, Z.; Xie, Y.; Wang, L.; Cai, L.; Qi, Q.; Yuan, Z.; Yang, T.; et al. 2022. Advanced graph and sequence neural networks for molecular property prediction and drug discovery. Bioinformatics, 38(9): 2579–2586.
Wu, Q.; Zhao, W.; Li, Z.; Wipf, D.; and Yan, J. 2022. NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification. In Advances in Neural Information Processing Systems.
Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1): 4–24.
Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2019. How Powerful are Graph Neural Networks? In International Conference on Learning Representations.
Yanardag, P.; and Vishwanathan, S. 2015. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1365–1374.
Yang, C.; Liu, J.; and Shi, C. 2021. Extract the knowledge of graph neural networks and go beyond it: An effective knowledge distillation framework. In Proceedings of the Web Conference, 1227–1237.
Yang, H.; Yan, X.; Dai, X.; Chen, Y.; and Cheng, J. 2021. Self-enhanced GNN: Improving graph neural networks using model outputs. In 2021 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE.
Yin, Y.; Wang, Q.; Huang, S.; Xiong, H.; and Zhang, X. 2022. AutoGCL: Automated graph contrastive learning via learnable view generators. In Proceedings of the AAAI Conference on Artificial Intelligence, 8892–8900.
You, Y.; Chen, T.; Shen, Y.; and Wang, Z. 2021. Graph contrastive learning automated. In International Conference on Machine Learning, 12121–12132. PMLR.
Zhang, H.; Lin, S.; Liu, W.; Zhou, P.; Tang, J.; Liang, X.; and Xing, E. P. 2020a. Iterative graph self-distillation. arXiv preprint arXiv:2010.12609.
Zhang, M.; Cui, Z.; Neumann, M.; and Chen, Y. 2018. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence.
Zhang, S.; Zhang, X.; Bao, H.; and Wei, F. 2021. Attention temperature matters in abstractive summarization distillation. arXiv preprint arXiv:2106.03441.
Zhang, W.; Miao, X.; Shao, Y.; Jiang, J.; Chen, L.; Ruas, O.; and Cui, B. 2020b. Reliable data distillation on graph convolutional network. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, 1399–1414.
Zhuang, Y.; Lyu, L.; Shi, C.; Yang, C.; and Sun, L. 2022. Data-Free Adversarial Knowledge Distillation for Graph Neural Networks.
arXiv preprint arXiv:2205.03811.
DAG-Aware Variational Autoencoder for Social Propagation Graph Generation
Dongpeng Hou1, 2, Chao Gao2*, Xuelong Li2, Zhen Wang3, 1, 2†
1School of Mechanical Engineering, Northwestern Polytechnical University
2School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University
3School of Cybersecurity, Northwestern Polytechnical University

Abstract
Propagation models in social networks are critical, with extensive applications across various fields and downstream tasks. However, existing propagation models are often oversimplified, scenario-specific, and lack real-world user social attributes. These limitations, detached from real-world analysis, lead to inaccurate representations of the propagation process in social networks. To address these issues, we propose a User Features Attention-based DAG-Aware Variational Autoencoder (DAVA) for propagation graph generation. First, nearly 1 million pieces of user attribute data are collected. DAVA can then integrate the analysis of propagation graph topology and the corresponding user attributes as prior knowledge. By leveraging a lightweight attention-based framework and a sliding window mechanism based on BFS permutations weighted by user influence, DAVA significantly enhances the ability to generate realistic, large-scale propagation data, yielding graph scales ten times greater than those produced by existing SOTA methods. Every module of DAVA is flexible and extensible, allowing easy substitution to suit other generation tasks. Additionally, we provide a comprehensive evaluation of DAVA; one focus is the effectiveness of the generated data in improving the performance of downstream tasks. During the generation process, we discover the Credibility Erosion Effect by modifying the generation rules, revealing a social phenomenon in social network propagation.

Introduction
In recent years, the analysis of propagation models in social networks has attracted growing attention due to their considerable impact on various aspects of society (Leskovec et al. 2007; Vosoughi, Roy, and Aral 2018). Numerous applications within the realm of social networks are fundamentally based on the propagation process. Examples include influence assessment (Xia et al. 2021), locating the diffusion source (Wang et al. 2023), and user profiling (Jiang, Ren, and Ferrara 2023), all of which rely on different propagation models. Consequently, by employing these models in conjunction with downstream tasks, far-reaching implications can be observed across a wide range of fields, such as politics, public health, and marketing (Goel et al. 2016).
*Corresponding author: [email protected]
†Corresponding author: [email protected]
Figure 1: The illustration of a novel propagation graph generation grounded in real-world propagation data. Recognizing that propagation is user-driven, we analyze the propagation data in social media, emphasizing unique user attributes and propagation structures (e.g., CCDFs of depth and diameter for rumor vs. authentic cascades). These analyses guide our graph generation to closely mirror real-world propagation.
Among the widely used traditional propagation models in social networks are information cascade models, which provide a basis for understanding and analyzing the spread of information and behaviors in various scenarios (Shakarian et al. 2015). However, these propagation models are often tailored to specific scenarios and depend on simplistic assumptions. Further, some deep learning submodules, easily integrated into downstream tasks, adeptly capture complex relationships and patterns in information propagation (Xia et al. 2021; Ling et al. 2022). However, propagation models based on representation learning in deep frameworks are often latent and lack direct interpretability, making it difficult to understand the underlying mechanisms. In summary, the most significant drawback of the aforementioned methods is their detachment from real-world data (Chen, Castillo, and Lakshmanan 2022; Guille et al. 2013), which leads to a weak capacity to characterize propagation dynamics in actual scenarios. Therefore, the motivation of this paper is to conduct research based on real propagation data from social media and to propose a framework that effectively reflects the real propagation process. However, there are two existing challenges. First, we recognize that propagation in social networks is user-driven (Li et al. 2022), but existing propagation data barely contains user information, so a propagation-based directed acyclic graph (DAG) from social media often has only a topological structure, without user attributes assigned to its nodes. Second, the cost of obtaining real propagation datasets is high, and due to
Incorporating these insights directly DAVA can improve the quality of the generated output and reduce the depth of hidden layer. And the code is available at https://github.com/cgaocomp/DAVA. • We introduce an innovative model DAVA for social propagation data generation. It features a graph permutation with user importance, exponential sampling in the graph autoencoder, an interpretable attention mechanism that focuses on user relationships, and a unique loss function. What’s more, the time-sliding window strategy generates graphs ten times larger than SOTA methods. • We identify a phenomenon called Credibility Erosion Effect in social network propagation. Importantly, we incorporate this discovery into DAVA’s generation process by applying a decay factor to the predecessors. Such a factor mirrors the credibility erosion effect, enhancing the realism and effectiveness of the generative graphs. • We expand the evaluation strategies for DAVA to rigorously verify the generation ability similar to real-world propagation characteristics, including traditional metrics in the generative field, comparing feature distributions with real data using CCDF, and assessing utility in downstream tasks like source localization. 1We adopt the traditional definition from the social psychology literature (Allport and Postman 1947), which defines a rumor as a story or statement whose truth value is unverified or deliberately false, for better understanding. Related Work In this paper, we employed deep graph generative techniques to generate and augment real-world propagation datasets in social networks. Therefore, the related work should be discussed in two parts: propagation models in social networks and deep graph generative models. Propagation Models The propagation models are helpful to comprehend how sources are formed, information is spread, and group behaviors are influenced in social networks. Some classical influence diffusion models are widely used to characterize the propagation process in social networks. For example, Kempe et al. propose the Independent Cascade (IC) and Linear Threshold (LT) models (Kempe, Kleinberg, and Tardos 2003). Following the successful application of the Susceptible-Infectious (SI) and Susceptible-InfectiousRecovered (SIR) models in epidemiology, these models have been adopted in social networks (Wang et al. 2022). However, these models often make simplified assumptions and are restricted to specific scenarios, limiting their realworld applicability. To completely learn the complex interaction parameters of the underlying propagation models, methods based on deep learning have received widespread attention in recent years. The IVGD model is a versatile architecture of reversible graph diffusion models, designed to autonomously learn the inherent rules of the propagation process (Wang, Jiang, and Zhao 2022). Despite deep methods having an adept ability to learn heterogeneous parameters of propagation models, their robustness and transferability are challenged without the support of real-world propagation analysis (Wu et al. 2020). Deep Graph Generative Models Deep graph generative techniques are initially successfully applied in fields like chemistry and pharmaceuticals (You et al. 2018a; Jin, Barzilay, and Jaakkola 2018). For instance, Li et al. use Graph Convolutional Networks (GCN) to capture both the structure and attributes during the generation process (Li et al. 2018). 
With these mature applications in various domains, such techniques have been extended to social networks. GraphRNN, for instance, address the limitations of previous methods that could only learn from a single graph or generate small-scale graphs (You et al. 2018b). However, GraphRNN primarily focuses on graph permutation embedding and the scale of generation, rather than adequately learning the attribute information within the graph. Zhang et al. further consider the attribute-based neural architecture graphs and introduced the DAG based variational autoencoder D-VAE (Zhang et al. 2019). This approach allowed for more comprehensive learning and generation of attribute information within graphs. And the goal of DVAE bears resemblance to our task of generating social network propagation DAG. However, D-VAE falls short in adequately distinguishing the weight relationships between nodes by employing a unified aggregation based on message passing. Additionally, current works (Han et al. 2023; Zahirnia et al. 2022) entail relatively high computational complexity so it is difficult to generate a large-scale graph. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8509 Methodology Problem Definition Considering a DAG G = (V, E, F), which represents a propagation process related to a unique event topic extracted via web crawling from social platforms like Twitter or Weibo. Here, V = {v1, ..., vn} represents the set of users, E = {(vi, vj) | i ̸= j, (vi, vj) ∈V × V } symbolizes the propagation paths, where each edge from vi to vj signifies that a piece of information is disseminated from user vi to user vj, F = {f(v) | v ∈V } denotes the feature vectors associated with each user, such as user profile information or posting behaviors. The goal is to construct a generative model M that can accurately mimic the statistical properties of the extracted propagation network G. Specifically, the model should be capable of generating a new network G′ = (V ′, E′, F ′) where V ′, E′, and F ′ are the sets of users, propagation paths, and user features in the generated network, respectively, The generated network G′ should exhibit similar statistical properties on some rigorous metrics to the extracted network G. User Features Attention Based DAG-Aware Variational Autoencoder Fast Preprocessing for Propagation DAG Different types (rumor and non-rumor) of propagation DAGs exhibit significant disparities in both features and structures. More specifically, leveraging social data analysis techniques, we used the Complementary Cumulative Distribution Function (CCDF) for evaluation, examining Weibo and Twitter’s propagation data across aspects like network diameter, propagation depth and breadth, and structural virality. Our findings demonstrate that rumors spread more virally, whereas non-rumors disperse in a broadcast-like fashion. An analysis of user attributes also showed that source users broadcasting non-rumors are typically of higher quality, while those spreading rumors are more active2. And these insights guide our proposed generative model. One focus is an abundance of users in the propagation DAGs that are directly linked to the propagation source and have an out-degree of zero. The impact of these users on the propagation structure is relatively minor given that their depth is fixed at 2, thereby contributing weakly to the structural virality. We utilize a simple yet effective model fθ(G) to learn the number of such users, denoted as ˆV . 
By eliminating these connections prior to the graph generation training process, we can decrease redundant information, streamline the dataset, and improve training efficiency.2

f_θ(G(V − V̂, E − Ê, F)) → |V̂|,   (1)

where ∀v ∈ V̂, out-neighbor(v) = 0 and (s, v) ∈ E, and s is the source user in a DAG. The generative model then need not focus on these uninformative users and their related connections, and the costs arising from these details can be directly omitted. Specifically, we leverage the graph pooling technique Set2Set (Vinyals, Bengio, and Kudlur 2015) to directly encode the entire DAG, including its topology and node features, thereby implementing Eq. (1):

f_θ(G) ≜ f_θ(G(V − V̂, E − Ê, F)) = Nonlinear(Set2Set(G(V − V̂, E − Ê, F))).   (2)

Note that the choice of the function f_θ is flexible; other models with aggregation capabilities, such as GCN (Welling and Kipf 2016) and GAT (Velickovic et al. 2018), can also be applied as f_θ. It is worth mentioning that when finally generating a new graph G′, we only need to apply f_θ(G′) and add these users and connections back.

2 Our comprehensive analysis reveals some phenomena divergent from previous research. For instance, rumor spread is slower but persists longer; 90% of rumors involve fewer participants than non-rumor events. However, the remaining rumors, though rare, are quite sensational. These rumors attract a significant number of participants, leading to a higher average than median participant count from an overall perspective. Additionally, we identify that reputable active users, termed 'onlookers', inadvertently or unwittingly spread rumors due to their extensive online interactions and the allure of sensational fake news. Conversely, celebrities exhibit caution, mindful of releasing unverified information. More detailed information can be found in an analysis (Hou et al. 2024).

One of the biggest challenges in graph generation is non-unique representation. A graph with n nodes can correspond to up to n! equivalent adjacency matrices due to arbitrary node orderings, making it computationally expensive to model and optimize objective functions of graph generation. Therefore, based on the user features F, we propose a unique graph representation method based on node importance using Breadth-First Search (BFS) to ensure a unique node index sequence Φ for the matrix representation of identical graph structures. Due to space constraints, we summarize the implementation in Alg. 1; for more details, see the publicly available source code.

Algorithm 1: BFS Permutation with Node Importance
Require: A propagation DAG G(V, E, F)
Ensure: The BFS sequence bfsSequence (i.e., Φ)
1: F = MinMaxScaler(F)  // Normalization
2: Chi-square(F) → {f′1, f′2, ...}  // Sort the importance of different features
3: Provide a unique one-hot encoding representation I(v) for each user based on the sorted dimensions {f′1, f′2, ...}
4: Initialize visited ← ∅, curLevel ← [root], bfsSequence ← []
5: while curLevel ≠ ∅ do
6:   Sort curLevel based on feature importance I(v)
7:   nextLevel ← ∅
8:   for vi ∈ curLevel do
9:     if vi ∉ visited then
10:      visited.add(vi), bfsSequence.add(vi)
11:      nextLevel.add(Neighbor_list(vi))
12:    end if
13:  end for
14:  curLevel ← nextLevel, nextLevel ← ∅
15: end while
16: return bfsSequence
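For concreteness, Alg. 1 can be sketched in a few lines of Python; collapsing the one-hot importance encoding I(v) into a scalar score per user is our simplification, and all names below are illustrative rather than taken from the released code:

def bfs_permutation(root, neighbors, importance):
    """Alg. 1: level-by-level BFS over a propagation DAG, visiting each level
    in descending order of node importance, yielding a unique ordering Phi."""
    visited, order = set(), []
    cur_level = [root]
    while cur_level:
        # Step 6: sort the current level by feature-derived importance
        cur_level.sort(key=lambda v: importance[v], reverse=True)
        next_level = []
        for v in cur_level:
            if v not in visited:
                visited.add(v)
                order.append(v)
                next_level.extend(neighbors.get(v, []))
        cur_level = next_level
    return order

# Example: importance would come from chi-square-ranked, min-max-scaled features.
# order = bfs_permutation('root', {'root': ['a', 'b']}, {'root': 1.0, 'a': 0.3, 'b': 0.7})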
[Figure 2 diagram: a sequence of graphs G1, ..., Gn passing through E-VAE blocks, with a user-feature-based relationship attention module, topology and edge updates, and a joint relationship network.]
Figure 2: The illustration of the generation process in DAVA. The graph-level generation process (blue line) is primarily centered on the topological structure of DAGs. This process employs an exponential variational autoencoder to establish latent-space vector representations of the graph structures. Building upon this foundational topology, the edge-level generation process (green line) evaluates the similarity between a potential new user and each existing user. For such an evaluation, DAVA leverages a user attribute attention mechanism (purple line) to assess the likelihood of directed edge formation between the new user and each existing user. The newly generated edge structures are subsequently integrated into the next sequence of graph-level generation.

The Generation Process
So far, the unique input sequence Φ can be determined for the same graph. Next, we introduce the graph generation process of the proposed model. In the generation process, we specifically focus on the characteristics of the propagation DAG in social networks. Our framework allows for unified training across different scales and an arbitrary number of graphs, and it can generate graphs of a much larger scale (at least 10 times larger than the SOTA). Furthermore, without loss of generality, our model demonstrates strong extensibility to general graph generation tasks, which can be achieved by conveniently revising some modules. We will briefly explain this in the corresponding sections.
The key idea of our generation strategy lies in the directed iterative focus on the graph-level and edge-level generation processes, guided by G_Φ. The graph-level process primarily concentrates on the topology of the DAG. Then, utilizing the topological information, the edge level incorporates user attribute information via an attention mechanism to gauge the potential of forming an edge between a new user and existing users in the current graph context, thereby facilitating the generation of new directed edges. The newly formed edge structure is further integrated into the graph-level generation process. This allows for a sequential generation within the entire framework. Through this strategy, our approach maintains an orderly progression, ensuring consistent and coherent graph and edge generation. Referring to the scalable modeling process, our graph-level generating function and edge-level generating function are defined as follows:

h_i = f_G(h_{i−1}, Ω(φ_{i−1})),   (3)
φ_i = f_E(h_i, F),   (4)

where h_i represents a vector encoding the state of the graph topology generated so far, φ_{i−1} is the predicted adjacency vector associated with the most recently generated user v_{i−1}, and φ_{i−1}(j) (j < i−1) signifies the probability of an edge existing between the most recently generated user v_{i−1} and the historical user v_j. Ω(x) is a one-hot decoder function that sets the maximum value of the vector x to 1 and all other values to 0, indicating which historical user is most likely to form an edge with the newly introduced user under the current topological context. Next, we present the graph-level generative model f_G and edge-level generative model f_E, which pertain to the task of modeling the inherent DAG in the social network. The generative model f_G we propose represents an integration of a Gated Recurrent Unit (GRU) (Chung et al. 2014) and an Exponential Variational Autoencoder (E-VAE), where the E-VAE is predicated on sampling from an exponential distribution.
The GRU ensures temporal continuity during generation, while the VAE provides a potent latent-variable representation. Uniquely, we use the Exponential-VAE (E-VAE) for propagation DAGs. This choice reflects the primary objective of the graph-level generation model: to effectively learn and assimilate the topological information inherent in propagation graphs. We therefore expect to use some of the conclusions obtained from the propagation data analysis as a priori guidance for graph generation. Specifically, we found a long-tailed trend in the Cumulative Distribution Function (CDF) analysis of topological features. Further, we fitted these traits to various distributions using MLE and found that the exponential distribution consistently yielded the best fit. To further assess the extent to which the exponential distribution fits our data, we employ the skewness-based Kolmogorov-Smirnov (K-S) test. Some features, like structural virality, surpassed the 0.05 significance level in the K-S test against an exponential distribution (i.e., the exponential hypothesis was not rejected), leading DAVA to adopt exponential sampling in the graph-level generation.3

log q_ϕ(z | GRU(x^(k))) = log Exp(z; λ^(k)),   (5)

where q_ϕ(z | GRU(x^(k))) represents the approximate posterior distribution of the latent variable z parameterized by ϕ, conditioned on the output of the GRU applied to the input data x^(k). Eq. (5) implies that the distribution of the latent variable z is inferred from the information the GRU extracts from the input data, which encapsulates the temporal dependencies in the input and then guides the generation of the latent space within the VAE framework. In this case, samples are drawn from an exponential distribution. Our goal is to model the latent variable z given the k-th observed data point x^(k) using an exponential distribution; therefore, log Exp(z; λ^(k)) represents the natural logarithm of the probability density function of an exponential distribution with rate parameter λ^(k). Moreover, to ensure efficient backpropagation of gradients and avoid the potential issue of exploding exponentials within DAVA, we use a combination of specific transformations and the reparameterization trick.

3 The interpretability of E-VAE and the corresponding KL divergence are proven in the supplementary files.
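Concretely, Eq. (5) can be realized via inverse-CDF reparameterization. The sketch below is a minimal PyTorch illustration; the softplus transform that keeps the rate positive and the inverse-CDF sampler are our assumptions, since the paper only states that "specific transformations and the reparameterization trick" are used.

import torch
import torch.nn as nn

class ExponentialVAEHead(nn.Module):
    """Maps a GRU hidden state to a rate lambda and draws a reparameterized sample."""
    def __init__(self, hidden_dim, latent_dim):
        super().__init__()
        self.to_rate = nn.Linear(hidden_dim, latent_dim)

    def forward(self, h):
        # Softplus keeps the rate positive and avoids exploding exponentials (assumed transform).
        lam = nn.functional.softplus(self.to_rate(h)) + 1e-6
        # Inverse-CDF reparameterization: if u ~ Uniform(0, 1), then -log(u)/lam ~ Exp(lam),
        # and gradients flow through lam while u stays a fixed random draw.
        u = torch.rand_like(lam).clamp(min=1e-6)
        z = -torch.log(u) / lam
        return z, lam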
Further, the generative model f_E is informed by the current network topology representation h_i, placing emphasis on the connection density between users with unique attributes under given topological contexts. The idea of dynamically adjusting the weights between different edges is worth noting for sparse propagation structures. Thanks to the distinct structural features of DAGs with user attributes, we are able to easily implement a lightweight, interpretable attention mechanism. For the n-th historical user in the Φ sequence, we model it as a vector where the n-th position is 1 and all other positions are 0. This vector signifies that the n-th node will ultimately establish a connection with the new user, and it serves as the value in our attention mechanism. In a similar interpretive vein, we designate the new user's feature information as the query, while the historical users' feature information constitutes the key. To further refine our model, we aggregate the most recently updated global topological information, h_i, into the query and key. Lastly, based on the query and key, we calculate the weight corresponding to each value; multiplying the calculated weights by their corresponding values yields the probability of the existence of each edge, thereby quantifying the likelihood of the connection between the historical users and the current user. More specifically, the query and key are initially transformed through a linear transformation followed by an activation function:

Q_m = σ(W_q · query_m + b_q),   (6)
K_n = σ(W_k · key_n + b_k),  ∀n ∈ {1, 2, ..., i},   (7)

where query_m = cat(F_m, h_i), key_n = cat(F_n, h_i), and W_q, b_q, W_k, and b_k represent the weights and biases in the linear transformations of the query and key, respectively. Then, the scores are computed by taking the dot product between the transformed query and the transpose of the transformed key:

scores_mn = Softmax_n(Q_m^T · K_n),  ∀n ∈ {1, 2, ..., i}.   (8)

Finally, the likelihood of a connection between the historical user n and the current user m is obtained by multiplying the weighted score with the corresponding value:

likelihood_mn = scores_mn · value_n.   (9)
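A minimal PyTorch sketch of Eqs. (6)-(9) follows; the sigmoid choice for σ, the hidden dimension, and the unbatched shapes are assumptions on our part. Because each value is a one-hot indicator of a historical user, the softmax score vector itself already gives the per-user edge likelihoods.

import torch
import torch.nn as nn

class RelationAttention(nn.Module):
    """Scores how likely a new user m connects to each historical user n."""
    def __init__(self, feat_dim, topo_dim, hid_dim):
        super().__init__()
        self.Wq = nn.Linear(feat_dim + topo_dim, hid_dim)  # (W_q, b_q) in Eq. (6)
        self.Wk = nn.Linear(feat_dim + topo_dim, hid_dim)  # (W_k, b_k) in Eq. (7)

    def forward(self, f_new, F_hist, h_i):
        # query_m = cat(F_m, h_i); key_n = cat(F_n, h_i)
        q = torch.sigmoid(self.Wq(torch.cat([f_new, h_i], dim=-1)))            # (hid,)
        keys_in = torch.cat([F_hist, h_i.unsqueeze(0).expand(F_hist.size(0), -1)], dim=-1)
        k = torch.sigmoid(self.Wk(keys_in))                                    # (i, hid)
        scores = torch.softmax(k @ q, dim=0)                                   # Eq. (8)
        # Values are one-hot indicators of historical users, so multiplying the
        # score vector by the identity yields the per-user edge likelihoods (Eq. 9).
        return scores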
Loss Function Definition
So far, the definition of the generative model is complete. To ensure effective training, a valuable loss function needs to be established. The objective function is defined as follows:

loss = loss_bce + α · loss_KL,   (10)

where loss_bce = −[(1 − ΣA_G/|V|²) · y·log(ŷ) + (ΣA_G/|V|²) · (1−y)·log(1−ŷ)] represents the reconstruction error loss during the generation process. A custom-weighted version is used here to emphasize the importance of edges: since propagation DAGs are sparse, the rare positive (edge) term receives the larger weight 1 − ΣA_G/|V|², while the abundant non-edges receive the small weight ΣA_G/|V|². A_G denotes the adjacency matrix corresponding to graph G_Φ, ŷ is the predicted probability of the edge corresponding to φ, and y signifies the presence or absence of an edge in A_G. loss_KL = Norm(log(z/λ^(k)) + λ^(k)/z − 1) represents the normalized KL divergence between the latent representation and the standard exponential distribution, where z is the latent representation of the E-VAE corresponding to the graph-level generation.

Tricks for New Graph Generation
In the new graph generation process, we ensure robustness from three aspects: the selection of new candidates, the achievable scale of generation, and the mapping of social phenomena into our model.
First, in order to ensure rigor in the sampling process during the generation of G′, we construct a joint historical relationship network, or a union graph G. Specifically, for any k propagation DAGs G1, G2, ..., Gk from the same platform and the same period, we merge nodes with identical unique user identifiers (UIDs) across these trees to fuse the k propagation DAGs into a single union graph G. G serves as a foundational basis for sampling new users in the process of new graph generation.
Second, as the scale of the network increases, it becomes increasingly resource-intensive for the GRU to handle the growing input size corresponding to the expanded adjacency length of each node in G_Φ at every timestep. Fortunately, the proposed BFS permutation with node importance ensures stronger contextual information between a node v_i and its closer adjacent nodes in the ordering Φ: under Φ, nodes slightly ahead of or behind v_i are either more or less crucial nodes at the same DAG depth as v_i, or nodes from the adjacent depth layers of v_i. This obviates the need to consider the far-distant relationship between v_i and the first node v_0 when |V| is very large. Thus, the permutation strategy enables the effective utilization of a sliding window in two aspects. During the training phase, if the size of the training dataset |V(G_Φ)| exceeds the sliding window size d (which concurrently serves as the input size for the GRU), we can select the most recent d elements from each row within G_Φ. This effectively maps the data from a higher dimension R^(|V|×|V|) to a lower-dimensional space R^(|V|×d). During the new graph generation phase, for an expected graph with |V′| nodes, where |V′| exceeds the sliding window size d, generating nodes from d to |V′| in f_G requires focusing solely on the information of the most recent d nodes. Moreover, our feature-driven attention mechanism is unaffected by the length of the value, eliminating the concern of |V′| and |V| in f_E.
The CEE phenomenon is inspired by analogous patterns observed in diverse fields such as advertising (Darke and Ritchie 2007) and communication (Metzger, Flanagin, and Medders 2010). The Credibility Erosion Effect (CEE) here refers to a gradual decline in the credibility of the same person who repeatedly spreads and shares information. This effect has been widely observed in social networks (Turcotte et al. 2015), especially in the context of fake news and rumor diffusion, and regarding social media influencers. If a source consistently spreads questionable information, people may start doubting the credibility of this source or be influenced by other participants in this event. The effectiveness of incorporating this CEE phenomenon into the new graph generation process of DAVA has been successfully validated with real-world propagation data. To model this phenomenon, we introduce a decay mechanism, denoted as Ψ, to optimize the edge generation process Ω(Ψ(φ_i)). After predicting the edge probabilities between a new user v_j and each historical user through f_E, these probabilities are adjusted: if a historical user v_i has k succeeding users, then the probability of an edge from v_i to v_j is reduced by a cumulative decay factor of β^k, where β is a decay factor very close to but less than 1. We have conducted extensive experiments to validate the effectiveness of these techniques.
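To make the decay concrete, here is a minimal sketch of the Ψ adjustment; the function name, NumPy representation, and β default are illustrative, not taken from the released code:

import numpy as np

def apply_cee_decay(edge_probs, num_successors, beta=0.99):
    """Psi: damp the edge probability of each historical user v_i by beta**k,
    where k is the number of users already succeeding v_i (Credibility Erosion Effect)."""
    edge_probs = np.asarray(edge_probs, dtype=float)
    k = np.asarray(num_successors, dtype=float)
    return edge_probs * (beta ** k)

# Example: a prolific spreader (5 successors) is penalized relative to a fresh user.
probs = apply_cee_decay([0.8, 0.8], num_successors=[5, 0], beta=0.9)
# probs -> [0.8 * 0.9**5 ~= 0.472, 0.8]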
Experiments
Datasets and Baselines
We used three datasets collected from two real-world social media platforms, Weibo and Twitter, for graph generation, namely Weibo (Ma, Gao, and Wong 2017), Twitter15, and Twitter16 (Liu et al. 2015; Ma et al. 2016). Based on the user IDs in 6,059 public propagation cascades from Twitter and Weibo, we collected the profiles of nearly 1 million corresponding distinct users, including verification status, number of tweets, registration date, number of fans, number of followings, and ratio of fans to followings. Through this extensive data collection, we formed joint historical relationship networks with user profiles of 480,405, 289,504, and 2,856,519 users, respectively. These networks serve as a priori knowledge for the graph generation task. The relevant information of the three datasets is shown in Tab. 1.

Statistic          Twitter15   Twitter16   Weibo
#users             480,987     289,675     2,856,741
#users in G        480,405     289,504     2,856,519
#relations in G    565,948     334,603     3,508,596
#tweets            1,490       818         4,664
#rumors            370         205         2,244
#non-rumors        746         412         2,082

Table 1: Statistics of the datasets. G is the largest component of the joint historical relationship network based on UIDs.

Comprehensive Evaluation of DAVA
Due to the absence of a unified standard for assessing the generation of propagation DAGs in social networks, we extend the evaluation metrics for DAVA along three facets. We compare against the SOTA methods DAGG (Han et al. 2023), GVAE_MM (Zahirnia et al. 2022), D-VAE (Zhang et al. 2019), GraphVAE (Simonovsky and Komodakis 2018), and GraphRNN (You et al. 2018b).

MMD-Based Metric Evaluation
First, the Maximum Mean Discrepancy (MMD) metric is widely used in the domain of graph generation, primarily assessing the similarity between two data distributions; a smaller value indicates a closer approximation. Following (Kawai, Mukuta, and Harada 2019), we employ the squared MMD between two sets of samples from distributions p and q based on the Reproducing Kernel Hilbert Space (RKHS), as shown in Eq. (11):

MMD²(p∥q) = E_{x,y∼p}[k(x, y)] + E_{x,y∼q}[k(x, y)] − 2·E_{x∼p, y∼q}[k(x, y)],   (11)

where k corresponds to the kernel function that operates on individual samples x and y drawn from the distributions.

Group       MMD² (T15 / T16 / Wb)    Time (h) for 100 / 1,000 / 10,000 nodes
DAGG        0.287 / 0.244 / 0.291    0.417 / - / -
GVAE_MM     0.316 / 0.263 / 0.242    0.25 / 0.667 / -
D-VAE       0.271 / 0.216 / 0.255    1.167 / - / -
GraphVAE    0.358 / 0.355 / 0.306    0.083 / 0.25 / -
GraphRNN    0.331 / 0.357 / 0.401    0.133 / 0.333 / -
DAVA        0.149 / 0.134 / 0.201    0.002 / 0.01 / 0.5

Table 2: The generation performance evaluation of different methods based on the MMD metric. The time signifies the approximate hours needed for the model to generate a single graph of 100, 1,000, and 10,000 nodes, respectively. The symbol "-" indicates that the model could not successfully generate a graph of the corresponding scale using the maximum available facility. The bold values represent the best results.

DAVA consistently outperforms on all tested datasets, reducing the MMD by an average of 40% compared to the optimal D-VAE baseline and increasing the generation scale by two orders of magnitude. The superiority of more accurate generation can be attributed to four key factors: (1) using statistical analysis from real-world data as prior knowledge, the E-VAE is more capable of representing the latent space of real-world topology structures; (2) the constructed joint relationship network provides historical prior knowledge of user relationships; (3) the attention mechanism dynamically focuses on edge connections based on the user feature representation; and (4) we identify the CEE phenomenon in social network propagation and effectively incorporate it into the generation process. The larger-scale generation in less time is due to two reasons: (1) the BFS permutation with node importance creates a strong correlation for each node's context, allowing the use of a lightweight sliding window for effective sequential generation without needing to attend to the global context in the graph-level autoregressive process; and (2) the interpretable attention mechanism allows for a lightweight module design, significantly reducing the attention parameters compared to other attention models. In summary, DAVA generates propagation DAGs that are closer to reality, larger in scale, and quicker to produce.
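For reference, Eq. (11) admits a simple biased sample estimate. The NumPy sketch below assumes an RBF kernel and that graph statistics have already been embedded as fixed-length vectors; both assumptions are ours, as the paper does not fix these details here.

import numpy as np

def squared_mmd(X, Y, gamma=1.0):
    """Biased estimate of MMD^2(p||q) from samples X ~ p, Y ~ q (Eq. 11), RBF kernel."""
    def rbf(A, B):
        # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return rbf(X, X).mean() + rbf(Y, Y).mean() - 2 * rbf(X, Y).mean()

# Usage on (n, d) arrays of embedded graph statistics, e.g. degree histograms:
# mmd2 = squared_mmd(generated_feats, real_feats)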
Assessing the Realism of Generated Data Based on CCDF
Second, we compare the characteristics of our generated data with those of real propagation data. Specifically, we examine topological characteristics such as breadth, depth, and structural virality, as well as the distribution of attribute values of source and participating users across different intervals, in both rumor and non-rumor propagation DAGs fθ(G). Due to limited space, we present the most focused metric, structural virality (Goel et al. 2016). Intuitively, visually assessing and comparing the difference in CCDF distributions between generated graphs and the original real-world propagation data allows for a straightforward and convenient way to observe the goodness-of-fit of the generated data to the topological features or user attributes of real-world data from a statistical-analysis perspective. As shown in Fig. 3, the CCDF curve of the DAVA-generated data closely matches that of the original data, demonstrating a high degree of similarity. By comparing these distributions, we visually assessed that DAVA has the capability to generate data most similar to the real propagation data, in an intuitive and straightforward manner.

[Figure 3 panels: CCDF vs. structural virality (range 2.0-4.0) for GVAE_MM, DAGG, GraphRNN, DAVA, D-VAE, GraphVAE, and the original data; left panel rumor, right panel non-rumor.]
Figure 3: Comparison of the CCDF distribution differences in structural virality between graphs generated by different methods and real propagation data from Twitter15.

Utility of Generated Data in Downstream Tasks
Third, numerous downstream tasks, such as influence maximization, fake news detection, information diffusion analysis, and source localization, rely on propagation models. We use source localization as an example to explore whether using generated data can enhance model predictive ability in real scenarios. Here, two localization models, GCNSI (Dong et al. 2019) and TGASI (Hou et al. 2023), are used. In the experiment, the original groups train localization models on 9/10 of the Twitter propagation data and test on the remaining 1/10. For comparison, the augmentation groups additionally generate 1,000 realistic propagation graphs by DAVA or the SOTAs for training, while the control groups simulate 1,000 snapshots based on the SI, SIR, IC, and LT models.

Strategy   Original   Augmented (DAVA/SOTA)   Control
GCNSI      0.532      0.613/0.582             0.512
TGASI      0.787      0.825/0.808             0.755

Table 3: Source detection accuracy of localization methods under different groups of training sets.

Tab. 3 shows that the use of simulation data from traditional propagation models leads to a decrease in the performance of downstream tasks in real-world propagation scenarios, suggesting that these models may have limited relevance for real-world tasks. Conversely, augmenting with realistically generated data improves outcomes, with the largest improvement coming from propagation data generated by DAVA, thereby emphasizing the importance of graph generation and DAVA's utility.

Ablation Study
We further investigate the impact of each module in DAVA on the performance of graph generation to demonstrate their necessity. The critical modules of DAVA include the E-VAE, user relationship attention, the loss function, and the CEE-based decay mechanism. So some variants of DAVA are developed to compare with DAVA.
DAVA Norm uses the normal distribution to represent the latent space of the graph-level generation in Eq. (5). DAVA Att uses the attention module in Transformer (Vaswani et al. 2017) to replace the proposed interpretable attention in Eqs. (6)-(9). DAVA GRU uses the autoregressive generative model to replace the proposed attention in Eqs. (6)-(9). DAVA EN replaces the unique loss function in Eq. (10) with the binary cross-entropy loss function. DAVA CEE−removes the decay mechanism. Due to the limited space, we only present the variants of DAVA in the Twitter16 dataset. As shown in Tab. 4, it would lead to a generation similarity decrease, a generation scale reduction, or a generation time increase, no matter removing or replacing critical modules. MMD2 Graph Scale Time (h) DAVA 0.134 x104 0.001/0.01/0.5 DAVA Norm 0.207 x104 0.001/0.01/0.5 DAVA Att 0.151 x103 0.005/0.083/DAVA GRU 0.322 x103 0.017/0.217/DAVA EN 0.236 x104 0.001/0.01/0.5 DAVA CEE− 0.181 x104 0.001/0.01/0.5 Table 4: The generation performance evaluation of the variant model from DAVA based on MMD metric in Twitter16. Conclusion In this paper, we generate large-scale and diverse social media propagation graphs by incorporating user attributes from Twitter and Weibo. Our analysis of nearly a million users’ social attributes, focusing on propagation characteristics and user features, revealed a prevalent exponential distribution and the presence of a credibility erosion effect in these media. Leveraging these prior knowledge, we develop a DAVA model to enhance the realism of generated data in a lowcost way. In the future, we are committed to collecting more propagation data and generating larger-scale graphs. Acknowledgements This work was supported by the National Key R&D Program (no. 2022YFE0112300); the National Natural Science Foundation for Distinguished Young Scholars (no. 62025602); the National Natural Science Foundation of China (nos. U22B2036, 62261136549, 11931015 and 61976181); the Fok Ying-Tong Education Foundation, China (Grant No. 171105); and the Tencent Foundation and XPLORER PRIZE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8514 References Allport, G. W.; and Postman, L. 1947. The psychology of rumor. Russell & Russell. Chen, W.; Castillo, C.; and Lakshmanan, L. V. 2022. Information and influence propagation in social networks. Springer Nature. Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, 1–9. Darke, P. R.; and Ritchie, R. J. 2007. The defensive consumer: Advertising deception, defensive processing, and distrust. Journal of Marketing research, 44(1): 114–127. Dong, M.; Zheng, B.; Quoc Viet Hung, N.; Su, H.; and Li, G. 2019. Multiple Rumor Source Detection with Graph Convolutional Networks. In Proceedings of the ACM International Conference on Information and Knowledge Management, 569–578. Goel, S.; Anderson, A.; Hofman, J.; and Watts, D. J. 2016. The structural virality of online diffusion. Management Science, 62(1): 180–196. Guille, A.; Hacid, H.; Favre, C.; and Zighed, D. A. 2013. Information diffusion in online social networks: A survey. ACM Sigmod Record, 42(2): 17–28. Han, X.; Chen, X.; Ruiz, F. J.; and Liu, L.-P. 2023. Fitting Autoregressive Graph Generative Models through Maximum Likelihood Estimation. Journal of Machine Learning Research, 24(97): 1–30. Hou, D.; Wang, Z.; Gao, C.; and Li, X. 2023. 
Sequential attention source identification based on feature representation. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 4794–4802. Hou, D.; Yin, S.; Gao, C.; Li, X.; and Wang, Z. 2024. Propagation Dynamics of Rumor vs. Non-rumor across Multiple Social Media Platforms Driven by User Characteristics. arXiv:2401.17840. Jiang, J.; Ren, X.; and Ferrara, E. 2023. Retweet-BERT: political leaning detection using language features and information diffusion on social networks. In Proceedings of the International AAAI Conference on Web and Social Media, volume 17, 459–469. Jin, W.; Barzilay, R.; and Jaakkola, T. 2018. Junction tree variational autoencoder for molecular graph generation. In International conference on machine learning, 2323–2332. PMLR. Kawai, W.; Mukuta, Y.; and Harada, T. 2019. Scalable generative models for graphs with graph attention mechanism. arXiv preprint arXiv:1906.01861. Kempe, D.; Kleinberg, J.; and Tardos, ´E. 2003. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, 137–146. Leskovec, J.; Krause, A.; Guestrin, C.; Faloutsos, C.; VanBriesen, J.; and Glance, N. 2007. Cost-effective outbreak detection in networks. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, 420–429. Li, Q.; Hu, B.; Xu, W.; and Xiao, Y. 2022. A group behavior prediction model based on sparse representation and complex message interactions. Information Sciences, 601: 224–241. Li, Y.; Vinyals, O.; Dyer, C.; Pascanu, R.; and Battaglia, P. 2018. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324. Ling, C.; Jiang, J.; Wang, J.; and Liang, Z. 2022. Source localization of graph diffusion via variational autoencoders for graph inverse problems. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining, 1010–1020. Liu, X.; Nourbakhsh, A.; Li, Q.; Fang, R.; and Shah, S. 2015. Real-time Rumor Debunking on Twitter. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, 1867–1870. Ma, J.; Gao, W.; Mitra, P.; Kwon, S.; Jansen, B. J.; Wong, K.-F.; and Meeyoung, C. 2016. Detecting Rumors from Microblogs with Recurrent Neural Networks. In The 25th International Joint Conference on Artificial Intelligence. AAAI. Ma, J.; Gao, W.; and Wong, K.-F. 2017. Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, 708–717. Metzger, M. J.; Flanagin, A. J.; and Medders, R. B. 2010. Social and heuristic approaches to credibility evaluation online. Journal of communication, 60(3): 413–439. Shakarian, P.; Bhatnagar, A.; Aleali, A.; Shaabani, E.; Guo, R.; Shakarian, P.; Bhatnagar, A.; Aleali, A.; Shaabani, E.; and Guo, R. 2015. The independent cascade and linear threshold models. Diffusion in Social Networks, 35–48. Simonovsky, M.; and Komodakis, N. 2018. Graphvae: Towards generation of small graphs using variational autoencoders. In Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part I 27, 412–422. Springer. Turcotte, J.; York, C.; Irving, J.; Scholl, R. M.; and Pingree, R. J. 2015. 
News recommendations from social media opinion leaders: Effects on media trust and information seeking. Journal of computer-mediated communication, 20(5): 520– 535. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2018. Graph attention networks. International Conference on Learning Representations, 1–12. Vinyals, O.; Bengio, S.; and Kudlur, M. 2015. Order matters: Sequence to sequence for sets. International Conference on Learning Representations, 1–11. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8515 Vosoughi, S.; Roy, D.; and Aral, S. 2018. The spread of true and false news online. science, 359(6380): 1146–1151. Wang, J.; Jiang, J.; and Zhao, L. 2022. An Invertible Graph Diffusion Neural Network for Source Localization. In Proceedings of the ACM Web Conference, 1058–1069. Wang, Z.; Hou, D.; Gao, C.; Huang, J.; and Xuan, Q. 2022. A rapid source localization method in the early stage of large-scale network propagation. In Proceedings of the ACM web conference 2022, 1372–1380. Wang, Z.; Hou, D.; Gao, C.; Li, X.; and Li, X. 2023. Lightweight source localization for large-scale social networks. In Proceedings of the ACM Web Conference 2023, 286–294. Welling, M.; and Kipf, T. N. 2016. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 1–14. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1): 4–24. Xia, W.; Li, Y.; Wu, J.; and Li, S. 2021. DeepIS: Susceptibility estimation on social networks. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 761–769. You, J.; Liu, B.; Ying, Z.; Pande, V.; and Leskovec, J. 2018a. Graph convolutional policy network for goaldirected molecular graph generation. Advances in neural information processing systems, 31. You, J.; Ying, R.; Ren, X.; Hamilton, W.; and Leskovec, J. 2018b. Graphrnn: Generating realistic graphs with deep auto-regressive models. In International conference on machine learning, 5708–5717. PMLR. Zahirnia, K.; Schulte, O.; Naddaf, P.; and Li, K. 2022. Micro and macro level graph modeling for graph variational autoencoders. In Advances in Neural Information Processing Systems, volume 35, 30347–30361. Zhang, M.; Jiang, S.; Cui, Z.; Garnett, R.; and Chen, Y. 2019. D-VAE: A Variational Autoencoder for Directed Acyclic Graphs. In Advances in Neural Information Processing Systems, 1586–1598. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8516
Social-Aware Group Display Configuration in VR Conference Bay-Yuan Hsu,1 Chih-Ya Shen,2 Hao Shan Yuan,3 Wang-Chien Lee,4 De-Nian Yang5,6 1 Department of Industrial Engineering and Engineering Management, National Tsing Hua University 2 Department of Computer Science, National Tsing Hua University 3 Institute of Information Systems and Applications, National Tsing Hua University 4 Department of Computer Science and Engineering, Pennsylvania State University, USA 5 Institute of Information Science, Academia Sinica 6 Research Center of Information Technology Innovation, Academia Sinica [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Virtual Reality (VR) has emerged due to advancements in hardware and computer graphics. During the pandemic, conferences and exhibitions leveraging VR have gained attention. However, large-scale VR conferences, face a significant problem not yet studied in the literature – displaying too many irrelevant users on the screen which may negatively impact the user experience. To address this issue, we formulate a new research problem, Social-Aware VR Conference Group Display Configuration (SVGD). Accordingly, we design the Social Utility-Aware VR Conference Group Formation (SVC) algorithm, which is a 2-approximation algorithm to SVGD. SVC iteratively selects either the P-Configuration or S-Configuration based on their effective ratios. This ensures that in each iteration, SVC identifies and chooses the solution with the highest current effectiveness. Experiments on real metaverse datasets show that the proposed SVC outperforms 11 baselines by 75% in terms of solution quality. Introduction Virtual Reality (VR) has experienced a surge in adoption as industries increasingly utilize it to promote products and enhance services (Mileva 2022; Ning et al. 2021). During the COVID-19 pandemic, VR conferences and exhibitions gained popularity as in-person events are constrained by travel restrictions and social distancing policies. To support virtual gatherings, various metaverse platforms have been exploited. For instance, the ACM SIGKDD 2020 conference used vFair’s 3D platform for a virtual conference. Meta Horizon Workrooms offers advanced features like persistent whiteboards and mixed-reality pass-through functions, enabling users to collaborate in a virtual space (Meta 2023). Spatial allows users to customize their 3D virtual space for immersive galleries or exhibitions (Spatial 2023). Mozilla Hubs provides spatial audio and media-sharing functions for socializing with custom avatars in a virtual space (Hubs 2023). Engage replicates face-to-face interactions, accommodating up to 5,000 live virtual reality users for events, training, and education (ENGAGE 2023). However, these platforms lack the support for personalized display, e.g., Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. highlighting important nearby users in Head-Mounted Displays (HMDs), especially in crowded conference environments. Current VR social apps lack personalization, which displays all users uniformly in the virtual environment and neglects the potential benefits of customization. This curbs the social interactions in VR conferences and diminishes user satisfaction due to three drawbacks. D1) Obstructed View. In crowded virtual spaces, users often struggle to find friends or individuals of interest because nearby strangers obstruct their views. D2) Lack of Customization. 
The presence of undesirable strangers in proximity to a target user may lead to undesirable interactions. D3) Overload. In large-scale VR exhibitions/conferences, individuals may become overwhelmed by the abundance of participants for interactions or, conversely, become disinterested due to the lack of social connections. These three disadvantages, in combination, lead to VR-induced Social Isolation, where users may struggle to connect with individuals they are interested in or socially close to, resulting in reduced interactions and satisfaction.
An effective solution to this issue is Personalized Display, which allows users to selectively enable or disable the rendering of other users. In VR environments, different users are not required to view the same set of individuals. Prior studies demonstrate that tailoring user displays in VR enhances social experiences (Pluto 2018). However, existing VR customization research mainly revolves around analyzing factory safety measures in computer vision and data mining, aiming to recommend items to users (Lacko 2020; Ko et al. 2020). This research overlooks personalization factors among users. A personalized display enables the VR-based social metaverse with the following advantages. A1) Social Relationships. Enabling two users to see each other in the metaverse can enhance their satisfaction, fostering a feeling of shared presence (Bulu 2012). Thus, displaying a group of socially-close friends on an individual's VR screen could be advantageous (Pluto 2018). A2) Personal Preferences. Personalized displays, aligning with users' preferences, enhance satisfaction; for example, some users prefer seeing prominent scholars or conference organizers at AAAI, while others anticipate connecting with colleagues sharing their research interests.

User    ku   Personal preferences ranking      Social utilities ranking
Andy    1    Bella, Carl, Faye, Eddy, Dora     Eddy, Faye, Bella, Dora, Carl
Bella   2    Faye, Carl, Andy, Eddy, Dora      Eddy, Dora, Andy, Faye, Carl
Carl    1    Andy, Dora, Eddy, Bella, Faye     Faye, Bella, Eddy, Andy, Dora
Dora    2    Andy, Eddy, Bella, Faye, Carl     Faye, Bella, Eddy, Carl, Andy
Eddy    2    Faye, Carl, Dora, Bella, Andy     Andy, Bella, Carl, Faye, Dora
Faye    2    Dora, Andy, Carl, Eddy, Bella     Dora, Carl, Bella, Eddy, Andy

Table 1: Personal preferences ranking, social utilities ranking, and display limitation ku for each user.

A3) Limited Display Slots. To address issues D2 and D3, a limited number of users may be selected for VR displays based on personal preferences and social relationships. Users can specify their preferred number of display slots, enabling them to see those they are interested in or have close social relationships with. Allocating restricted display slots per user efficiently resolves issue D1 by enabling prioritization of interactions with preferred individuals, thus averting overcrowding. However, achieving a delicate balance between social connections and personal preferences while limiting the number of displayed individuals presents a challenge. Motivated by the advantages A1, A2, and A3, we propose a new approach, named Social-Aware VR Conference Group Display Configuration (SVGD), to address the issues D1, D2, and D3.
Given users’ social networks, personal preferences, and social utilities (the likelihood of two users having active and joyful interactions.), SVGD seeks to identify the ideal personalized display configuration for each user within confined slots, maximizing overall user satisfaction. Example 1. (Motivating Example). Fig. 1 presents an illustrative example. Given a social network G = (V, E) of six VR users, as shown in Fig. 1(a), where the hollow user images next to each user indicate the number of slots in her VR display, and a solid user image represents an occupied slot. Table ?? presents the personal preferences rankings, social utilities rankings, and display limitations of each user. The User column lists the users. The Configuration column outlines the configuration factors, and the Ranking column indicates the priority of other users to each individual. The ku column denotes the number of slots available in each user’s VR display. In Figs. 1(b), 1(c), and 1(d), we present three different approaches to configure each user’s VR display (called configuration hereafter). i) Preference-based configuration (e.g., conventional friend recommendation). The top-ku users of interest to each user are configured based on personal preferences. Fig. 1(b) represents the ”sees-in-VR” graph of this preferencebased configuration, where an arrow from user x to user y indicates that x sees y in her VR display. In an example, Andy has a display slot number ku of 1. Despite having a high social utility with Eddy, in the preference-based configuration, Andy sees Bella, who is ranked the highest by Andy’s personal preferences. Similarly, Bella has a ku of 2 and sees Faye and Carl, the top-ranked individuals according to her personal preferences. In this configuration, each user sees ku users during the VR conference, but they do not see each other. This configuration only considers A2 (personal preferences) and neglects A1 (social relationships), leading to limited interaction among users during the VR conference and worsening the issue of VR-induced Social Isolation. ii) Social-based configuration (e.g., conventional cohesive group extraction). Fig. 1(c) represents the ”sees-inVR” graph of this social-based configuration. A two-way arrow between two users indicates that they can see each other in the VR display, while a blue user picture indicates that both users have high social utilities. While this configuration promotes social interactions during the VR conference, it comes at the cost of individual preferences. For instance, Andy may desire to see Bella, but due to their significant social distance, Bella is not displayed, despite Andy’s interest. Conversely, Bella, with a display slot count of 2 (ku), can see Dora and Eddy, who are ranked higher according to her social utilities. This social-based configuration primarily considers aspect A1 (social relationships) but overlooks aspect A2 (personal preferences), potentially leading to decreased user satisfaction. This reduces users’ satisfaction and neglects the crucial purpose of VR conferences: to meet and get acquainted themselves with new people. iii) Preferences and socially balanced configuration. This configuration’s ”sees-in-VR” graph is presented in Fig. 1(d). For instance, Dora sees Eddy due to her high personal preferences for Eddy, and she also sees Faye since they have a high social utility ranking between them. 
Likewise, Bella sees Faye as she has a strong personal preference for her, while Bella and Andy see each other due to their relatively high social utility and personal preferences rankings. This configuration, which takes into account A1, A2, and A3 (limited display slots), enhances user satisfaction and mitigates VR-induced Social Isolation. The SVGD problem is distinct from conventional friend recommendation and personalized recommendation approaches (Cheng et al. 2019; Chen et al. 2017; Zhao et al. 2016; Bagci and Karagoz 2016; Lin et al. 2017), as well as cohesive group extraction in social networks (Lu et al. 2022; Al-Baghdadi and Lian 2020; Ma et al. 2022; Sanei-Mehri et al. 2021; Dong et al. 2021; Yang et al. 2021, 2012a). Conventional friend recommendation focuses on suggesting potential friends based on preferences, without considering the impact of other users’ configurations in a VR setting. Cohesive group extraction in social networks aims to identify socially-close friends but does not incorporate the inclusion of strangers based on user preferences or interests, which could hinder the formation of new friendships. More importantly, the SVGD problem allows the rendering of different users on individual VR displays, setting it apart from social/preferences group queries and group recommendations. In summary, conventional research does not consider sevThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8518 Empty ku slots in the input data User connections Andy Bella Carl Dora Eddy Faye (a) Input social network Viewing preferred individuals Fulfilling with the ku slots Andy Bella Carl Dora Eddy Faye (b) Personal preferences-only Andy Bella Carl Dora Eddy Faye Reciprocated viewing of individuals Both users have high social utility (c) Social relationships-only Socializing with preferred users Andy Bella Carl Dora Eddy Faye Both users have high social utility (d) Our idea Figure 1: An illustrative example of preference-based, social-based, and balanced configuration eral factors particularly important in VR conferences, including i) personal preferences, ii) the impact of other users’ configurations on user satisfaction in VR, iii) co-display among users (two users appear in each other’s VR displays), iv) handling over-crowding in VR events, and v) different users to be rendered on different users’ VR displays. For instance, the user recommendation approaches maximize individual users’ personal preferences and satisfaction, but it neglects ii), iii), iv) . Similarly, group formation and recommendation tend to find socially-close friends, but they do not consider the inclusion of strangers and overlook i), ii), iv), and v). Therefore, previous approaches cannot be applied directly to the SVGD problem studied in this paper. In this paper, we prove the NP-hardness of the SVGD problem and propose a 2-approximation algorithm, named Social Utility-Aware VR Conference Group Formation (SVC). This algorithm addresses the challenges posed by personal preferences and social relationships within limited VR display slots. We introduce the concept of SVD utility to quantify user satisfaction and propose the SVD kuconfiguration, where ku represents the maximum number of users displayable in a user’s VR setup. We evaluate the SVC algorithm on real datasets. The results demonstrate the effectiveness of our approach. The paper’s contributions are summarized as follows. 
• We present the new notion of SVD ku-configuration under the context of VR conferences and formulate a new research problem, Social-Aware VR Conference Group Display Configuration (SVGD). SVGD aims to identify an SVD ku-configuration that facilitates social interactions without sacrificing users’ individual preferences. • We analyze the NP-hardness of SVGD and propose a 2-approximation algorithm, named Social Utility-Aware VR Conference Group Formation (SVC). • We conducted extensive experiments on 5 real datasets. The results indicate that SVC significantly outperforms other baselines in terms of solution quality and efficiency. Related Works VR applications. A wide spectrum of VR applications have emerged recently, such as online VR shopping (Ko et al. 2020), friend-making (Raber, Schommer, and Kr¨uger 2019), social interactions in VR (McVeigh-Schultz, Kolesnichenko, and Isbister 2019), and social VR in edge computing (Wang et al. 2018a). We envisage that the proposed SVGD problem helps users obtain a better VR group conference experience by selecting a suitable set of users for their VR display with the maximum personal preferences and social utilities. To the best of our knowledge, similar functions are not currently available in VR conference products on the market (Meta 2023; Spatial 2023). Dense subgraph extraction. Extracting dense subgraphs in social networks has been actively studied for decades, e.g., (Shen et al. 2015; Lu et al. 2022; Al-Baghdadi and Lian 2020; Ma et al. 2022; Chen et al. 2018a; Shen et al. 2017; Hsu, Shen, and Yan 2019). However, they cannot be applied directly to our VR conference scenario due to the following reasons. i) The limited number of slots in the VR display (allowed to vary for each user) is not considered, and different users can be rendered on different users’ VR displays. ii) The important notion of co-display is not incorporated, and many users might suffer from VR-induced Social Isolation. iii) The users’ personal preferences and social utilities are not jointly examined, causing poor interactions between the users in the VR conference. Personalized recommendation. Personalized recommendations are widely used in E-commerce, suggesting products based on user preferences and browsing history (Chen et al. 2017; Liao et al. 2018; Zhao et al. 2022). However, these approaches fail to jointly consider the personal preferences and social interactions. In SVGD, both personal preferences and social interactions are crucially considered, accounting for diverse users displayed on different VR screens. This distinction sets the problem apart from social/preferences group queries (which identify identical users for the entire group)(Yang et al. 2012a) and group recommendations (which suggest the same items for all users). Problem Formulation and Hardness Result Given a directed social network G = (V, E), where V represents the set of users and edge set E specifies their social relationships, we first introduce Social-Aware VR Conference Display with ku-configuration (SVD ku-configuration), which configures the limited display slots of users in a largescale VR conference event. Definition 1. Social-Aware VR Conference Display with ku-configuration (SVD ku-configuration). 
Given ku display slots specified for user u to display other users in a VR conference, an SVD ku-configuration is a collection of sets A = {Au | ∀u ∈ V}, where a set Au, corresponding to user u, contains at most ku other users that appear in user u's VR display.

For a large-scale VR conference, each user u sees at most ku other users to avoid an overcrowded view. As a consequence, some users may not be rendered in a specific user's VR display due to the limited display slots. With the definition of SVD ku-configuration in hand, we now introduce co-display as follows.

Definition 2. Co-display (u ↔ v) and I(u, v). Let u ↔ v denote that users u and v appear in each other's VR display, i.e., v ∈ Au and u ∈ Av. In this case, u ↔ v is referred to as a co-display. We employ a binary indicator function I(u, v) to indicate this co-display relationship, i.e., I(u, v) = 1 if u ↔ v, and I(u, v) = 0 otherwise.

Since the VR display slots are limited for each user, users u and v might not see each other, i.e., I(u, v) = 0, and then they cannot have social interactions. Previous studies (Wang et al. 2018b; Gao et al. 2018) indicate that a user's satisfaction in a group activity is affected by two important factors: personal preferences and social utilities. Therefore, to address the above issue and enhance social interactions, it is important to consider personal preferences and social utilities jointly in our problem formulation. Specifically, given a pair of users u and v in a social network G = (V, E) with (u, v) ∈ E, let p(u, v) ≥ 0 denote the personal preference of u on v when v is rendered in u's VR display, and let τ(u, v) ≥ 0 denote the social utility for u against v, i.e., how likely u believes that u and v would have active and joyful interactions (Lai et al. 2019; Shuai et al. 2013). Please note that p(u, v) (τ(u, v)) and p(v, u) (τ(v, u)) may be different. The assignment of personal preferences and social utilities can be done by the users themselves, obtained through the use of social-aware recommendation models (Sankar et al. 2021; Fan et al. 2019), or inferred by event recommendation models (Liao et al. 2018; Yang et al. 2023).

Given the above definitions and following (Ko et al. 2020; Wang et al. 2018b; Chen and Yang 2022; Tong, Meng, and She 2015), we define the SVD utility, which integrates personal preferences and social utilities and acts as a metric for a proper configuration of the other users appearing on each user's VR display.

Definition 3. SVD utility (wAu(u, v)). Given an SVD ku-configuration A = {A1, A2, ..., A|V|}, the SVD utility of user u on user v ∈ Au combines personal preference and social utility:

    w_{A_u}(u, v) = (1 − λ) · p(u, v) + λ · τ(u, v) · I(u, v),    (1)

where λ ∈ [0, 1] is a weighting factor, which can be directly set by a user or implicitly learned from existing models (Zhao, McAuley, and King 2014; Liao et al. 2018).

An alternative approach for incorporating preferential and social factors is to employ an end-to-end machine learning approach, generating user and item representations and using a neural network aggregator to compute overall user satisfaction (Cao et al. 2018). However, this approach demands an algorithm to generate potential configurations for ranking and relies heavily on substantial training data to fine-tune the aggregator's parameters. In contrast, prior research (Wang et al. 2018b; Liao et al.
2018; Zhao, McAuley, and King 2014) has shown that a blend of preferential and social factors, combined with assigned or learned weights, can effectively evaluate user satisfaction. Alternative objective functions, such as wAu(u, v) = min(p(u, v), τ(u, v)), wAu(u, v) = max(p(u, v), τ(u, v)), and wAu(u, v) = p(u, v) · τ(u, v), yield subpar solutions and lower user satisfaction: min and max consider only the personal or only the social factor in specific contexts, and when either p or τ is exceedingly low, the min or the product may rule out an individual who nevertheless deserves consideration in real scenarios. Hence, akin to (Wang et al. 2018b; Ko et al. 2020), we frame the SVD utility as a weighted fusion of aggregated personal preferences and social utilities, governed by the parameter λ.

Here, wAu(u, v) is a directional utility from user u to a user v in her configuration, ∀u ∈ V. The users selected in user u's configuration appear in user u's VR display, but user u may not be in their configurations; hence, the SVD utility wAu(u, v) is directional. The SVD utility wAu(u, v) incorporates both the personal preference p(u, v) and the social utility τ(u, v). The social utility τ(u, v) takes effect only when the co-display condition holds, i.e., u ↔ v (I(u, v) = 1), because users u and v can interact only when they see each other in their own VR displays.

In this paper, we formulate the Social-Aware VR Conference Group Display Configuration (SVGD) problem to configure suitable users in every user's VR display by identifying the best SVD ku-configuration. SVGD includes two additional constraints: i) Personal preference constraint θ, which requires that, for a user u, any user v rendered in u's VR display must have a personal preference of at least θ, i.e., p(u, v) ≥ θ, ∀u ∈ V, v ∈ Au. Following (Hsu, Shen, and Chang 2020; Hsu, Lan, and Shen 2018), this constraint aims to meet the minimum required personal preferences to prevent users from becoming extremely dissatisfied. In SVGD, this personal preference constraint (referred to as the preference constraint hereafter) avoids rendering users that receive a low personal preference. ii) Display slot ku for all u ∈ V, which specifies the maximum number of other users that can be rendered on user u's VR display, to prevent too many people from being rendered and making the display too crowded.¹ Here, even if two users' social utilities are low, e.g., they were previously unknown to each other, they may still want to see each other on their VR displays if they have a high preference. Therefore, we do not set a social utility constraint in our problem. Please note that SVGD allows each user to see different surrounding users, which makes the problem different from traditional group query and group recommendation problems. As mentioned earlier, there are three disadvantages, D1, D2, and D3, in current VR, and it is not feasible to let everyone see each other. Therefore, we employ A1 and A2 to customize the display for each user (addressing D2 and D3), and use A3 to avoid the user's display being overcrowded (addressing D1).

¹The constraint ku can be modified to a weighted constraint to account for users' varying weights based on their proximity to each other, such that the sum of the weights of all the users in a given user's configuration does not exceed ku.
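To make Definitions 2 and 3 concrete, the following minimal Python sketch scores a candidate SVD ku-configuration under Eq. (1); the dictionary-based data layout and the function name are illustrative assumptions, not part of the paper.

```python
def svd_utility(A, p, tau, lam):
    """Total SVD utility of a configuration A (Definition 3).

    A:   dict mapping each user u to the set A_u of users shown on u's display
    p:   dict (u, v) -> personal preference p(u, v) >= 0
    tau: dict (u, v) -> social utility tau(u, v) >= 0
    lam: weighting factor lambda in [0, 1]
    """
    def co_display(u, v):
        # I(u, v) = 1 iff u and v appear in each other's displays (Definition 2)
        return 1 if v in A[u] and u in A[v] else 0

    total = 0.0
    for u, Au in A.items():
        for v in Au:
            # Eq. (1): w_{A_u}(u, v) = (1 - lam) * p(u, v) + lam * tau(u, v) * I(u, v)
            total += (1 - lam) * p[(u, v)] + lam * tau[(u, v)] * co_display(u, v)
    return total
```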
Moreover, we define the total SVD utility of A as ∑_{u∈V} ∑_{v∈A_u} w_{A_u}(u, v), which is the summation of the utility values between each user u and the other users in u's VR display, i.e., Au. Here, wAu(u, v) is the SVD utility defined in Definition 3. Specifically, the SVGD problem is formulated as follows.

Problem: Social-Aware VR Conference Group Display Configuration (SVGD).
Given: A social network G = (V, E), personal preference p(u, v) for all u, v ∈ V, social utility τ(u, v) for each directed edge (u, v) ∈ E, weighting parameter λ, personal preference constraint θ, and display slot ku, ∀u ∈ V.
Objective: To find an SVD ku-configuration A* that maximizes the total SVD utility ∑_{u∈V} ∑_{v∈A_u} w_{A_u}(u, v), where each set Au ∈ A* contains at most ku other users (ku can vary for each individual), and for each v in Au, u's personal preference for v is at least θ, i.e., p(u, v) ≥ θ.

An alternative objective is to maximize users' exposure, aiming for each user to appear in the displays of the largest possible number of other users on average. However, this objective does not consider the preference factor, leading to poor potential interactions. Another potential problem formulation is to maximize the social utilities while requiring the personal preferences to be at least θ (the personal preference constraint). However, this formulation lacks the flexibility to cover various VR conference scenarios and cater to varying user intentions. Our problem formulation permits adjusting these two trade-off parameters to fit different scenarios.

Theorem 1. SVGD is NP-hard.

Proof. We prove this theorem by reducing from the Exact Cover by Three Sets problem (Cormen et al. 2022). The complete proof is shown in Appendix A.

Approximation Algorithm For SVGD

In this section, we present the Social Utility-Aware VR Conference Group Formation (SVC) algorithm, a 2-approximation algorithm for SVGD. A simple greedy approach falls short of achieving a guaranteed performance bound. For instance, the preference-based configuration (illustrated in Fig. 1) solely accounts for A2 (personal preferences), while the social-based setup only considers A1 (social relationships), leading to compromised user satisfaction. The SVC algorithm efficiently tackles the SVGD problem by assigning near-optimal configurations to users to maximize the SVD utility. It employs a Quantified Social Benefit Network (QSB network) to accommodate two selection strategies: Personal Preference Attention Configuration (P-Configuration) and Social Utilities-Aware Configuration (S-Configuration). During the configuration, the QSB network integrates social utilities and personal preferences into the original graph, using directed edges for P-Configurations and undirected edges for S-Configurations. Next, in the Configuration Comparison stage, the effective SVD utility ratio between the two configurations is compared in order to determine the sequence of users to be included in the configuration. A superlative sequence is created for each user's configuration, ensuring that the utility obtained by each user's choice of the configuration decreases iteratively. This ensures that a better utility is extracted earlier.

Detailed Algorithm Design

Effective SVD utility with P/S-Configuration. To facilitate the effective design of P-Configuration and S-Configuration, we first extend the SVD utility defined in Equation (1), and propose the concept of effective SVD utility to capture the increment of SVD utility when executing a P-Configuration or an S-Configuration.
For a user u in a P-Configuration, when user v is added to Au, the effective SVD utility with P-Configuration of user u is denoted as ∆(u → v). Similarly, the effective SVD utility with S-Configuration of users u and v is denoted as ∆(u ↔ v). The S-Configuration involves two users u and v, i.e., SVC selects user u into Av and selects user v into Au simultaneously.

Effective SVD utility ratio with P/S-Configuration (effective P/S ratio). To enable SVC to identify appropriate users to include in a user u's VR display set Au, we build on the effective SVD utility above and define the effective SVD utility ratio with P/S-Configuration: the effective SVD utility normalized by the number of selected users whose ku are not full. It measures the effective SVD utility increment when adding a user to another user's display slots under both configurations. The effective SVD utility ratio with P-Configuration (effective P-Configuration ratio for short) of user u on v is denoted as ρ(u → v). The effective SVD utility ratio with S-Configuration (effective S-Configuration ratio for short) of users u and v is denoted as ρ(u ↔ v). The two effective ratios are critical because the display slots are limited. By measuring the normalized increment of SVD utility in user selections, SVC effectively identifies the candidates that result in better solutions with limited display slots.

Construction of the Quantified Social Benefit Network. Given the input social network G = (V, E), with i) personal preference constraint θ and ii) display slot ku, ∀u ∈ V, SVC embeds the social utilities and personal preferences into the original graph and constructs the QSB network. The SVD utility containing only personal preferences is embedded in the two weighted directed edges between each pair of users, and the sum of SVD utilities between two users with personal preferences and social utilities is embedded as a weighted undirected edge between the users. We present an example in Appendix C. Upon performing a configuration, the edge weights between the selected users are updated to preserve the approximation ratio.

SVC maintains a collection S = {Su | u ∈ V} of user sets. Each set Su ∈ S represents the users selected by SVC to appear in the VR display of user u. Initially, SVC generates a collection Aθ = {Aθu | u ∈ V} of potential sets. Each set Aθu ∈ Aθ contains only the users who satisfy the preference constraint, i.e., Aθu = {v | p(u, v) ≥ θ, ∀v ∈ V}, to filter out impossible user selections. Subsequently, SVC iteratively selects the P-Configuration or S-Configuration with the greater effective ratio. This approach ensures that the user selected in each iteration represents the current best choice. After each selection, SVC updates the corresponding ρ(·) accordingly. Specifically, if ρ(x ↔ y) < ρ(u → v), SVC performs a P-Configuration: it identifies the user v ∈ Aθu and the corresponding Su that maximize the effective P-Configuration ratio ρ(u → v), and adds user v to Su. To find ρ(u → v): i) if user u is not in Av, then ∆(u → v) = (1 − λ) · p(u, v), because in this case I(u, v) = 0, indicating that user u does not appear in user v's VR display; ii) otherwise, if user u is already in Av, then ∆(u → v) = (1 − λ) · p(u, v) + λ · (τ(u, v) + τ(v, u)), because users u and v establish the co-display relationship and I(u, v) = 1, and thus λ · (τ(u, v) + τ(v, u)) is included in ∆(u → v).
Please note that p(v, u) is excluded here, as it has previously been introduced in a P-Configuration ∆(v → u). The effective P-Configuration ratio is then ρ(u → v) = ∆(u → v) / 1, where the 1 in the denominator indicates that only one user, i.e., user v, is selected at this iteration. Next, after a P-Configuration that adds v to Su, SVC updates ρ(v → u) to (1 − λ) · p(v, u) + λ · (τ(u, v) + τ(v, u)) and sets ρ(u ↔ v) to 0, because the SVD utility of v on u increases and v can no longer be selected for u by an S-Configuration. In previous iterations, a user may have chosen another user based on a high effective P-Configuration ratio. However, it is possible that the social utilities between these two users are greater than those of other users' P-Configurations. Therefore, the social utilities between users who have previously made a P-Configuration need to be examined to ensure that the current selection is still the best option, i.e., the largest ratio. In addition, the process of edge updates is also important, since it involves one-way and opposite selections to partition the globally optimal solution. A one-way selection refers to the situation where a user u chooses another user v, whereas the opposite selection refers to user v selecting user u. With one-way selection, we partition the global optimal solution into two segments, with one segment chosen by SVC while the other is not.

Similarly, to carry out an S-Configuration, SVC identifies two users x and y that maximize the effective S-Configuration ratio ρ(x ↔ y), so as to add users x and y to Sy and Sx, respectively. The calculation of ρ(x ↔ y) is based on the effective SVD utility with S-Configuration ∆(u ↔ v), which is the SVD utility increment when user u is selected into Av and user v is selected into Au simultaneously. Therefore, ∆(u ↔ v) = (1 − λ) · (p(u, v) + p(v, u)) + λ · (τ(u, v) + τ(v, u)). In this case, I(u, v) = 1, and the co-display relationship between users u and v thereby holds. Then, ρ(u ↔ v) = ∆(u ↔ v) / 2, because 2 users (i.e., users u and v) are chosen, for Av and Au respectively. After performing an S-Configuration of x and y, SVC also needs to update the effective SVD utility: it sets ρ(x → y) and ρ(y → x) to 0, as x and y can no longer be selected for each other. The updating of edge weights in the S-Configuration prevents redundant selections: since x and y are already in each other's configuration, if the weights are not updated in time, redundant P-Configurations may be executed later, which impacts efficiency.

Next, in each iteration, SVC adds new users to the collection S by performing either the P-Configuration or the S-Configuration with the greater effective ratio. If the number of users in a set Su for a user u reaches the limit of ku display slots, SVC stops adding users to that set. If all the sets in S are full or no more users can be selected, SVC stops and returns S as the final solution A*. We present a running example and the pseudocode in Appendix D; a condensed sketch of the selection loop follows.
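The sketch below condenses the selection loop described above into runnable Python, recomputing the effective ratios on the fly; it deliberately omits the QSB edge-weight bookkeeping, and the naive rescan is quadratic per iteration, whereas the paper's implementation reaches the stated complexity with priority queues. All names are illustrative.

```python
def rho_p(u, v, S, p, tau, lam):
    """Effective P-Configuration ratio rho(u -> v): the gain of adding v to S_u,
    divided by 1 (one user selected)."""
    gain = (1 - lam) * p[(u, v)]
    if u in S[v]:  # adding v completes a co-display, activating both social terms
        gain += lam * (tau[(u, v)] + tau[(v, u)])
    return gain

def rho_s(u, v, p, tau, lam):
    """Effective S-Configuration ratio rho(u <-> v): the joint gain of a mutual
    selection, divided by 2 (two users selected)."""
    return ((1 - lam) * (p[(u, v)] + p[(v, u)])
            + lam * (tau[(u, v)] + tau[(v, u)])) / 2

def svc(users, p, tau, lam, theta, k):
    """Greedy SVC loop: repeatedly apply the P- or S-Configuration with the
    larger effective ratio until all slots are full or no candidate remains.
    k maps each user u to its slot budget k_u."""
    S = {u: set() for u in users}
    # candidate pools filtered by the preference constraint theta
    cand = {u: {v for v in users if v != u and p[(u, v)] >= theta}
            for u in users}
    while True:
        best = None  # (ratio, u, v, kind)
        for u in users:
            if len(S[u]) >= k[u]:
                continue
            for v in cand[u] - S[u]:
                r = rho_p(u, v, S, p, tau, lam)
                if best is None or r > best[0]:
                    best = (r, u, v, 'P')
                if len(S[v]) < k[v] and u in cand[v] and u not in S[v]:
                    r = rho_s(u, v, p, tau, lam)
                    if r > best[0]:
                        best = (r, u, v, 'S')
        if best is None:
            return S  # the final solution A*
        _, u, v, kind = best
        S[u].add(v)
        if kind == 'S':
            S[v].add(u)
```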
Analysis of Approximation Ratio of SVC

In this section, we prove that SVC is a 2-approximation algorithm. The core idea is that, given any optimal solution O, the selected users can be viewed as a sequence of single-user selections in descending order of effective SVD utility, i.e., {u^O_1, u^O_2, ...}; because the optimal solution has the highest total SVD utility, the configurations with high SVD utility must be selected first. The proposed SVC likewise selects a sequence of users {u_1, u_2, ...} with P/S-Configurations. We prove that the effective SVD utility of u^O_i must be less than or equal to two times the effective SVD utility of each user u_i. In other words, the effective SVD utility ratio with P/S-Configuration is bounded by two on the users selected by SVC. The time complexity of SVC is O(max_{∀u}(ku) · |V| · log(|E|)).

Theorem 2. SVC is a 2-approximation algorithm for SVGD with time complexity O(max_{∀u}(ku) · |V| · log(|E|)).

Proof. The detailed proof is presented in Appendix E.

In real applications, ku is usually small, e.g., ku < 100, enabling SVC to find the solution efficiently. Moreover, the number of users in a VR scenario is much smaller than the number of users in each dataset used in our experiments, because it only encompasses the individuals participating in the VR event. Therefore, SVC is very efficient in practical scenarios.

Experimental Results

Performance Evaluation

The detailed experiment setup is presented in Appendix I. To evaluate the effectiveness and the efficiency of the proposed SVC, we compare SVC with 12 baselines on 5 real datasets: i) Timik (Jankowski, Michalski, and Bródka 2017), ii) Pokec (Takac and Zabovsky 2012), iii) Youtube (Yang and Leskovec 2012), iv) SMMnet (Moraes and Cordeiro 2019), and v) Facebook (McAuley and Leskovec 2012). The specifics of these datasets are described in Appendix I.

Figure 2: Results on four large-scale datasets — (a) total SVD utility on different datasets, (b) execution time on different datasets (ku = 25, λ = 0.7, θ = 0.1).

While there is no existing algorithm for the SVGD problem, we implement 12 baseline approaches: i) PER (Yang et al. 2012b), ii) SOC (Yang et al. 2012b), iii) RAND, iv) MAXD (Behnezhad and Derakhshan 2020), v) KOAVG (Ko et al. 2020), vi) SSGQ (Chen et al. 2018b), vii) BCC (Dong et al. 2021), viii) COMUR (Chen and Yang 2022), ix) GraFrank (Sankar et al. 2021), x) FSGSel (Shen et al. 2022), xi) MAXGF (Shen et al. 2020), and xii) BF, which enumerates all combinations to find the optimal solution. The baselines are introduced in Appendix H. We evaluate them by the following metrics: i) total SVD utility and ii) total execution time. All algorithms are implemented on a server with an Intel Xeon W-2245 CPU and 128 GB RAM. Personal preferences and social utilities are assigned according to (Ko et al. 2020; Chen and Yang 2022).

Sensitivity tests and comparisons with optimal solutions on the small dataset. To understand the performance gap between the proposed approach and the optimal solution, we first compare the results on the small dataset, Facebook. The results demonstrate that our algorithm outperforms the other baselines. Moreover, we conduct additional sensitivity tests on the Facebook dataset with different λ, θ, ku, and different inputs generated by Random, RevGNN (Li et al. 2021), GraphRec (Fan et al. 2019), and DGRec (Yang et al. 2023). Details of the experiments are shown in Appendix J.

Scalability and sensitivity tests on large networks. Fig. 2 compares the total SVD utility on the four large social networks: Timik, Youtube, Pokec, and SMMnet.
We set λ = 0.7, θ = 0.1, and all users' display slots ku = 25 by default. Figs. 2(a) and 2(b) present the objective values and execution time of the proposed SVC and the other baselines. Because the numbers of users in these datasets differ, we performed Z-Score Normalization (Patro and Sahu 2015) on the results. The results illustrate that the proposed SVC outperforms the other baselines in terms of solution quality and efficiency.

Figure 3: Sensitivity tests on the large-scale Timik dataset — (a) total SVD utility vs. ku (λ = 0.7, θ = 0.5), (b) total SVD utility vs. λ (ku = 25, θ = 0.5), (c) execution time vs. ku, (d) execution time vs. λ.

Fig. 3 presents the sensitivity tests on the 3D VR metaverse social network dataset, Timik. In Figs. 3(a) and 3(b), we compare the results with different values of ku = {23, 25, 28, 30} and λ = {0.5, 0.6, 0.7, 0.8}. SVC achieves the best performance over all the baselines. PER and SOC do not perform well because they consider either the personal or the social aspect only. MAXD employs the degree of edges to create the configuration and thereby has difficulty ensuring that the chosen edges have a higher personal preference or social utility. KOAVG is designed to recommend items to a group of users and thereby ignores users' personal preferences. BCC and SSGQ do not pay special attention to personal preferences and do not guarantee that users can see each other, leading to high social utility limitations. GraFrank, COMUR, FSGSel, and MAXGF establish configurations based on different factors, which might disregard the important personal preferences and social utilities in the process. Moreover, SVC outperforms the other baselines by 75% on average, while the p-values of SVC against the baselines are all less than 0.05, indicating that the SVD utility of SVC is statistically greater than that of the other baselines. Figs. 3(c) and 3(d) present the execution time of the sensitivity tests. The value of λ only influences the calculation of the total SVD utility and does not affect the execution time.

Conclusion

This paper explores a new research problem, SVGD, to configure display slots for users in VR conferences by jointly considering three important factors. We analyze the hardness of SVGD and propose a 2-approximation algorithm, named SVC, to tackle SVGD. Our experiments on 5 real datasets manifest that SVC surpasses other baseline methods in both solution quality and efficiency. In future research, we plan to extend SVC to more generalized scenarios, such as the user's view being obstructed.

Acknowledgments

This work is supported in part by the Ministry of Science and Technology, Taiwan, through grant MOST 111-2221-E-007-127-.

References

Al-Baghdadi, A.; and Lian, X. 2020. Topic-based community search over spatial-social networks. VLDB.
Bagci, H.; and Karagoz, P. 2016. Context-Aware Friend Recommendation for Location Based Social Networks Using Random Walk. In WWW.
Behnezhad, S.; and Derakhshan, M. 2020. Stochastic Weighted Matching: (1−ϵ) Approximation. In FOCS.
Bulu, S. T. 2012. Place presence, social presence, co-presence, and satisfaction in virtual worlds. Computers & Education.
Cao, D.; et al. 2018. Attentive Group Recommendation. In SIGIR.
Chen, B.-J.; and Yang, D.-N. 2022. User Recommendation in Social Metaverse with VR. In CIKM.
Chen, J.; et al. 2017. Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention. In SIGIR.
Chen, L.; et al. 2018a. Maximum Co-Located Community Search in Large Scale Social Networks. VLDB.
Chen, Y.-L.; et al. 2018b. On efficient processing of group and subsequent queries for social activity planning. TKDE.
Cheng, S.; et al. 2019. Friend recommendation in social networks based on multi-source information fusion. IJMLC.
Cormen, T. H.; et al. 2022. Introduction to algorithms. MIT Press.
Dong, Z.; et al. 2021. Butterfly-Core Community Search over Labeled Graphs. VLDB.
ENGAGE. 2023. ENGAGE. https://engagevr.io/. Accessed: 2023-1-15.
Fan, W.; et al. 2019. Graph Neural Networks for Social Recommendation. In WWW.
Gao, S.; et al. 2018. Multi-role event organization in social networks. Information Sciences.
Hsu, B.-Y.; Lan, Y.-F.; and Shen, C.-Y. 2018. On automatic formation of effective therapy groups in social networks. TCSS.
Hsu, B.-Y.; Shen, C.-Y.; and Chang, M.-Y. 2020. WMEgo: Willingness Maximization for Ego Network Data Extraction in Online Social Networks. In CIKM.
Hsu, B.-Y.; Shen, C.-Y.; and Yan, X. 2019. Network intervention for mental disorders with minimum small dense subgroups. IEEE Transactions on Knowledge and Data Engineering, 33(5): 2121–2136.
Hubs. 2023. Hubs Mozilla. https://hubs.mozilla.com/?utm_medium=referral&utm_source=xr4work.com. Accessed: 2021-11-3.
Jankowski, J.; Michalski, R.; and Bródka, P. 2017. A multilayer network dataset of interaction and influence spreading in a virtual world. Scientific Data.
Ko, S.-H.; et al. 2020. Optimizing Item and Subgroup Configurations for Social-Aware VR Shopping. VLDB.
Lacko, J. 2020. Health Safety Training for Industry in Virtual Reality. In Cybernetics & Informatics (K&I).
Lai, H.-C.; et al. 2019. Social-Aware VR Configuration Recommendation via Multi-Feedback Coupled Tensor Factorization. In CIKM.
Li, G.; et al. 2021. Training Graph Neural Networks with 1000 Layers. In ICML.
Liao, Y.; et al. 2018. Joint Modeling of Participant Influence and Latent Topics for Recommendation in Event-Based Social Networks. ACM Trans. Inf. Syst.
Lin, K.-P.; Shen, C.-Y.; Chang, T.-L.; and Chang, T.-M. 2017. A consumer review-driven recommender service for web e-commerce. In 2017 IEEE 10th Conference on Service-Oriented Computing and Applications (SOCA), 206–210. IEEE.
Lu, Z.; et al. 2022. On Time-optimal (k, p)-core Community Search in Dynamic Graphs. In ICDE.
Ma, C.; et al. 2022. Finding Locally Densest Subgraphs: A Convex Programming Approach. VLDB.
McAuley, J.; and Leskovec, J. 2012. Learning to Discover Social Circles in Ego Networks. In NIPS.
McVeigh-Schultz, J.; Kolesnichenko, A.; and Isbister, K. 2019. Shaping Pro-Social Interaction in VR: An Emerging Design Framework. In CHI.
Meta. 2023. Workrooms. https://www.oculus.com/workrooms/?utm_medium=referral&utm_source=xr4work.com. Accessed: 2021-11-3.
Mileva, G. 2022. 50+ metaverse statistics. https://influencermarketinghub.com/metaverse-stats/.
Moraes, L. M. P.; and Cordeiro, R. L. F. 2019.
SMMnet: A Social Network of Games Dataset. In SBBD.
Ning, H.; et al. 2021. A Survey on Metaverse: the State-of-the-art, Technologies, Applications, and Challenges.
Patro, S.; and Sahu, K. K. 2015. Normalization: A preprocessing stage. arXiv preprint arXiv:1503.06462.
Pluto. 2018. A 2018 survey by Pluto VR and The Extended Mind. https://www.extendedmind.io/2018-survey-of-social-vr-users. Accessed: 2022-9-25.
Raber, F.; Schommer, C.; and Krüger, A. 2019. FriendGroupVR: Design Concepts Using Virtual Reality to Organize Social Network Friends. In INTERACT.
Sanei-Mehri, S.-V.; et al. 2021. Mining largest maximal quasi-cliques. TKDD.
Sankar, A.; et al. 2021. Graph Neural Networks for Friend Ranking in Large-Scale Social Platforms. In WWW.
Shen, C.-Y.; Shuai, H.-H.; Hsu, K.-F.; and Chen, M.-S. 2017. Task-Optimized Group Search for Social Internet of Things. In EDBT, 108–119.
Shen, C.-Y.; et al. 2015. Socio-spatial group queries for impromptu activity planning. TKDE.
Shen, C.-Y.; et al. 2020. Activity organization for friend-making optimization in online social networks. TKDE.
Shen, C.-Y.; et al. 2022. Density Personalized Group Query. VLDB.
Shuai, H.-H.; et al. 2013. Willingness Optimization for Social Group Activity. VLDB.
Spatial. 2023. Spatial. https://spatial.io/?utm_medium=referral&utm_source=xr4work.com. Accessed: 2021-11-3.
Takac, L.; and Zabovsky, M. 2012. Data analysis in public social networks. International Scientific Conference and International Workshop Present Day Trends of Innovations.
Tong, Y.; Meng, R.; and She, J. 2015. On bottleneck-aware arrangement for event-based social networks. In ICDE Workshops.
Wang, L.; et al. 2018a. Service Entity Placement for Social Virtual Reality Applications in Edge Computing. In INFOCOM.
Wang, X.; et al. 2018b. Joint User- and Event-Driven Stable Social Event Organization. In WWW.
Yang, C.-H.; Shuai, H.-H.; Shen, C.-Y.; and Chen, M.-S. 2021. Learning to solve task-optimized group search for social Internet of Things. IEEE Transactions on Knowledge and Data Engineering, 34(11): 5429–5445.
Yang, D.-N.; Shen, C.-Y.; Lee, W.-C.; and Chen, M.-S. 2012a. On Socio-Spatial Group Query for Location-Based Social Networks. In KDD, 949–957.
Yang, J.; and Leskovec, J. 2012. Defining and Evaluating Network Communities Based on Ground-Truth. In ICDM.
Yang, L.; et al. 2023. DGRec: Graph Neural Network for Recommendation with Diversified Embedding Generation. In WSDM.
Yang, X.; et al. 2012b. On Top-k Recommendation Using Social Networks. In RecSys.
Zhao, K.; et al. 2022. Joint Learning of E-Commerce Search and Recommendation with a Unified Graph Neural Network. In WSDM.
Zhao, T.; McAuley, J.; and King, I. 2014. Leveraging Social Connections to Improve Personalized Ranking for Collaborative Filtering. In CIKM.
Zhao, W. X.; et al. 2016. Connecting Social Media to E-Commerce: Cold-Start Product Recommendation Using Microblogging Information. TKDE.
2024
947
18,792
AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model

Teng Hu1*, Jiangning Zhang2*, Ran Yi1†, Yuzhen Du1, Xu Chen2, Liang Liu2, Yabiao Wang2, Chengjie Wang1,2
1Shanghai Jiao Tong University
2Youtu Lab, Tencent
{hu-teng, ranyi, Haaaaaaaaaa}@sjtu.edu.cn; {vtzhang, cxxuchen, leoneliu, caseywang, jasoncjwang}@tencent.com

Abstract

Anomaly inspection plays an important role in industrial manufacturing. Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data. Although anomaly generation methods have been proposed to augment the anomaly data, they either suffer from poor generation authenticity or inaccurate alignment between the generated anomalies and masks. To address the above problems, we propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model, which utilizes the strong prior information of a latent diffusion model learned from a large-scale dataset to enhance the generation authenticity under few-shot training data. Firstly, we propose Spatial Anomaly Embedding, which consists of a learnable anomaly embedding and a spatial embedding encoded from an anomaly mask, disentangling the anomaly information into anomaly appearance and location information. Moreover, to improve the alignment between the generated anomalies and the anomaly masks, we introduce a novel Adaptive Attention Re-weighting Mechanism. Based on the disparities between the generated anomaly image and the normal sample, it dynamically guides the model to focus more on the areas with less noticeable generated anomalies, enabling generation of accurately-matched anomalous image-mask pairs. Extensive experiments demonstrate that our model significantly outperforms the state-of-the-art methods in generation authenticity and diversity, and effectively improves the performance of downstream anomaly inspection tasks. The code and data are available at https://github.com/sjtuplayer/anomalydiffusion.

Introduction

In recent years, industrial anomaly inspection algorithms, i.e., anomaly detection, localization, and classification, play a crucial role in industrial manufacturing (Duan et al. 2023). However, in real-world industrial production, anomaly samples are scarce, posing a significant challenge for anomaly inspection (Fig. 1-top). To mitigate the scarcity of anomaly data, existing anomaly inspection mostly relies on unsupervised learning methods that only use normal samples (Zavrtanik, Kristan, and Skočaj 2021; Li et al. 2021), or few-shot supervised learning methods (Zhang et al. 2023a).

*These authors contributed equally. †Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Top: Our model generates extensive anomaly data, which supports the downstream Anomaly Detection (AD), Localization (AL), and Classification (AC) tasks, while previous methods mainly rely on unsupervised learning or few-shot supervised learning due to the limited anomaly data; Bottom: Generated anomaly results on hazelnut-crack and capsule-squeeze of (a) DRAEM (ICCV'21), (b) Crop&Paste (ICME'21), (c) DFMGAN (AAAI'23), and (d) our model, where our results are the most authentic.

Although these methods perform well in anomaly detection, they have limited performance in anomaly localization and cannot handle anomaly classification. To cope with the problem of scarce anomaly samples, researchers propose anomaly generation methods to supplement the anomaly data, which can be divided into two types: 1) The model-free methods randomly crop and paste patches from existing anomalies or anomaly texture datasets onto normal samples (Li et al. 2021; Lin et al. 2021; Zavrtanik, Kristan, and Skočaj 2021). But such methods exhibit poor authenticity in the synthesized data (Fig. 1-bottom-a/b). 2) The GAN-based methods (Zhang et al. 2021; Niu et al. 2020; Duan et al. 2023) utilize Generative Adversarial Networks (GANs) (Goodfellow et al. 2014) to generate anomalies, but most of them require a large number of anomaly samples for training. The only few-shot generation model, DFMGAN (Duan et al. 2023), employs a StyleGAN2 (Karras et al. 2020) pretrained on normal samples and then performs domain adaptation with a few anomaly samples, but the generated anomalies are not accurately aligned with the anomaly masks (Fig. 1-bottom-c). To sum up, the existing anomaly generation methods either fail to generate authentic anomalies or fail to produce accurately-aligned anomalous image-mask pairs by learning from few-shot anomaly data, which limits their improvement in the downstream anomaly inspection tasks.

To address the above issues, we propose AnomalyDiffusion, a novel anomaly generation method based on the diffusion model, which generates anomalies onto the input normal samples with the anomaly masks. By leveraging the strong prior information of a pretrained LDM (Rombach et al. 2022) learned from a large-scale dataset (Schuhmann et al. 2021), we can extract a better anomaly representation using only a few anomaly images and boost the generation authenticity and diversity. To generate anomalies with specified type and locations, we propose Spatial Anomaly Embedding, which disentangles anomaly information into an anomaly embedding (a learned textual embedding to represent the appearance type of anomaly) and a spatial embedding (encoded from an anomaly mask to indicate the locations). By disentangling anomaly location from appearance, we can generate anomalies in any desired positions, which enables producing a large amount of anomalous image-mask pairs for the downstream tasks.
Moreover, we propose an Adaptive Attention Re-weighting Mechanism to allocate more attention to the areas with less noticeable generated anomalies, which dynamically adjusts the cross-attention maps based on disparities between the generated images and input normal samples during the diffusion inference stage. This adaptive mechanism results in accurately aligned generated anomaly images and anomaly masks, which greatly facilitates downstream anomaly localization tasks. Extensive qualitative and quantitative experiments and comparisons demonstrate that our AnomalyDiffusion outperforms state-of-the-art anomaly generation models in terms of generation authenticity and diversity. Moreover, our generated anomaly images can be effectively applied to downstream anomaly inspection tasks, yielding a pixel-level 99.1% AUROC and 81.4% AP score in anomaly localization on MVTec (Bergmann et al. 2019). The main contributions of this paper can be summarized as follows:

• We propose AnomalyDiffusion, a few-shot diffusion-based anomaly generation method, which disentangles anomalies into an anomaly embedding (for anomaly appearance) and a spatial embedding (for anomaly location), and generates authentic and diverse anomaly images.
• We design an Adaptive Attention Re-weighting Mechanism, which adaptively allocates more attention to the areas with less noticeable generated anomalies, improving the alignment between the generated anomalies and masks.
• Extensive experiments demonstrate the superiority of our model over the state-of-the-art competitors, and our generated anomaly data, which will be released to facilitate future research, effectively improves the performance of downstream anomaly inspection tasks.

Related Work

Generative Models

Generative models. VAEs (Kingma and Welling 2013) and GANs (Goodfellow et al. 2014) have achieved great progress in image generation. Recently, diffusion models (Nichol and Dhariwal 2021) have demonstrated even greater potential in generating images across a wide range of domains. The latent diffusion model (LDM) (Rombach et al. 2022) further improves the generation ability through compression of the diffusion space and obtains strong prior information by training on the LAION dataset (Schuhmann et al. 2021).

Few-shot image generation. Few-shot image generation aims to generate diverse images with limited training data. Early methods propose modifying network weights (Mo, Cho, and Shin 2020), using various regularization techniques (Li et al. 2020), and data augmentation (Tran et al. 2021) to prevent overfitting. To deal with extremely limited data (fewer than 10 images), recent works (Ojha et al. 2021; Wang et al. 2022; Hu et al. 2023a) introduce cross-domain consistency losses to preserve the generated distribution. Textual Inversion (Gal et al. 2022) and DreamBooth (Ruiz et al. 2023) encode a few images into the textual space of a pretrained LDM, but cannot control the generated locations accurately.

Anomaly Inspection

Anomaly inspection. The anomaly inspection task consists of anomaly detection, localization, and classification. Some existing methods (Schlegl et al. 2017, 2019; Liang et al. 2023) rely on image reconstruction, comparing the differences between reconstructed images and anomaly images to achieve anomaly detection and localization. Moreover, deep feature modeling-based methods (Lee, Lee, and Song 2022; Cao et al. 2022; Roth et al. 2022; Gu et al. 2023; Wang et al.
2023) build a feature space for input images and then compare the differences between features to detect and localize anomalies. Additionally, some supervised learning-based methods (Zhang et al. 2023a) utilize a small number of anomaly samples to enhance the anomaly localization capabilities. Some studies conduct zero-/few-shot AD without using or with only a small number of anomaly samples (Jeong et al. 2023; Cao et al. 2023; Chen, Han, and Zhang 2023; Chen et al. 2023; Zhang et al. 2023b; Huang et al. 2022). Although these methods have shown promising results in anomaly detection, their performance in anomaly localization is still limited due to the lack of anomaly data.

Anomaly generation. The scarcity of anomaly data has sparked research interest in anomaly generation. DRAEM (Zavrtanik, Kristan, and Skočaj 2021), CutPaste (Li et al. 2021), Crop-Paste (Lin et al. 2021) and PRN (Zhang et al. 2023a) crop and paste unrelated textures or existing anomalies onto normal samples, but they either generate less realistic anomalies or have limited generation diversity. The GAN-based models SDGAN (Niu et al. 2020) and Defect-GAN (Zhang et al. 2021) generate anomalies on normal samples by learning from anomaly data, but they require a large amount of anomaly data and cannot generate anomaly masks. DFMGAN (Duan et al. 2023) transfers a StyleGAN2 (Karras et al. 2020) pretrained on normal samples to the anomaly domain, but lacks generation authenticity and accurate alignment between generated anomalies and masks. In contrast, our model incorporates spatial anomaly embedding and an adaptive attention re-weighting mechanism, which can generate anomalous image-mask pairs with great diversity and authenticity.

Method

Figure 2: Overall framework of our AnomalyDiffusion: 1) The Spatial Anomaly Embedding e, consisting of an anomaly embedding ea (a learned textual embedding to represent anomaly appearance type) and a spatial embedding es (encoded from an input anomaly mask m to indicate anomaly locations), serves as the text condition to guide the anomaly generation process; 2) The Adaptive Attention Re-weighting Mechanism computes the weight map wm based on the difference between the denoised image x̂0 and the input normal sample y, and adaptively reweights the cross-attention map mc by the weight map wm to help the model focus more on the less noticeable anomaly areas during the denoising process.

Our AnomalyDiffusion aims to generate a large amount of anomaly data aligned with anomaly masks by learning from a few anomaly samples. The inputs to our model include an anomaly-free sample y and an anomaly mask m, and the output is an image with anomalies generated in the mask area, while the remaining region is consistent with the input anomaly-free sample. As shown in Fig. 2, our AnomalyDiffusion is developed based on the Latent Diffusion Model (Rombach et al. 2022). To disentangle the anomaly location information from the anomaly appearance, we propose the Spatial Anomaly Embedding e, which consists of an anomaly embedding ea (for anomaly appearance) and a spatial embedding es (for anomaly location). Moreover, to enhance the alignment between the generated anomalies and the given masks, we introduce an Adaptive Attention Re-weighting Mechanism, which helps the model allocate more attention to the areas with less noticeable generated anomalies (Fig. 3(c)).
Specifically, the anomaly embedding ea provides the anomaly appearance type information, with one ea corresponding to a certain type of anomaly (e.g., hazelnut-crack, capsule-squeeze), which is learned by our masked textual inversion (described below). The spatial embedding es provides the anomaly location information and is encoded from the input anomaly mask m by a spatial encoder E (shared among all anomalies). By combining the anomaly embedding ea with the spatial embedding es, the spatial anomaly embedding e contains both the anomaly appearance and spatial information, and serves as the text condition in the diffusion model to guide the generation process. With the spatial anomaly embedding as condition, given a normal sample, we generate an anomaly image with the blended diffusion process (Avrahami, Lischinski, and Fried 2022):

    x_{t−1} = p_θ(x_{t−1} | x_t, e) ⊙ m + q(y_{t−1} | y_0) ⊙ (1 − m),    (1)

where x_t is the generated anomaly image at timestep t, y_0 is the input normal sample, m is the anomaly mask, and q(·) and p_θ(·) are the forward and backward diffusion processes introduced in the Preliminaries below.

Preliminaries

Denoising diffusion probabilistic models (DDPM) (Ho, Jain, and Abbeel 2020) have achieved significant success in image generation tasks. A DDPM employs a forward process to add noise to the data and then learns denoising in the backward process, thereby fitting the training data distribution. With a training image x_0, the forward process q(·) in the diffusion model is formulated as:

    q(x_1, ..., x_T | x_0) := ∏_{t=1}^{T} q(x_t | x_{t−1}),
    q(x_t | x_{t−1}) := N(x_t; √(1 − β_t) · x_{t−1}, β_t I),    (2)

where β_t is the variance at timestep t. The backward process is approximated by iteratively predicting the mean µ_θ(x_t, t) and variance Σ_θ(x_t, t) (set as a constant in DDPM) of a Gaussian distribution:

    p_θ(x_{t−1} | x_t) := N(x_{t−1}; µ_θ(x_t, t), Σ_θ(x_t, t)).    (3)

Textual inversion (Gal et al. 2022) utilizes a pre-trained Latent Diffusion Model to extract the shared content information in few-shot input samples by optimizing text embeddings. With the refined text embeddings as condition c, textual inversion can generate novel images x_0 with contents similar to the input images by:

    x_0 = ∏_{t=1}^{T} p_θ(x_{t−1} | x_t, c),    x_T ∼ N(0, 1).    (4)
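As a bridge back to Eq. (1), here is a minimal Python sketch of one blended denoising step; `model` (a sample from the backward posterior) and `q_sample` (the forward noising of Eq. (2)) are assumed helper functions rather than any specific library's API.

```python
def blended_step(x_t, y0, mask, e, t, model, q_sample):
    """One denoising step of Eq. (1): the masked region follows the learned
    backward process conditioned on the spatial anomaly embedding e, while
    the unmasked region is resampled from the forward process of the normal
    image y0, keeping it consistent with the anomaly-free input."""
    x_prev = model(x_t, t, condition=e)   # sample from p_theta(x_{t-1} | x_t, e)
    y_prev = q_sample(y0, t - 1)          # sample from q(y_{t-1} | y_0)
    return x_prev * mask + y_prev * (1 - mask)
```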
Spatial Anomaly Embedding

Disentangling spatial information from anomaly appearance. We aim at controllable anomaly generation with specified anomaly type and location. A direct solution is to control the anomaly type by a textual embedding learned from textual inversion (Gal et al. 2022), and the anomaly location by the input mask. However, textual inversion tends to capture the location of anomalies along with the anomaly type information, which results in generated anomalies distributed only in specific locations. To address this issue, we propose to disentangle the textual embedding into two parts, where one part (the spatial embedding es) is directly encoded from the anomaly mask to indicate the anomaly location, leaving the rest (the anomaly embedding ea) to learn only the anomaly type information. We name our decomposed textual embedding the Spatial Anomaly Embedding.

Anomaly embedding. The anomaly embedding is a learned textual embedding that represents the anomaly appearance type information. Different from textual inversion, which learns the features of the entire image, in anomaly generation our model only needs to focus on the anomaly areas, without requiring information about the entire image. Therefore, we introduce masked textual inversion, where we mask out the irrelevant background and normal regions of the anomaly image, so that only the anomaly regions are visible to the model. We initialize the anomaly embedding ea with k tokens and optimize it using the masked diffusion loss:

    L_dif = ‖m ⊙ (ϵ − ϵ_θ(z_t, t, {e_a, e_s}))‖²₂,    (5)

where ϵ ∼ N(0, 1) and z_t is the noised latent code of the input image x at timestep t.

Spatial embedding. To provide accurate spatial information about the anomaly locations, we introduce a spatial encoder E that encodes the input anomaly mask m into the spatial embedding es, which takes the form of a textual embedding and contains precise location information from the mask. Specifically, we input the anomaly mask into ResNet-50 (He et al. 2016) to extract the image features at different layers and fuse them together with a Feature Pyramid Network (Lin et al. 2017). Finally, several fully-connected networks map the fused features into the textual embedding space, with each network predicting one text token, thereby outputting the final spatial embedding es with n tokens.

Figure 3: Comparison between the models w/ (Ours) and w/o Adaptive Attention Re-weighting (AAR): (a) mask, (b) ours, (c) w/o AAR. The model w/o AAR cannot generate anomalies that fill the entire mask.

Overall training framework. For each anomaly type i, we employ an anomaly embedding ea,i to extract its appearance information, while all anomaly categories share a common spatial encoder E. For a set of image-mask pairs (x_i, m_i) in the training data, we first input the anomaly mask m_i into the spatial encoder E to obtain the spatial embedding e_s = E(m_i). Then, we concatenate the anomaly embedding e_{a,i} and the spatial embedding e_s to obtain our spatial anomaly embedding e = {e_a, e_s}. Finally, the concatenated textual embedding e is used as the text condition to the diffusion model, and the training process can be formulated as:

    e*_a, E* = arg min_{e_a, E}  E_{z∼E(x_i), m_i, ϵ, t} [ L_dif ],    (6)

where E(·) is the image encoder of the latent diffusion model and ϵ ∼ N(0, 1).

Adaptive Attention Re-Weighting

With the spatial anomaly embedding e as the text condition, we can guide the generation of anomaly images by Eq. (1). However, the generated anomaly images sometimes fail to fill the entire mask, especially when there are multiple anomaly regions in the mask or when the mask has an irregular shape (Fig. 3-a/c). In such cases, the generated anomalies are usually not well aligned with the mask, which limits the improvement in the downstream anomaly localization task. To address this problem, we propose an adaptive attention re-weighting mechanism, which allocates more attention to the areas with less noticeable generated anomalies during the denoising process, thereby facilitating better alignment between the generated anomalies and the anomaly masks.

Adaptive attention weight map. Specifically, at the t-th denoising step, we calculate the corresponding x̂_0 = D(p_θ(ẑ_0 | z_t, e)) (where D is the decoder of the LDM). Then, we calculate the pixel-level difference between x̂_0 and the normal sample y within the mask m. Based on the difference, we calculate the weight map w_m by the Adaptive Scaling Softmax (ASS) operation:

    w_m = ‖m‖₁ · Softmax(f(‖m ⊙ y − m ⊙ x̂_0‖²₂)),    (7)

where f(x) = 1/x when x ≠ 0 and f(x) = −∞ otherwise. For the regions within the mask that are similar to the normal sample, the generated anomalies are less noticeable. To enhance the anomaly generation effects, these regions are assigned higher weights by Eq. (7) and allocated more attention during the subsequent attention re-weighting; a sketch of this weight-map computation follows.
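A minimal PyTorch sketch of the weight map in Eq. (7) for a single image; the `eps` term replaces the exact f(x) = 1/x with a numerically stable variant inside the mask, which is our assumption rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def adaptive_weight_map(y, x0_hat, mask, eps=1e-8):
    """Adaptive Scaling Softmax weight map of Eq. (7).

    y, x0_hat: (C, H, W) normal sample and current denoised estimate
    mask:      (1, H, W) binary anomaly mask
    """
    diff = ((mask * y - mask * x0_hat) ** 2).sum(dim=0)  # per-pixel squared error
    # f(x) = 1/x inside the mask (stabilized with eps), -inf outside,
    # so un-masked pixels receive zero weight after the softmax
    logits = torch.where(mask[0] > 0,
                         1.0 / (diff + eps),
                         torch.full_like(diff, float('-inf')))
    w = F.softmax(logits.flatten(), dim=0).reshape(diff.shape)
    return mask[0].sum() * w  # rescale by ||m||_1 as in Eq. (7)
```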
Attention re-weighting. We employ the weight map w_m to adaptively control the cross-attention, in order to guide our model to focus more on the areas with less noticeable generated anomalies. In our cross-attention calculation, Query is computed from the latent code z_t, and Key and Value are computed from our spatial anomaly embedding e:

    Q = W_Q^{(i)} · φ_i(z_t),  K = W_K^{(i)} · e,  V = W_V^{(i)} · e,    (8)

where φ_i is the intermediate representation of the U-Net (ϵ_θ) and the W^{(i)}s are the learnable projection matrices. The cross-attention calculation is then formulated as Attn(Q, K, V) = m_c · V, where m_c = Softmax(QKᵀ/√d) is the cross-attention map. Considering that the cross-attention map m_c controls the generated layout and effects, where higher attention leads to stronger generation effects (Hertz et al. 2022), we reweight the cross-attention map by our weight map: m′_c = m_c ⊙ w_m. The new cross-attention map m′_c focuses more on the areas with less noticeable generated anomalies, thereby enhancing the alignment accuracy between the generated anomalies and the input anomaly masks. The re-weighted cross-attention is formulated as RW-Attn(Q, K, V) = m′_c · V, as sketched below.
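The re-weighting itself reduces to an element-wise product on the attention map, as in this sketch; the flattened shapes and variable names are illustrative assumptions.

```python
import torch

def reweighted_cross_attention(q, k, v, w_map):
    """Cross-attention with adaptive re-weighting: the attention map m_c is
    multiplied element-wise by the (flattened) spatial weight map before
    aggregating the values.

    q: (N, d) queries from the latent code; k, v: (L, d) keys/values from the
    spatial anomaly embedding; w_map: (N,) weights aligned with the queries.
    """
    d = q.shape[-1]
    attn = torch.softmax(q @ k.T / d ** 0.5, dim=-1)  # m_c = Softmax(QK^T / sqrt(d))
    attn = attn * w_map[:, None]                      # m'_c = m_c ⊙ w_m
    return attn @ v                                   # RW-Attn(Q, K, V) = m'_c · V
```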
Mask Generation

Recall that our model requires anomaly masks as inputs. However, the number of real anomaly masks in the training datasets is very small, and the mask data lacks diversity even after augmentation, which motivates us to generate more anomaly masks by learning the real mask distribution. We employ textual inversion to learn a mask embedding e_m, which can be used as the text condition to generate extensive anomaly masks. Specifically, we initialize the mask embedding e_m as k′ random tokens and optimize it by:

    e*_m = arg min_{e_m}  E_{z∼E(m), ϵ, t} [ ‖ϵ − ϵ_θ(z_t, t, e_m)‖²₂ ].    (9)

With the learned mask embedding, we can generate extensive anomaly masks for each type of anomaly; a minimal sketch of this optimization follows.
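A sketch of one optimization step for the mask embedding of Eq. (9); the denoiser `eps_theta`, latent encoder `vae_encode`, and noising helper `add_noise` are assumed stand-ins, not a specific library's API.

```python
import torch

def train_step(e_m, mask, vae_encode, add_noise, eps_theta, opt):
    """One textual-inversion step for the mask embedding e_m (Eq. (9)).
    e_m must be a leaf tensor with requires_grad=True, and `opt` should be
    built over [e_m] only, so just the embedding is updated."""
    z0 = vae_encode(mask)                 # latent code of a real anomaly mask
    t = torch.randint(0, 1000, (1,))      # random diffusion timestep
    eps = torch.randn_like(z0)
    zt = add_noise(z0, eps, t)            # forward process q(z_t | z_0)
    loss = ((eps - eps_theta(zt, t, e_m)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```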
anomaly localization and image-level anomaly detection on MVTec dataset by training an U-Net on the generated data from DRAEM, PRN, DFMGAN and our model with AUC, AP, and F1-max metrics. GT Ours DRAEM DFMGAN Input Figure 5: Quantitative anomaly localization comparison with an U-Net trained on the data generated by DRAEM, DFMGAN, and our model. It shows that our model achieves the best anomaly localization results. anomaly data, for a direct assessment of generation quality; we also introduce Intra-cluster pairwise LPIPS distance (IC-LPIPS) (Ojha et al. 2021) to measure the generation diversity. 2) for anomaly inspection, we utilize AUROC, Average Precision (AP), and the F1-max score to evaluate the accuracy of anomaly detection and localization. Comparison in Anomaly Generation Baseline. The compared anomaly generation methods can be classified into 2 groups: 1) the models (Crop&Paste (Lin et al. 2021), DRAEM (Zavrtanik, Kristan, and Skoˇcaj 2021), PRN (Zhang et al. 2023a) and DFMGAN (Duan et al. 2023)) that can generate anomalous image-mask pairs, which are employed to compare anomaly detection and localization; 2) the models (DiffAug (Zhao et al. 2020), CDC (Ojha et al. 2021), Crop&Paste, SDGAN (Niu et al. 2020), DefectGAN (Zhang et al. 2021) and DFMGAN) that can generate specific anomaly types, which are employed to compare anomaly generation quality. Anomaly generation quality. We compare our model with DiffAug, CDC, Crop&Paste, SDGAN, DefectGAN and DFMGAN on anomaly generation quality and diversity in Tab. 1. Since DRAEM and PRN crop random textures to imitate anomalies, we cannot compute IC-LPIPS for them. For each anomaly category, we allocate one-third of the anomaly data for training and generate 1000 anomaly images to compute IS and IC-LPIPS. It demonstrates that our model generates anomaly data with both the highest quality and diversity. Moreover, we exhibit the generated anomalies in Fig. 4. It can be seen that our model excels in producing highquality authentic anomalies that accurately align with their corresponding masks. In contrast, CDC yields visually perplexing outcomes, particularly for structural anomaly categories like capsule-squeeze. SDGAN and DefectGAN yield poor outputs, frequently encountering difficulties in generating anomalies such as pill-crack. The state-of-the-art model DFMGAN sometimes struggles to produce authentic anomalies and fails to keep the alignment between the generated anomalies and masks, as shown in metal nut-bent. More results are presented in supplementary material. Anomaly generation for anomaly detection and localization. We compare the performance of our approach with existing anomaly generation methods in downstream anomaly detection and localization. Due to the inability of DiffAug and SDGAN to generate anomaly masks, we only compare our method with Crop&Paste, DRAEM, PRN, and DFMGAN. For each method, we generate 1000 images per anomaly category and train an U-Net (Ronneberger, Fischer, and Brox 2015) alongside normal samples for anomaly localization. The localization outcomes are aggregated using average pooling to derive confidence scores for imagelevel anomaly detection (the same as DREAM). We compute pixel-level metrics including AUROC, AP, F1-max. The results, as presented in Tab. 
2, illustrate that our model outperforms other anomaly generation models at most condiThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8531 Category Unsupervised Supervised KDAD CFLOW DRAEM SSPCAB CFA RD4AD PatchCore DevNet DRA PRN Ours bottle 94.7/50.5 98.8/49.9 99.1/88.5 98.9/88.6 98.9/50.9 98.8/51.0 97.6/75.0 96.7/67.9 91.7/41.5 99.4/92.3 99.3/94.1 cable 79.2/11.6 98.9/72.6 94.8/61.4 93.1/52.1 98.4/79.8 98.8/77.0 96.8/65.9 97.9/67.6 86.1/34.8 98.8/78.9 99.2/90.8 capsule 96.3/ 9.9 99.5/64.0 97.6/47.9 90.4/48.7 98.9/71.1 99.0/60.5 98.6/46.6 91.1/46.6 88.5/11.0 98.5/62.2 98.8/57.2 carpet 91.5/45.8 99.7/67.0 96.3/62.5 92.3/49.1 99.1/47.7 99.4/46.0 98.7/65.0 94.6/19.6 98.2/54.0 99.0/82.0 98.6/81.2 grid 89.0/ 7.6 99.1/87.8 99.5/53.2 99.6/58.2 98.6/82.9 98.0/75.4 97.2/23.6 90.2/44.9 86.2/28.6 98.4/45.7 98.3/52.9 hazelnut 95.0/34.2 97.9/67.2 99.5/88.1 99.6/94.5 98.5/80.2 94.2/57.2 97.6/55.2 76.9/46.8 88.8/20.3 99.7/93.8 99.8/96.5 leather 98.2/26.7 99.2/91.1 98.8/68.5 97.2/60.3 96.2/60.9 96.6/53.5 98.9/43.4 94.3/66.2 97.2/ 5.1 99.7/69.7 99.8/79.6 metal nut 81.7/30.6 98.8/78.2 98.7/91.6 99.3/95.1 98.6/74.6 97.3/53.8 97.5/86.6 93.3/57.4 80.3/30.6 99.7/98.0 99.8/98.7 pill 90.1/23.1 98.9/60.3 97.7/44.8 96.5/48.1 98.8/67.9 98.4/58.1 97.0/75.9 98.9/79.9 79.6/22.1 99.5/91.3 99.8/97.0 screw 95.4/ 5.9 98.8/45.7 99.7/72.9 99.1/62.0 98.7/61.4 99.1/51.8 98.7/34.2 66.5/21.1 51.0/ 5.1 97.5/44.9 97.0/51.8 tile 78.6/26.7 98.0/86.7 99.4/96.4 99.2/96.3 98.6/92.6 97.4/78.2 94.9/56.0 88.7/63.9 91.0/54.4 99.6/96.5 99.2/93.9 toothbrush 95.6/20.0 99.1/56.9 97.3/49.2 97.5/38.9 98.4/61.7 99.0/63.1 97.6/37.1 96.3/52.4 74.5/ 4.8 99.6/78.1 99.1/76.5 transistor 76.0/25.9 98.8/40.6 92.2/56.0 85.3/36.5 98.6/82.9 99.6/50.3 91.8/66.7 55.2/ 4.4 79.3/11.2 98.4/85.6 99.3/92.6 wood 88.3/24.7 98.9/47.2 97.6/81.6 97.2/77.1 97.6/25.6 99.3/39.1 95.7/54.3 93.1/47.9 82.9/21.0 97.8/82.6 98.9/84.6 zipper 95.1/30.5 96.5/63.9 98.6/73.6 98.1/78.2 95.9/53.9 99.7/52.7 98.5/63.1 92.4/53.1 96.8/42.3 98.8/77.6 99.4/86.0 Average 89.6/24.9 98.7/65.3 97.7/69.0 96.2/65.5 98.3/66.3 98.3/57.8 97.1/56.6 86.4/49.3 84.8/25.7 99.0/78.6 99.1/81.4 Table 3: Comparison on pixel-level anomaly localization (AUROC/AP) between the simple U-Net trained on our generated dataset and the existing anomaly detection methods with their official codes or pre-trained models. Method Metric SAE Masked L AAR AUROC AP F1-max 81.3 31.1 46.5 ✓ 90.3 51.2 60.7 ✓ ✓ 95.0 64.9 68.8 ✓ ✓ 95.5 67.5 68.9 ✓ ✓ ✓ 99.1 81.4 76.3 Table 4: Ablation study on our spatial anomaly embedding (SAE), masked diffusion loss (Masked L) and adaptive attention re-weighting mechanism (AAR). tions.Furthermore, we also evaluate image-level AUROC, AP, and F1-max scores in Tab. 2. It demonstrates our model has the best anomaly detection performance compared to other methods. We also compare the qualitative results on anomaly localization in Fig. 5, which shows our superior performance in localizing the anomalies. Comparison with Anomaly Detection Models To further validate the efficacy of our model, we conduct a comparative experiment with the state-of-the-art anomaly detection methods CFLOW (Gudovskiy, Ishizaka, and Kozuka 2022), DRAEM (Zavrtanik, Kristan, and Skoˇcaj 2021), CFA (Lee, Lee, and Song 2022), RD4AD (Deng and Li 2022), PatchCore (Roth et al. 2022), DevNet (Pang et al. 2021), DRA (Ding, Pang, and Shen 2022) and PRN (Zhang et al. 2023a). We employ their official codes or pre-trained models and evaluate them on the same testing dataset that we use. 
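(As a side note on the evaluation protocol shared by these comparisons: the pixel-level AUROC, AP, and F1-max scores can be computed from ground-truth masks and predicted anomaly maps as in the sketch below. This is an illustrative scikit-learn helper, not the authors' evaluation code, and the array names are placeholders.)

```python
# Sketch of the pixel-level metrics (AUROC, AP, F1-max) used in Tabs. 2-3.
import numpy as np
from sklearn.metrics import (average_precision_score,
                             precision_recall_curve, roc_auc_score)

def pixel_metrics(gt_masks, pred_maps):
    """gt_masks: binary ground-truth masks; pred_maps: anomaly scores in [0, 1]."""
    y_true = np.asarray(gt_masks).ravel().astype(int)
    y_score = np.asarray(pred_maps).ravel()
    auroc = roc_auc_score(y_true, y_score)
    ap = average_precision_score(y_true, y_score)
    prec, rec, _ = precision_recall_curve(y_true, y_score)
    f1_max = np.max(2 * prec * rec / np.clip(prec + rec, 1e-8, None))
    return auroc, ap, f1_max
```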
It is worth noting that due to the absence of the open-source code for PRN, we utilize the data provided in its paper. The comparison results on pixel-level AUROC and AP are presented in Tab. 3. It can be seen that although our model is only a simple U-Net, with the help of our generated anomaly data, it has a good performance in anomaly localization with the highest AP of 81.4% and AUROC of 99.1%, indicating the profound significance of our generated data for downstream anomaly inspection tasks. Ablation Study We evaluate the effectiveness of our components: spatial anomaly embedding (SAE), masked diffusion loss (Masked L), and adaptive attention re-weighting mechanism (AAR). Not that the models without SAE employ only an anomaly embedding trained by textual inversion. We train 5 models: 1) with none of these components; 2) only SAE; 3) SAE + masked L; 4) masked L + AAR and 5) the full model (ours). We employ these models to generate 1000 anomalous image-mask pairs and train an U-Net for anomaly localization. We compare the pixel-level localization results in Tab. 4. It demonstrates that the omission of any of the proposed modules leads to a noticeable decline in the model’s performance on anomaly localization, which validates the efficacy of the proposed modules. For more experiments, please refer to the supplementary material (Hu et al. 2023b). Conclusion In this paper, we propose Anomalydiffusion, a novel anomaly generation model which generates anomalous image-mask pairs. We disentangle anomaly information into anomaly appearance and location information represented by anomaly embedding and spatial embedding in the textual space of LDM. Moreover, we also introduce an adaptive attention reweighting mechanism, which helps our model focus more on the areas with less noticeable generated anomalies, thus improving the alignment between the generated anomalies and masks. Extensive experiments show that our model outperforms the existing anomaly generation methods and our generated anomaly data effectively improves the performance of the downstream anomaly inspection tasks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8532 Acknowledgments This work was supported by National Natural Science Foundation of China (62302297, 72192821, 62272447), Young Elite Scientists Sponsorship Program by CAST (2022QNRC001), Shanghai Sailing Program (22YF1420300), Beijing Natural Science Foundation (L222117), the Fundamental Research Funds for the Central Universities (YG2023QNB17), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Science and Technology Commission (21511101200), CCF-Tencent Open Research Fund (RAGR20220121). References Avrahami, O.; Lischinski, D.; and Fried, O. 2022. Blended diffusion for text-driven editing of natural images. In CVPR, 18208–18218. Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. MVTec AD–A comprehensive real-world dataset for unsupervised anomaly detection. In CVPR, 9592–9600. Bi´nkowski, M.; Sutherland, D. J.; Arbel, M.; and Gretton, A. 2018. Demystifying mmd gans. arXiv preprint arXiv:1801.01401. Cao, Y.; Wan, Q.; Shen, W.; and Gao, L. 2022. Informative knowledge distillation for image anomaly segmentation. Knowledge-Based Systems, 248: 108846. Cao, Y.; Xu, X.; Sun, C.; Cheng, Y.; Du, Z.; Gao, L.; and Shen, W. 2023. Segment Any Anomaly without Training via Hybrid Prompt Regularization. arXiv preprint arXiv:2305.10724. Chen, X.; Han, Y.; and Zhang, J. 2023. 
A Zero-/FewShot Anomaly Classification and Segmentation Method for CVPR 2023 VAND Workshop Challenge Tracks 1&2: 1st Place on Zero-shot AD and 4th Place on Few-shot AD. arXiv preprint arXiv:2305.17382. Chen, X.; Zhang, J.; Tian, G.; He, H.; Zhang, W.; Wang, Y.; Wang, C.; Wu, Y.; and Liu, Y. 2023. CLIP-AD: A LanguageGuided Staged Dual-Path Model for Zero-shot Anomaly Detection. arXiv preprint arXiv:2311.00453. Deng, H.; and Li, X. 2022. Anomaly detection via reverse distillation from one-class embedding. In CVPR, 9737– 9746. Ding, C.; Pang, G.; and Shen, C. 2022. Catching both gray and black swans: Open-set supervised anomaly detection. In CVPR, 7388–7398. Duan, Y.; Hong, Y.; Niu, L.; and Zhang, L. 2023. Few-Shot Defect Image Generation via Defect-Aware Feature Manipulation. In AAAI, volume 37, 571–578. Gal, R.; Alaluf, Y.; Atzmon, Y.; Patashnik, O.; Bermano, A. H.; Chechik, G.; and Cohen-Or, D. 2022. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. NIPS, 27. Gu, Z.; Liu, L.; Chen, X.; Yi, R.; Zhang, J.; Wang, Y.; Wang, C.; Shu, A.; Jiang, G.; and Ma, L. 2023. Remembering Normality: Memory-guided Knowledge Distillation for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16401–16409. Gudovskiy, D.; Ishizaka, S.; and Kozuka, K. 2022. Cflowad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 98–107. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770–778. Hertz, A.; Mokady, R.; Tenenbaum, J.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2022. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. NIPS, 30. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851. Hu, T.; Zhang, J.; Liu, L.; Yi, R.; Kou, S.; Zhu, H.; Chen, X.; Wang, Y.; Wang, C.; and Ma, L. 2023a. Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption. In ICCV, 2406–2415. Hu, T.; Zhang, J.; Yi, R.; Du, Y.; Chen, X.; Liu, L.; Wang, Y.; and Wang, C. 2023b. AnomalyDiffusion: FewShot Anomaly Image Generation with Diffusion Model. arXiv:2312.05767. Huang, C.; Guan, H.; Jiang, A.; Zhang, Y.; Spratling, M.; and Wang, Y.-F. 2022. Registration based few-shot anomaly detection. In ECCV, 303–319. Springer. Jeong, J.; Zou, Y.; Kim, T.; Zhang, D.; Ravichandran, A.; and Dabeer, O. 2023. Winclip: Zero-/few-shot anomaly classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19606–19616. Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2020. Analyzing and improving the image quality of stylegan. In CVPR, 8110–8119. Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Lee, S.; Lee, S.; and Song, B. C. 2022. Cfa: Coupledhypersphere-based feature adaptation for target-oriented anomaly localization. 
IEEE Access, 10: 78446–78454. Li, C.-L.; Sohn, K.; Yoon, J.; and Pfister, T. 2021. Cutpaste: Self-supervised learning for anomaly detection and localization. In CVPR, 9664–9674. Li, Y.; Zhang, R.; Lu, J.; and Shechtman, E. 2020. Few-shot image generation with elastic weight consolidation. arXiv preprint arXiv:2012.02780. Liang, Y.; Zhang, J.; Zhao, S.; Wu, R.; Liu, Y.; and Pan, S. 2023. Omni-frequency channel-selection representations The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8533 for unsupervised anomaly detection. IEEE Transactions on Image Processing. Lin, D.; Cao, Y.; Zhu, W.; and Li, Y. 2021. Few-shot defect segmentation leveraging abundant defect-free training samples through normal background regularization and cropand-paste operation. In ICME, 1–6. IEEE. Lin, T.-Y.; Doll´ar, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017. Feature pyramid networks for object detection. In CVPR, 2117–2125. Mo, S.; Cho, M.; and Shin, J. 2020. Freeze the discriminator: a simple baseline for fine-tuning gans. arXiv preprint arXiv:2002.10964. Nichol, A. Q.; and Dhariwal, P. 2021. Improved denoising diffusion probabilistic models. In ICML, 8162–8171. PMLR. Niu, S.; Li, B.; Wang, X.; and Lin, H. 2020. Defect image sample generation with GAN for improving defect recognition. IEEE Transactions on Automation Science and Engineering, 17(3): 1611–1622. Ojha, U.; Li, Y.; Lu, J.; Efros, A. A.; Lee, Y. J.; Shechtman, E.; and Zhang, R. 2021. Few-shot image generation via cross-domain correspondence. In CVPR, 10743–10752. Pang, G.; Ding, C.; Shen, C.; and Hengel, A. v. d. 2021. Explainable deep few-shot anomaly detection with deviation networks. arXiv preprint arXiv:2108.00462. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In CVPR, 10684–10695. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Springer. Roth, K.; Pemula, L.; Zepeda, J.; Sch¨olkopf, B.; Brox, T.; and Gehler, P. 2022. Towards total recall in industrial anomaly detection. In CVPR, 14318–14328. Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In CVPR, 22500–22510. Schlegl, T.; Seeb¨ock, P.; Waldstein, S. M.; Langs, G.; and Schmidt-Erfurth, U. 2019. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Medical image analysis, 54: 30–44. Schlegl, T.; Seeb¨ock, P.; Waldstein, S. M.; Schmidt-Erfurth, U.; and Langs, G. 2017. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, 146–157. Springer. Schuhmann, C.; Vencu, R.; Beaumont, R.; Kaczmarczyk, R.; Mullis, C.; Katta, A.; Coombes, T.; Jitsev, J.; and Komatsuzaki, A. 2021. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114. Tran, N.-T.; Tran, V.-H.; Nguyen, N.-B.; Nguyen, T.-K.; and Cheung, N.-M. 2021. On data augmentation for gan training. IEEE Transactions on Image Processing, 30: 1882–1897. Wang, Y.; Peng, J.; Zhang, J.; Yi, R.; Wang, Y.; and Wang, C. 2023. Multimodal Industrial Anomaly Detection via Hybrid Fusion. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8032–8041. Wang, Y.; Yi, R.; Tai, Y.; Wang, C.; and Ma, L. 2022. Ctlgan: Few-shot artistic portraits generation with contrastive transfer learning. arXiv preprint arXiv:2203.08612. Zavrtanik, V.; Kristan, M.; and Skoˇcaj, D. 2021. Draem-a discriminatively trained reconstruction embedding for surface anomaly detection. In ICCV, 8330–8339. Zhang, G.; Cui, K.; Hung, T.-Y.; and Lu, S. 2021. DefectGAN: High-fidelity defect synthesis for automated defect inspection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2524–2534. Zhang, H.; Wu, Z.; Wang, Z.; Chen, Z.; and Jiang, Y.-G. 2023a. Prototypical residual networks for anomaly detection and localization. In CVPR, 16281–16291. Zhang, J.; Chen, X.; Xue, Z.; Wang, Y.; Wang, C.; and Liu, Y. 2023b. Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection. arXiv preprint arXiv:2311.02612. Zhao, S.; Liu, Z.; Lin, J.; Zhu, J.-Y.; and Han, S. 2020. Differentiable augmentation for data-efficient gan training. NIPS, 33: 7559–7570. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8534
2024
948
18,793
Learning Time Slot Preferences via Mobility Tree for Next POI Recommendation Tianhao Huang1*, Xuan Pan12*, Xiangrui Cai134†, Ying Zhang13, Xiaojie Yuan123 1College of Computer Science, Nankai University 2Tianjin Key Laboratory of Network and Data Security Technology, Tianjin, China 3Key Laboratory of Data and Intelligent System Security, Ministry of Education, China 4Science and Technology on Communication Networks Laboratory, Shijiazhuang, China [email protected], [email protected], {caixr, yingzhang, yuanxj}@nankai.edu.cn Abstract Next Point-of-Interests (POIs) recommendation task aims to provide a dynamic ranking of POIs based on users’ current check-in trajectories. The recommendation performance of this task is contingent upon a comprehensive understanding of users’ personalized behavioral patterns through Locationbased Social Networks (LBSNs) data. While prior studies have adeptly captured sequential patterns and transitional relationships within users’ check-in trajectories, a noticeable gap persists in devising a mechanism for discerning specialized behavioral patterns during distinct time slots, such as noon, afternoon, or evening. In this paper, we introduce an innovative data structure termed the “Mobility Tree”, tailored for hierarchically describing users’ check-in records. The Mobility Tree encompasses multi-granularity time slot nodes to learn user preferences across varying temporal periods. Meanwhile, we propose the Mobility Tree Network (MTNet), a multitask framework for personalized preference learning based on Mobility Trees. We develop a four-step node interaction operation to propagate feature information from the leaf nodes to the root node. Additionally, we adopt a multitask training strategy to push the model towards learning a robust representation. The comprehensive experimental results demonstrate the superiority of MTNet over ten stateof-the-art next POI recommendation models across three realworld LBSN datasets, substantiating the efficacy of time slot preference learning facilitated by Mobility Tree. Introduction The advent of location-based social networks (LBSNs) facilitate users to share their geographical locations. The huge amount of geographical data opens up new opportunities to learn user preferences and recommend Point-of-Interests (POIs) to users. Next POI recommendation task aims to provide a ranked list of POIs that users are most likely to visit in the future, by discerning the dynamic preferences of users (Manotumruksa, Macdonald, and Ounis 2018) through their current check-in trajectories. It is beneficial for both user time scheduling and business expansion (Liu et al. 2017). Previous studies on next POI recommendation organize a trajectory of a user by a sequence of check-ins, where *These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Mary … … Mall Aug.1 14:50 Restaurant 1 Aug.1 18:30 Supermarket Aug.3 17:10 Cinema Aug.4 16:10 Restaurant 2 Aug.4 19:00 … … … … … … 14:00 ~ 18:00 18:00 ~ 22:00 Peter … … Supermarket Aug.2 12:30 Restaurant 2 Aug.2 13:10 Bookshop 1 Aug.2 14:30 Restaurant 2 Aug.6 12:50 Bookshop 1 Aug.6 16:30 … … 10:00 ~ 14:00 14:00 ~ 18:00 Bookshop 2 Aug.2 15:50 … … Figure 1: Mary and Peter exhibit varying preferences that evolve over distinct periods in one day. 
For example, during the period of “14:00∼18:00”, Mary prefers to go shopping in a mall or watch a movie in a cinema, whereas Peter likes to visit several bookstores. the items are arranged in chronological order. This approach promotes the comprehension of sequential patterns and transitional dependencies, enabling the capture of users’ shortterm preferences (Feng et al. 2018; Manotumruksa, Macdonald, and Ounis 2018; Huang et al. 2019; Guo et al. 2020). Some attempts such as LSTPM (Sun et al. 2020) and STGN (Zhao et al. 2022) have boosted recommendation performance by integrating long-term and short-term user preference modeling strategies. To further enhance checkin communication, STAN (Luo, Liu, and Liu 2021) adopted the self-attention mechanism to facilitate interactions among non-consecutive check-ins. Recently, more and more studies (Rao et al. 2022; Yang, Liu, and Zhao 2022; Lim et al. 2022; Wang et al. 2021) have leveraged graph structure in check-in descriptions to exploit collaborative signals among different users and capture global transition relationships. While prior research has made considerable advancements in modeling sequential patterns and transitional relationships within trajectories, they primarily derive user preferences based solely on the sequencing of check-ins, thereby ignoring the personalized preferences across discrete time slots. We observe that users frequent specific POIs during a relatively fixed period, rather than a specific time point. As illustrated in Figure 1, wherein Mary’s propensity to visit a mall or a supermarket manifests not as a rigid adherence to a precise time point but rather within the broader temporal The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8535 window from 14:00 to 18:00. However, Peter’s inclination toward several bookshops is observed within the same 14:00 to 18:00 window. Notably, Mary’s preference for shopping at malls does not appear as pronounced during other temporal periods. This lends credence to the assumption that individualized preferences can be delineated even further at the temporal granularity level. Moreover, user behavior patterns are not uniformly consistent throughout each day. For example, Mary’s trajectory on a leisure day encompassing “Cinema – Mall – Supermarket – Restaurant”, which contrasts with her routine on a working day, organized by “Company – Caf´e – Supermarket – Apartment”. In the event that the last check-in within a given trajectory is visiting a supermarket, it is observed that divergent next POIs emerge across distinct days. On a rest day, it is a “Restaurant”, but on a workday, it is the “Apartment”. To summarize, the temporal variability underscores the necessity of subdividing user preferences beyond daily granularity and extending into specific temporal segments. Regrettably, existing methodologies lack any endeavors toward segregating user preferences across divergent time slots with varying granularities when modeling the check-in trajectories. In this paper, we propose a hierarchical structure, referred to as the “Mobility Tree”, as an innovative approach to encapsulate users’ check-in records. Different from the prevalent sequential-based or graph-based structures, the Mobility Trees are constructed by integrating multi-granularity time slot nodes. Each time slot node is tailored to aggregate check-in occurrences within a specific time period. 
This distinctive construction enables the discrimination of user preferences across different temporal phases, thus enhancing the holistic understanding of users’ varied behavioral patterns. In alignment with the formulation of the Mobility Tree concept, we accordingly introduce the Mobility Tree Network (MTNet) to learn users’ dynamic preferences for the next POI recommendation task. To adapt the specific topological structure of the Mobility Tree, we devise a four-step node interaction operation in MTNet for facilitating information propagation from raw check-in records towards the hierarchical time slot nodes. We adopt a multitask training strategy to enhance the representation ability and robustness of the Mobility Trees. This strategy orchestrates a collaborative prediction for the next POI and the corresponding contextual information. The weights of multitasks are adjusted by a self-adaptive approach. In summary, the main contributions of this paper are as follows: • We introduce a novel hierarchical check-in description method named Mobility Tree. Concretely, the trees consist of multi-granularity time slot nodes to capture users’ distinct preferences across diverse periods that have been ignored in previous works. • We propose Mobility Tree Network (MTNet) to grasp users’ dynamic preferences in the next POI recommendation task. In particular, we devise a four-step node interaction operation for message passing in Mobility Trees and adopt a multitask training strategy to push towards learning a robust representation. • Extensive experiments are conducted on three real-world LBSN datasets. The results demonstrate the superiority of MTNet when compared to ten state-of-the-art baselines. We also provide in-depth analysis of the proposed model via ablation study and visualization analysis. Related Work POI Recommendation is a popular service in LBSNs, and the next POI recommendation is a typical and well-studied branch (S´anchez and Bellog´ın 2022). The fundamental assumption in this problem is that users’ future movements and activities are strongly influenced by their latest check-in behaviors (Zhang et al. 2022). Currently, the prevailing paradigm for capturing user preferences involves encoding user check-ins based on sequential-based models, such as Recurrent Neural Networks (RNNs). Several extensions of the traditional RNN model have been proposed to enhance the representation of check-in features and user preference comprehension. For example, ST-RNN (Liu et al. 2016) incorporates spatialtemporal transition matrices within the recurrent structure to capture the check-in features. CARA (Manotumruksa, Macdonald, and Ounis 2018) improves upon gating mechanisms that control the influence of contextual information on hidden states between recurrent units. LSTPM (Sun et al. 2020) and STGN (Zhao et al. 2022) focus on modeling longterm and short-term preferences within LSTM-based architectures to enhance the correlation between check-ins. Recently, attention mechanisms have been employed in various sequential-based models to capture high-order dependencies among check-ins. For instance, ARNN (Guo et al. 2020) integrates RNNs with attention layers to select highly salient neighbors that are correlated with the current check-in at each time step. ATST-LSTM (Huang et al. 2019) introduces an attention-based spatiotemporal LSTM network that uses contextual information to highlight relevant historical check-ins in a sequence. In recent years, Transformers (Vaswani et al. 
2017) have gained popularity across multiple domains due to their effectiveness in modeling long-range dependencies. GETNext (Yang, Liu, and Zhao 2022) adopts a transformer framework to integrate multiple elements into the preference representation. GeoSAN (Lian et al. 2020) proposes a self-attention-based geography encoder to capture spatial proximity between nearby locations. Some studies construct graphs to exploit global transition patterns for check-in interactions across different trajectories, such as GETNext (Yang, Liu, and Zhao 2022), HMTGRN (Lim et al. 2022), ASGNN (Wang et al. 2021), and Graph-Flashback (Rao et al. 2022). However, in the aforementioned models, the check-in context, such as geography and category information, is only considered as POI features and not decoupled as part of the user’s mobility behavior within the check-in sequence. In this paper, we propose to encode check-in context based on tree structures instead of sequences and graphs to provide a more detailed description of user mobility behavior in the real world. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8536 Problem Formulation In this section, we introduce the related definition and the problem formulation of next POI recommendation. We denote the set of users as U =  u1, u2, . . . , u|U| , the set of POIs as L =  l1, l2, . . . , l|L| , and the set of timestamps as T =  t1, t2, . . . , t|T | . Additionally, we define the list of POI categories (e.g., cafe or restaurant) as set C =  c1, c2, . . . , c|C| . Here, each POI li ∈L encompasses both geographic information (latitude and longitude of the POI) and categorical information, which is represented by a tuple ⟨lat, lon, cat⟩. On this basis, we put forth several definitions for the problem of next POI recommendation. DEFINITION 1: (Check-in) A check-in is denoted as a tuple s = ⟨u, l, t⟩∈U × L × T , which indicates that user u visited venue l at timestamp t. DEFINITION 2: (Trajectory) For each user u ∈U, we split all check-ins of the user u to a trajectory sequence denoted by Su = {Su 1 , Su 2 , . . . , Su n}. Each trajectory Su m ∈Su comprises a sequence of check-ins visited by the user u in a chronological order, i.e., Su m = {su 1, su 2, . . . , su k} and k is the index of the last check-in of trajectory Su m. DEFINITION 3: (Next POI recommendation) Given the current trajectory of user u, i.e. Su m = {su 1, su 2, . . . , su k}, the objective of next POI recommendation is to recommend top-k POIs that the user is most likely to visit in the future. Specifically, the recommendation model generates a list of probabilities for all candidates P =  p1, p2, . . . , p|L| , and then return the top-k POIs with the highest probabilities from the list P for recommendation. Method In this section, we introduce the proposed next POI recommendation model, MTNet. It consists of four modules corresponding to the Mobility Tree construction, node initialization, node information interaction, and multitask learning. Mobility Tree Construction As discussed in the introduction, the purpose of building Mobility Trees is to especially represent and learn users’ personalized preferences hidden in varied time slots, including a whole day and specific periods each day. As shown in Figure 2, given a trajectory of user u, i.e., Su m = {su 1, su 2, . . . , su k}, we can illustrate the corresponding Mobility Tree TR. It consists of the nodes representing the multigranularity time slots and the raw check-ins. 
More specifically, the blue nodes are the coarse-grained time slot nodes, which integrate the trajectory of a user in one day, called the day nodes. The child nodes of the day nodes, that is, the yellow ones, are the fine-grained time slot nodes, describing the trajectory of a user in a certain period of time, called the period nodes. The child nodes of the period nodes are the raw check-in nodes, representing the check-in information from the LBSN data. All the raw check-in nodes are the leaves of the Mobility Trees since they are no longer subdivided. Taking the trajectory in Figure 2 as an example, the user has a trajectory across two days. {s1, s2, . . . , s5} are the check-ins from Aug. 1, and {s6, s7, s8} are the check-ins from Aug. 2. If we divide the day into four periods, each day has the periods of "0:00-6:00," "6:00-12:00," "12:00-18:00," and "18:00-24:00". Therefore, we can divide the check-ins from the two days into four time periods. Then we can construct the Mobility Tree according to the arrangement of the time periods as the time slot nodes and the raw check-in nodes, as shown in the figure.

[Figure 2: Illustration of a Mobility Tree construction. The check-ins {s1, ..., s8} of a two-day trajectory are attached to period nodes (time slot nodes), which in turn hang under the day nodes Day n-1 (Aug 1) and Day n (Aug 2).]

Node Initialization
We employ the embedding layers to encode the POI, user, geographical location, category, and time slot into latent representations as $e_l \in \mathbb{R}^d$, $e_u \in \mathbb{R}^d$, $e_g \in \mathbb{R}^d$, $e_c \in \mathbb{R}^d$, $e_t \in \mathbb{R}^{4d}$, respectively. Here, for the timestamp of each check-in record, we divide one day into 24 time slots, corresponding to 24 hours, which helps better represent the periodicity of check-in records. Moreover, the location information of the POIs is converted to the areas clustered by the k-means method. We denote |G| as the cluster number. Then, we assign areas with unique IDs and allocate the IDs to all the POIs' locations based on the area they belong to. For each trajectory S, we initialize the raw check-in nodes as the concatenation of the embeddings, then we attach the time slot embedding as follows:

$e_s = [e_u; e_l; e_c; e_g] + \gamma e_t$, (1)

where $[;]$ denotes the concatenation operation, $\gamma$ controls the influence of the visiting time, and $e_s$ is the embedding of the raw check-in node of $s \in S$.

Node Information Interaction
Before providing the two-stage message-passing mechanism, we first introduce two basic operations, whose network structures are shown in Figure 3.

[Figure 3: Network structures of Intra-hierarchy Communication (IAC) and Inter-hierarchy Communication (IRC).]

The first one is Intra-hierarchy Communication, IAC for short. It is used to realize the information exchange among the child nodes belonging to the same parent node. Given the child nodes E, we adopt a self-attention encoder (Vaswani et al. 2017) that includes
multi-head self-attention layers to transform each $e \in E$ as follows:

$e^{(h)}_{ij} = \frac{e_i W^{(h)}_Q (e_j W^{(h)}_K)^\top}{\sqrt{d_z/H}}, \quad \alpha^{(h)}_{ij} = \frac{\exp(e^{(h)}_{ij})}{\sum_{k=1}^{n} \exp(e^{(h)}_{ik})}, \quad z^{(h)}_i = \sum_{j=1}^{n} \alpha^{(h)}_{ij} (e_j W^{(h)}_V),$
$z_i = \mathrm{Concat}(z^{(1)}_i, \cdots, z^{(H)}_i), \quad \tilde{e}_i = \mathrm{LayerNorm}(e_i + z_i), \quad e'_i = \mathrm{LayerNorm}(\tilde{e}_i + \mathrm{FC}(\mathrm{ReLU}(\mathrm{FC}(\tilde{e}_i)))),$ (2)

where each self-attention layer has a fully-connected (FC) layer and a normalization layer (LayerNorm), $H$ is the head number, $1 \le h \le H$, and $W^{(h)}_Q, W^{(h)}_K, W^{(h)}_V \in \mathbb{R}^{d_x \times (d_x/H)}$ are learnable parameter matrices for "query", "key" and "value". Each head of each self-attention layer provides a learned relation between all the inputs $e \in E$; the relation strength is determined by the attention weights $\alpha^{(h)}_{ij}$.

The second operation is Inter-hierarchy Communication, IRC for short. It is used to realize the message-passing from the child nodes to the parent node. Given a node $e$, its $\ell$-th child node's hidden state and memory cell are denoted as $h_\ell$ and $c_\ell$. We follow the N-ary structure of the TreeLSTM network (Tai, Socher, and Manning 2015) to perform the hidden state transition among the nodes as follows:

$i = \sigma(W^{(i)} e + \sum_{\ell=1}^{N} U^{(i)}_\ell h_\ell + b^{(i)}), \quad f_k = \sigma(W^{(f)} e + \sum_{\ell=1}^{N} U^{(f)}_{k\ell} h_\ell + b^{(f)}), \quad o = \sigma(W^{(o)} e + \sum_{\ell=1}^{N} U^{(o)}_\ell h_\ell + b^{(o)}),$
$u = \tanh(W^{(u)} e + \sum_{\ell=1}^{N} U^{(u)}_\ell h_\ell + b^{(u)}), \quad c = i \odot u + \sum_{\ell=1}^{N} f_\ell \odot c_\ell, \quad e'' = o \odot \tanh(c),$ (3)

where $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent activation functions, respectively; $\odot$ is the Hadamard product; $W^{(i)}, W^{(f)}, W^{(o)}, W^{(u)}, U^{(i)}_\ell, U^{(f)}_{k\ell}, U^{(o)}_\ell$, and $U^{(u)}_\ell$ are learnable parameter matrices.

[Figure 4: Four-step node interaction. Step 1: IAC for raw check-in nodes; Step 2: IRC from raw check-in nodes to period nodes; Step 3: IAC for period nodes; Step 4: IRC from period nodes to day nodes.]

The node information interaction consists of four steps, as shown in Figure 4. Step 1. The raw check-in nodes belonging to the same period node exchange information using IAC. This step allows the check-in records to fully capture the features of other check-ins in the same period. Step 2. The period nodes process IRC to aggregate the check-in information from their raw check-in nodes. Step 3. The period nodes belonging to the same day node exchange information using IAC. This step makes the period preference representations obtain the features from other periods. Step 4. The day nodes process IRC to aggregate the period preferences from their period nodes. After the four-step node interaction, we obtain the representation of the root node, denoted as $e^{(k)}$.

Multitask Learning
To push the model towards learning a robust representation, we design a multitask training framework to simultaneously predict the next POI, geographical cluster, and category:

$\hat{y}_l = e^{(k)} W_l + b_l, \quad \hat{y}_g = e^{(k)} W_g + b_g, \quad \hat{y}_c = e^{(k)} W_c + b_c,$ (4)

where $W_l \in \mathbb{R}^{d \times |L|}$, $W_g \in \mathbb{R}^{d \times |G|}$, $W_c \in \mathbb{R}^{d \times |C|}$, and $b_l$, $b_g$, $b_c$ are learnable parameters of dense layers. Inspired by (Kendall, Gal, and Cipolla 2018), our loss function can be represented as follows:

$\mathcal{L}_{final} = \frac{1}{2\sigma_l^2}\mathcal{L}_l + \frac{1}{2\sigma_g^2}\mathcal{L}_g + \frac{1}{2\sigma_c^2}\mathcal{L}_c + \log \sigma_l \sigma_g \sigma_c,$ (5)

where $\sigma_l$, $\sigma_g$, and $\sigma_c$ are learnable parameters, the last term serves as a regularization term for denoising, and $\mathcal{L}_l$, $\mathcal{L}_g$, and $\mathcal{L}_c$ are the cross-entropy losses for the next POI, geographical cluster, and category, respectively.
They are formulated as:

$\mathcal{L}_l = -\sum_{i=1}^{|L|} y^i_l \log \hat{y}^i_l, \quad \mathcal{L}_g = -\sum_{i=1}^{|G|} y^i_g \log \hat{y}^i_g, \quad \mathcal{L}_c = -\sum_{i=1}^{|C|} y^i_c \log \hat{y}^i_c,$ (6)

where $y^i_*$ refers to the $i$-th item of the vector $y_*$. In the recommendation stage, the preference score of the next POI aggregates the prediction scores from the current day node, period node, and last raw check-in node, representing the preferences from the transition relationship, the current day, and the current period, respectively. The recommendation score is calculated as follows:

$\hat{y}^{rec}_l = \eta \hat{y}^{day}_l + \delta \hat{y}^{period}_l + \hat{y}^{check\text{-}in}_l,$ (7)

where $\eta$ and $\delta$ are used to control the influence of the day and period preference scores.

Experiments
In this section, we conduct a series of experiments to thoroughly evaluate our proposed model MTNet. First, we introduce the datasets and the state-of-the-art baselines. Then, we compare the performance of MTNet with the baselines to show the superiority of our method. Furthermore, we conduct a set of comparative experiments to discuss the impact of the time slot factor on our model. We also conduct an ablation experiment to verify the effectiveness of each component of our model. Additionally, we visualize the representations of users and POIs to demonstrate the fine-grained user preferences learned by MTNet.

Experimental Setup
Datasets. We conduct experiments using three widely used datasets acquired from two LBSN platforms, namely Foursquare and Gowalla. Specifically, for Foursquare, we use data collected separately in Tokyo (Yang et al. 2015) and New York City (Yang et al. 2015) during the period between 12th April 2012 and 16th February 2013. For Gowalla, we utilize data collected in California and Nevada (Cho, Myers, and Leskovec 2011) spanning from February 2009 to October 2010. In these three datasets, each row corresponds to a check-in record, encompassing user ID, POI ID, POI category, check-in time, and GPS coordinates. Following a previous study (Yang, Liu, and Zhao 2022), we exclude inactive users who have fewer than 10 check-in records. We also eliminate unpopular POIs that are visited fewer than 10 times by the remaining users. Considering that a substantial time interval between two records weakens the temporal dependency between them, we divide the complete check-in sequences of users into trajectories by 24 hours. Additionally, since trajectories must be converted into Mobility Trees, we discard all trajectories with a length of less than 2. The statistical details of the processed datasets are presented in Table 1.

dataset   user    POI     check-in   trajectory
NYC       1,075   5,099   104,074    14,160
TKY       2,281   7,844   361,430    44,692
CA        4,318   9,923   250,780    32,920
Table 1: Statistics of the three datasets.

We then partition the datasets into training, validation, and test sets in chronological order. The training set, consisting of the initial 80% of check-ins, is used to train the model. The middle 10% of check-ins form the validation set, which is utilized for selecting the best-performing model. Finally, we evaluate the model on the test set that consists of the last 10% of check-ins.

Baseline Models. We compare MTNet with ten state-of-the-art methods: 1) FPMC (Rendle, Freudenthaler, and Schmidt-Thieme 2010): integrates Matrix Factorization and Markov Chains; 2) PRME (Feng et al. 2015): introduces a pair-wise metric embedding method; 3) LSTM (Hochreiter and Schmidhuber 1997): is a modified RNN architecture; 4) ST-RNN (Liu et al. 2016): extends RNN by considering time and distance transition matrices; 5) STGN (Zhao et al.
2022): enhances LSTM by adding spatiotemporal gates; 6) STGCN (Zhao et al. 2022): enhances STGN by using coupled input and forget gates; 7) PLSPL (Wu et al. 2022): employs attention mechanism and utilizes LSTM; 8) CFPRec (Zhang et al. 2022): models users’ past, present and future preferences; 9) STAN (Luo, Liu, and Liu 2021): is a spatiotemporal bi-attention model; 10) GETNext (Yang, Liu, and Zhao 2022): introduces a trajectory flow map to capture spatial transition information. Evaluation Metrics We utilize two commonly employed evaluation metrics, namely average Accuracy@K (Acc@K) and Mean Reciprocal Rank (MRR), to evaluate the effectiveness of the recommendation model. Both Acc@K and MRR serve as positive indicators. Experiment Settings We develop MTNet1 based on PyTorch and conduct experiments on hardware with AMD Ryzen 7 4800H CPU and NVIDIA GeForce RTX 2060 GPU. We keep the hyper-parameters consistent on NYC, TKY and CA. We set the number of time slots to 12 for TKY and CA, and 4 for NYC, according to the performance on the validation set. The user and POI embedding dimensions are both set to 128, while the category and geography embedding dimensions are set to 32. The hidden size for TreeLSTM module is 512. We employ the Adam (Kingma and Ba 2014) optimizer with an initial learning rate of 1 × 10−3 and a weight decay rate of 1 × 10−4. We set the influence of day node η = 1 and period node δ = 1. We generate 60 clusters for geographical information representation. Moreover, we utilize a step-by-step learning rate scheduler with a step size of 6 and a decay factor of 0.9. For the Transformer component, we incorporate 2 transformer layers, each consisting of 2 attention heads and a dimension of 1024. Additionally, we randomly drop the embeddings and parameters with a dropout rate of 0.4 and 0.6 respectively. Finally, we run each 1https://github.com/Skyyyy0920/MTNet The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8539 Models NYC TKY CA Acc@1 Acc@5 Acc@10 MRR Acc@1 Acc@5 Acc@10 MRR Acc@1 Acc@5 Acc@10 MRR FPMC 0.1003 0.2126 0.2970 0.1701 0.0814 0.2045 0.2746 0.1344 0.0383 0.0702 0.1159 0.0911 PRME 0.1159 0.2236 0.3105 0.1712 0.1052 0.2728 0.2944 0.1786 0.0521 0.1034 0.1425 0.1002 LSTM 0.1305 0.2719 0.3283 0.1857 0.1335 0.2728 0.3277 0.1834 0.0665 0.1306 0.1784 0.1201 ST-RNN 0.1483 0.2923 0.3622 0.2198 0.1409 0.3022 0.3577 0.2212 0.0799 0.1423 0.1940 0.1429 STGN 0.1716 0.3381 0.4122 0.2598 0.1689 0.3391 0.3848 0.2422 0.0810 0.1842 0.2579 0.1675 STGCN 0.1799 0.3425 0.4279 0.2788 0.1716 0.3453 0.3927 0.2504 0.0961 0.2097 0.2613 0.1712 PLSPL 0.1917 0.3678 0.4523 0.2806 0.1889 0.3523 0.4150 0.2542 0.1072 0.2278 0.2995 0.1847 CFPRec 0.1692 0.3867 0.4894 0.2680 0.2052 0.4028 0.4769 0.2963 0.0473 0.1420 0.1874 0.0911 STAN 0.2231 0.4582 0.5734 0.3253 0.1963 0.3798 0.4464 0.2852 0.1104 0.2348 0.3018 0.1869 GETNext 0.2435 0.5089 0.6143 0.3621 0.2254 0.4417 0.5287 0.3262 0.1357 0.2852 0.3590 0.2103 MTNet 0.2620 0.5381 0.6321 0.3855 0.2575 0.4977 0.5848 0.3659 0.1453 0.3419 0.4163 0.2367 Impro 7.60% 5.74% 2.90% 6.46% 14.24% 12.68% 10.61% 12.17% 9.45% 19.88% 15.96% 12.55% Table 2: Performance of the model on the NYC, TKY and CA datasets compared based on the Accuracy (Acc) and Mean Reciprocal Rank (MRR) metrics. We present the results in ascending order based on the model’s performance, highlighting the best results in bold, and underlining the second best results. model for a total of 50 epochs with a batch size of 1024. 
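For clarity, the two evaluation metrics above, Acc@K and MRR, can be computed from the model's ranked POI scores as in the following sketch. This reflects the standard definitions of the metrics rather than the authors' exact evaluation script, and the function names are ours.

```python
# Illustrative computation of Acc@K and MRR from per-sample POI scores.
import numpy as np

def acc_at_k(scores, targets, k):
    """scores: (n_samples, n_pois); targets: (n_samples,) ground-truth POI ids."""
    topk = np.argsort(-scores, axis=1)[:, :k]           # top-k POI ids per sample
    return np.mean([t in row for t, row in zip(targets, topk)])

def mrr(scores, targets):
    order = np.argsort(-scores, axis=1)                 # full descending ranking
    ranks = np.array([np.where(row == t)[0][0] + 1      # 1-based rank of target
                      for t, row in zip(targets, order)])
    return np.mean(1.0 / ranks)
```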
Results and Analysis We compare the performance of our proposed model and the baselines on the three datasets. We take Acc@1, Acc@5, Acc@10 and MRR as the evaluation metrics. For all models, we use the same data preprocessing method following GETNext (Yang, Liu, and Zhao 2022). The experimental results presentd in Table 2 show that our proposed model MTNet outperformes all other state-of-the-art baselines across all datasets in terms of all evaluation metrics. We find that all models perform the best on the NYC dataset, followed by TKY, while the performance on the CA dataset was the worst. The primary reason for this could be that the NYC and TKY datasets collect check-in records of a limited number of users in a small geographical area. These two datasets have fewer POIs and a smaller user population. In contrast, the CA dataset is collected across the larger regions of California and Nevada, with the most users (4318, four times that of NYC) and POIs (9923, nearly twice as many as in NYC). Therefore, CA presents a challenge for all recommendation models. The results show that MTNet performs exceptionally well on the CA dataset. It averages a superior performance over the second-ranked model, GETNext, by 15.26% in terms of Acc, and it exhibits a notable improvement of 12.55% in terms of MRR. The improvement once again demonstrates that MTNet effectively learns long-term and short-term preferences of the data. This is primarily due to the average trajectory length in the training set of CA being 8.68, whereas the value is 7.55 for NYC and 8.22 for TKY. This enhances the advantage of MTNet over other models in segregating user preferences across different time slots from the users’ check-in trajectories. It can be also observed that GETNext achieves the second best performance on the three datasets. GETNext designs a trajectory flow map to effectively capture common movement patterns of users, which tackle challenges related to inactive users and short trajectories well, while it is not able to learn fine-grained user preferences at different time slots. Besides, STAN also performs well on the three datasets using a bi-attention model to learn the spatio-temporal correlations of user’s trajectory. Impact of Time Slot Factor In the implementation, we convert timestamps to integers ranging from 1 to 24 for check-in time embedding learning. Then, the integer times are assigned to their corresponding time slots. To investigate the influence of the time slot factor on MTNet, we conduct experiments with various numbers of period nodes. It can be observed from Figure 5 that the performance with 12 period nodes outperforms other values across all metrics, where each period is 2-hour length. From empirical analysis on TKY dataset, we figure out that the visitation time window (the interval between the earliest and latest check-in times) for users to check-in the same POI is around 2 hours. The observation of the dataset aligns with the experimental results, indicating that the number of time slots used to construct the Mobility Tree should correspond to the density of user check-in timestamps. Moreover, when the time slot is set to 2, 3, 20, or 24, we observe a consistently poor performance of the model across all metrics. This suggests that if we divide the day into only 2 or 3 time periods (i.e., daytime and nighttime) or if we excessively fine-grain the time intervals (such as every hour), the model’s ability to learn user preferences at specific times diminishes. 
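To illustrate how such a division works when grouping check-ins under day and period nodes, the sketch below buckets a toy trajectory with 12 two-hour period nodes, the best setting found above for TKY. It is a simplified illustration under our own assumptions (uniform slots over a 24-hour day, hypothetical field names); the authors' preprocessing may differ in detail.

```python
# Simplified grouping of a trajectory into (day, period) buckets, i.e. the
# day-node and period-node levels of a Mobility Tree.
from collections import defaultdict
from datetime import datetime

def build_buckets(trajectory, n_periods=12):
    """trajectory: list of (poi_id, datetime) check-ins in chronological order."""
    hours_per_period = 24 // n_periods                 # 2-hour slots when n_periods=12
    tree = defaultdict(lambda: defaultdict(list))      # day node -> period node -> POIs
    for poi_id, ts in trajectory:
        period = ts.hour // hours_per_period           # fine-grained time slot index
        tree[ts.date()][period].append(poi_id)
    return {day: dict(periods) for day, periods in tree.items()}

traj = [(17, datetime(2013, 8, 1, 14, 50)),   # afternoon check-in
        (42, datetime(2013, 8, 1, 18, 30)),   # evening check-in
        (17, datetime(2013, 8, 2, 12, 30))]
print(build_buckets(traj))
# {date(2013, 8, 1): {7: [17], 9: [42]}, date(2013, 8, 2): {6: [17]}}
```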
Ablation Study
In this section, we conduct an ablation study to evaluate the contributions of each component of MTNet. We perform a total of eight experiments on all three datasets: 1) the full model; 2) the model without the multi-task learning mechanism, which implies that the model's loss function is simply the direct summation of the losses generated from predictions for POI, category, and geography; 3) the model without geography prediction; 4) the model without category prediction; 5) the model without the multi-objective prediction task (removing both category and geography prediction); 6) the model without IAC; 7) the model without IRC; 8) the model without current period node and last check-in node prediction.

[Figure 5: Performance of MTNet with different period node numbers of 2, 3, 4, 6, 8, 12, 16, 20 and 24 on TKY, measured by Acc@1, Acc@5, Acc@10 and MRR.]

The experimental results on TKY are documented in Table 3. It can be observed that the full model significantly surpasses the other variants. Furthermore, we can observe that IAC and IRC have a more pronounced impact on the overall model performance. The model's performance drops by 8.63% and 7.14% after removing the IAC and IRC modules, respectively, highlighting the substantial influence of IAC and IRC on the model's ability to learn user trajectory preferences across different time slots. When the model does not employ multi-task learning, the average performance of the model decreases by 4.52%. This indicates that the multi-task learning mechanism has a significant impact on the model's training, as it helps the model learn more crucial information about check-ins. In addition, when we remove the auxiliary predictions of the current period node and last raw check-in node, we find a certain degree of performance decline as well. This shows that, during the recommendation stage, the model not only relies on the user's historical preferences but also heavily considers the most recent trajectory for predicting the next POI.

Variants        Acc@1    Acc@5    Acc@10   MRR
Full Model      0.2575   0.4977   0.5848   0.3659
w/o multitask   0.2410   0.4770   0.5631   0.3499
w/o coo         0.2353   0.4766   0.5595   0.3453
w/o cat         0.2398   0.4799   0.5659   0.3484
w/o coo&cat     0.2377   0.4791   0.5669   0.3474
w/o IAC         0.2353   0.4757   0.5631   0.3469
w/o IRC         0.2391   0.4713   0.5504   0.3453
w/o node        0.2478   0.4806   0.5686   0.3567
Table 3: Ablation studies on TKY. The best results are highlighted in bold.

[Figure 6: (a) Visualization of different users within a unified time period (20:00-22:00) on TKY. (b) Visualization of clustering results for a user's trajectories at different periods.]

Visualization
To enhance the understanding of MTNet, we conduct visualizations of different users' representations during the same time period, as well as visualizations of the representations of the same user's trajectories across different time periods. We conduct an experiment to visualize trajectory embeddings for all users.
While we can observe effective differentiation among users, the sheer volume of users (over 2000) makes it hard to discern individual patterns. Therefore, we intentionally selected users with a substantial trajectory count (over 150) to ensure a clear presentation, as depicted in Figure 6a. We can observe that during the same time period of 20:00-22:00, the representations of different users are effectively distinguished. This indicates that MTNet has indeed learned the distinct preferences of various users during the same time period. In Figure 6b, as shown, when a user is in different time periods, their trajectory representations also exhibit distinct variations. This demonstrates that the model can learn the unique preferences of users during different time slots. Conclusion This paper introduces MTNet, which leverages a novel user check-in description structure of Mobility Tree for the next POI recommendation. The Mobility Tree’s distinctive attribute lies in its multi-granularity time slot nodes, specially designed to encapsulate users’ diverse preferences across different periods. We propose a four-step node interaction operation, which facilitates the comprehensive propagation and aggregation of check-in features, traversing from the leaf nodes to the root node. In pursuit of a more robust representation, MTNet adopts a multitasking training strategy that involves the simultaneous prediction of the next POI with the contextual information, thereby improving the recommendation performance. Our experiments on three realworld LBSN datasets suggested that MTNet outperforms ten state-of-the-art methods for the next POI recommendation. For future work, we will expand the tree structure with heterogeneous nodes to facilitate more spatial-temporal context interaction for time slot preference exploration. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8541 Acknowledgments This work was supported by the National Key R&D Program of China (2022YFB3103202), the National Natural Science Foundation of China (No. U1936206, 62002178, 62272250), and the Natural Science Foundation of Tianjin, China (No. 22JCJQJC00150). References Cho, E.; Myers, S. A.; and Leskovec, J. 2011. Friendship and Mobility: User Movement in Location-Based Social Networks. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, 1082–1090. New York, NY, USA: Association for Computing Machinery. ISBN 9781450308137. Feng, J.; Li, Y.; Zhang, C.; Sun, F.; Meng, F.; Guo, A.; and Jin, D. 2018. Deepmove: Predicting human mobility with attentional recurrent networks. In Proceedings of the 2018 world wide web conference, 1459–1468. Feng, S.; Li, X.; Zeng, Y.; Cong, G.; and Chee, Y. 2015. Personalized ranking metric embedding for next new POI recommendation. In IJCAI’15 Proceedings of the 24th International Conference on Artificial Intelligence, 2069–2075. ACM. Guo, Q.; Sun, Z.; Zhang, J.; and Theng, Y.-L. 2020. An attentional recurrent neural network for personalized next location recommendation. In Proceedings of the AAAI Conference on artificial intelligence, volume 34, 83–90. Hochreiter, S.; and Schmidhuber, J. 1997. Long Short-Term Memory. Neural Computation, 9(8): 1735–1780. Huang, L.; Ma, Y.; Wang, S.; and Liu, Y. 2019. An attentionbased spatiotemporal lstm network for next poi recommendation. IEEE Transactions on Services Computing, 14(6): 1585–1597. Kendall, A.; Gal, Y.; and Cipolla, R. 2018. 
Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7482–7491. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lian, D.; Wu, Y.; Ge, Y.; Xie, X.; and Chen, E. 2020. Geography-aware sequential location recommendation. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, 2009–2019. Lim, N.; Hooi, B.; Ng, S.-K.; Goh, Y. L.; Weng, R.; and Tan, R. 2022. Hierarchical multi-task graph recurrent network for next poi recommendation. In Proceedings of the 45th international ACM SIGIR conference on Research and development in Information Retrieval. Liu, Q.; Wu, S.; Wang, L.; and Tan, T. 2016. Predicting the Next Location: A Recurrent Model with Spatial and Temporal Contexts. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). Liu, Y.; Pham, T.-A. N.; Cong, G.; and Yuan, Q. 2017. An experimental evaluation of point-of-interest recommendation in location-based social networks. Proceedings of the VLDB Endowment, 10(10): 1010–1021. Luo, Y.; Liu, Q.; and Liu, Z. 2021. STAN: Spatio-Temporal Attention Network for Next Location Recommendation. In Proceedings of the Web Conference 2021, WWW ’21, 2177–2185. New York, NY, USA: Association for Computing Machinery. ISBN 9781450383127. Manotumruksa, J.; Macdonald, C.; and Ounis, I. 2018. A contextual attention recurrent architecture for context-aware venue recommendation. In The 41st international ACM SIGIR conference on research & development in information retrieval, 555–564. Rao, X.; Chen, L.; Liu, Y.; Shang, S.; Yao, B.; and Han, P. 2022. Graph-flashback network for next location recommendation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1463– 1471. Rendle, S.; Freudenthaler, C.; and Schmidt-Thieme, L. 2010. Factorizing Personalized Markov Chains for NextBasket Recommendation. In Proceedings of the 19th International Conference on World Wide Web, WWW ’10, 811–820. New York, NY, USA: Association for Computing Machinery. ISBN 9781605587998. S´anchez, P.; and Bellog´ın, A. 2022. Point-of-interest recommender systems based on location-based social networks: a survey from an experimental perspective. ACM Computing Surveys (CSUR), 54(11s): 1–37. Sun, K.; Qian, T.; Chen, T.; Liang, Y.; Nguyen, Q. V. H.; and Yin, H. 2020. Where to go next: Modeling long-and short-term user preferences for point-of-interest recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 214–221. Tai, K. S.; Socher, R.; and Manning, C. D. 2015. Improved semantic representations from tree-structured long shortterm memory networks. arXiv preprint arXiv:1503.00075. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, D.; Wang, X.; Xiang, Z.; Yu, D.; Deng, S.; and Xu, G. 2021. Attentive sequential model based on graph neural network for next poi recommendation. World Wide Web, 24(6): 2161–2184. Wu, Y.; Li, K.; Zhao, G.; and Qian, X. 2022. Personalized Long- and Short-term Preference Learning for Next POI Recommendation. IEEE Transactions on Knowledge and Data Engineering, 34(4): 1944–1957. Yang, D.; Zhang, D.; Zheng, V. W.; and Yu, Z. 2015. 
VIXEN: Visual Text Comparison Network for Image Difference Captioning
Alexander Black1, Jing Shi2, Yifei Fan2, Tu Bui1, John Collomosse1,2
1CVSSP, University of Surrey 2Adobe Research
{alex.black|t.v.bui}@surrey.ac.uk, {jingshi|yifan|collomos}@adobe.com

Abstract
We present VIXEN – a technique that succinctly summarizes in text the visual differences between a pair of images in order to highlight any content manipulation present. Our proposed network linearly maps image features in a pairwise manner, constructing a soft prompt for a pretrained large language model. We address the challenge of the low volume of training data and the lack of manipulation variety in existing image difference captioning (IDC) datasets by training on synthetically manipulated images from the recent InstructPix2Pix dataset, generated via the prompt-to-prompt editing framework. We augment this dataset with change summaries produced via GPT-3. We show that VIXEN produces state-of-the-art, comprehensible difference captions for diverse image contents and edit types, offering a potential mitigation against misinformation disseminated via manipulated image content. Code and data are available at http://github.com/alexblck/vixen

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
Image manipulation often forms the basis for fake news and misinformation. This threat may be countered by tools that encourage users to reflect upon the provenance and content of images. Given the reactionary nature of sharing, such tools should be intuitively comprehensible to enable users to make fast, informed trust decisions (Gregory 2019). This paper contributes VIXEN – a method for intuitively summarizing the visual change between a pair of images using a short passage of text. Emerging open standards (e.g., C2PA (Coalition for Content Provenance and Authenticity 2023)) describe provenance frameworks that match images circulating in the wild to a federated database of originals using perceptual hashing methods (Black et al. 2021b; Nguyen et al. 2021; Black et al. 2021a; Pizzi et al. 2022). VIXEN presents a comprehensible way to review any image manipulation evidenced by such a matching (Fig. 1).

Image difference captioning (IDC) is typically addressed by representations that seek to model the spatial-semantic distribution of concepts present in a scene – for example, the relative positions of objects in CCTV footage (Jhamtani and Berg-Kirkpatrick 2018), or of primitive geometric shapes (Johnson et al. 2017). More complex kinds of manipulations require expertise to construct and thus cannot be easily scaled up in volume (Tan et al. 2019). To this end, we make three technical contributions:

1. Cross-modal image differencing. We present a novel image differencing concept comprising a 2-branch GPT-J architecture to embed and compare facts derived from the image pair using CLIP-based image encoding. The model generates text conditioned on that comparison to explain salient changes between the image pair. We show our textual explanations to be succinct and comprehensible to non-expert users and to be quantitatively closer to ground-truth edit captions than state-of-the-art captioning methods.

2. Synthetic Edit Training. We propose a synthetic pairwise training framework for VIXEN leveraging recent prompt-to-prompt (P2P) and language-based image editing (LBIE) approaches to supervise fine-tuning on generative image content, showing good generalization to unseen content.
3. Augmented IP2P Dataset. We release an augmentation of the recent InstructPix2Pix (IP2P) dataset with synthetic change captions generated via GPT-3 as a basis for training and evaluating VIXEN.

We demonstrate that VIXEN achieves higher performance than prior image difference captioning methods and is able to generalize to multiple datasets.

Figure 1: Visual change summarization produced by VIXEN for original-manipulated image pairs. VIXEN is able to observe both background (left) and main subject (mid) changes as well as generalize to other datasets (right).

Related Work
Image difference captioning (IDC) is closely related to image captioning and visual question answering, both requiring a visual understanding system to model images and a language understanding system capable of generating syntactically correct captions. Recent progress in IDC depends heavily on advances in visual and text modeling, together with cross-domain learning techniques that bridge the representation gap between them.

Early visual content modeling approaches employ global CNN features such as VGG (Donahue et al. 2015) and ResNet (Rennie et al. 2017) as input signals to the text generation models, leveraging the semantically rich and compact representations deliverable from these models. To better capture multi-object representations and their relations, regional modeling methods were developed (Lu et al. 2017; Gu et al. 2018; Anderson et al. 2018; Huang et al. 2019). In some, images are gridded into non-overlapping patches upon which CNN features are extracted; others instead use outputs from an early layer of a pretrained ResNet model to effectively capture spatial features in a grid fashion. In contrast, (Cornia et al. 2020; Anderson et al. 2018; Huang et al. 2019) employ a Region Proposal Network (RPN) to extract features from potential candidate objects, offering better alignment with semantic objects mentioned in the paired captions. Alternative approaches include graph-based (Yang et al. 2019) and tree-based networks (Yao et al. 2019) that capture the relations of objects at multiple levels of granularity.

For a long time, RNNs/LSTMs (Graves and Graves 2012) have been used to model text due to their inherent sequential properties. Single-layer RNNs (Vinyals et al. 2015; Mao et al. 2015) or double-layer LSTMs (Donahue et al. 2015; Anderson et al. 2018; Yao et al. 2019) are employed along with various techniques to integrate image features deeper into the recurrent process, including additive attention (Stefanini et al. 2022). During inference, captions are generated in an autoregressive fashion – the prediction of a word is conditioned on all previous words. While this improves linguistic coherence, RNN/LSTM-based approaches struggle to model long captions. This problem is alleviated in recent transformer-based approaches thanks to the full-attention mechanism (Luo et al. 2021; Wang et al. 2021; Cornia et al. 2020). More advanced transformer-based approaches such as BERT (Devlin et al. 2018), GPT (Brown et al. 2020) and LLaMA (Touvron et al. 2023) have been successfully applied in various visual-language tasks (Hu et al. 2022; Mokady, Hertz, and Bermano 2021; Gao et al. 2023; Zhang et al. 2021; Li et al. 2020).

Visual language modeling aims to bridge the gap between image/video and text representations for specific tasks such as joint embedding (e.g., CLIP (Radford et al. 2021) and LIMoE (Mustafa et al.
2022) for cross-domain retrieval), text-to-image (e.g., Stable Diffusion (Rombach et al. 2022) for text-based image generation, InstructPix2Pix (Brooks, Holynski, and Efros 2022) for image editing) and image-to-text (e.g., visual question answering (Alayrac et al. 2022; Wang et al. 2021), visual instructions (Gao et al. 2023; Driess et al. 2023)). In the context of image captioning, image-text mapping strategies can be categorized into two research strands. The first strand involves the early fusion of image and text features for better alignment between image objects and words (Tsimpoukelli et al. 2021; Mokady, Hertz, and Bermano 2021; Wang et al. 2021; Li et al. 2020). These methods adopt BERT-like training strategies that input a pair of an image and a masked caption and predict the masked words. At inference, the input caption is simply replaced by a start token or a prefix phrase, e.g., 'A picture of'. The second research strand focuses on learning a direct transformation from image to text embedding. Early CNN-based approaches feed image features as the hidden states of the LSTM text modules (Donahue et al. 2015; Vinyals et al. 2015; Yao et al. 2019; Karpathy and Fei-Fei 2015; Rennie et al. 2017), while later transformer-based methods favor cross-attention (Luo et al. 2021; Cornia et al. 2020). Recently, in both research strands, there has been a trend of leveraging powerful pretrained large language and vision models and learning a simple mapping between the two domains (Merullo et al. 2022; Eichenberg et al. 2021; Li et al. 2023; Tsimpoukelli et al. 2021; Mokady, Hertz, and Bermano 2021).

Image difference captioning is a form of image captioning in which the caption would ideally ignore objects common to both images and instead highlight subtle changes between them. As the first work addressing IDC, Spot-the-Diff (Jhamtani and Berg-Kirkpatrick 2018) identifies potential change clusters and models them using an LSTM-based network. Their work relies on the difference between the two input images at the pixel level, and is therefore sensitive to noise and geometric transformations. DUDA (Park, Darrell, and Rohrbach 2019) instead computes the image difference at the CNN semantic level, improving robustness against slight global changes. In M-VAM (Shi et al. 2020) and VACC (Kim et al. 2021), a view-point encoder is proposed to mitigate potential view-point differences, and VARD (Tu et al. 2023a) proposes a viewpoint-invariant representation network to explicitly capture the change. Meanwhile, (Sun et al. 2022) uses bidirectional encoding to improve change localization, and NCT (Tu et al. 2023b) aggregates neighboring features with a transformer. These methods mostly focus on the image modality and take advantage of benchmark-specific properties, such as near-identical views in Spot-the-Diff (Jhamtani and Berg-Kirkpatrick 2018) or synthetic scenes with limited objects and change types (color, texture, add, drop, remove) in CLEVR (Park, Darrell, and Rohrbach 2019). More recently, IDC-PCL (Yao, Wang, and Jin 2022) and CLIP4IDC (Guo, Wang, and Laaksonen 2022) adopt BERT-like training strategies to model difference captioning language, achieving state-of-the-art performance.

Figure 2: Model architecture and data captioning augmentation pipeline diagram. We use a pre-trained image encoder network E to produce a representation of two images. Both of these are projected into the input space of a large language model (LM) by a trained linear projection layer P.
Frozen layers are marked in blue, trainable in red.

Methodology
Our proposed method relies on synthetically generated image pairs and associated difference captions. We describe the creation process of the visual and textual components of the dataset, the architecture of our proposed approach, and the training details necessary to reproduce the results.

Data Generation
To train our proposed approach, we require a large dataset of image pairs, each annotated with a summary of the changes between them. We propose using images generated by Stable Diffusion (Rombach et al. 2022) and edited with prompt-to-prompt (Hertz et al. 2022) using the pipeline presented in InstructPix2Pix (Brooks, Holynski, and Efros 2022) (IP2P). One of our contributions is the introduction of difference summary captions to IP2P images, generated using GPT-3 (Brown et al. 2020) in a few-shot learning fashion.

The InstructPix2Pix dataset is generated using the prompt-to-prompt editing framework, which provides text-based editing capabilities for synthesized images by injecting the attention maps associated with a specific word in the prompt to control the attention maps of the edited image. Therefore, all that is required to generate an image pair is two textual prompts with slight differences. IP2P uses a fine-tuned GPT-3 language model to generate plausible edits based on real input captions from LAION (Schuhmann et al. 2022). In addition to the image pairs and captions, the dataset also contains an instruction that describes what edits have to be applied in order to generate the output image.

While these instructions are sufficient for the original InstructPix2Pix task of text-based image editing, they often omit information regarding the input content. For example, for the prompt pair "a photo of a cat"/"a photo of a dog", the edit instruction might be "as a dog" or "turn it into a dog". We aim to summarize the changes by referencing both the original and edited image contents; the desirable edit summarization caption would therefore be "the cat has been replaced by a dog". To achieve this, we use the GPT-3 language model in a few-shot learning fashion, including several examples of input-output-instruction-summary quadruplets in which the summary captions are constructed manually. While IP2P uses a fine-tuned GPT-3 to generate the instruction and second image captions, we have found fine-tuning unnecessary in our case. Since our task does not require creativity from the model, but rather summarization of the input information, the pre-trained 'davinci' version of GPT-3 is enough to produce the captions needed.
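As a concrete illustration of the few-shot setup above, the following is a minimal sketch of how such a prompt could be assembled. The example quadruplet and the header wording are illustrative assumptions; the paper does not publish its exact prompt.

```python
# Minimal sketch of few-shot prompt assembly for GPT-3 caption augmentation.
# The example quadruplet and header wording are illustrative assumptions,
# not the authors' exact prompt.

FEW_SHOT_EXAMPLES = [
    {
        "input": "a photo of a cat",
        "output": "a photo of a dog",
        "instruction": "turn it into a dog",
        "summary": "the cat has been replaced by a dog",
    },
    # ... further manually written quadruplets ...
]

def build_prompt(input_cap: str, output_cap: str, instruction: str) -> str:
    """Assemble an in-context prompt asking GPT-3 to summarize an edit."""
    parts = ["Summarize the difference between the two image captions.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Input: {ex['input']}\nOutput: {ex['output']}\n"
            f"Instruction: {ex['instruction']}\nSummary: {ex['summary']}\n"
        )
    parts.append(
        f"Input: {input_cap}\nOutput: {output_cap}\n"
        f"Instruction: {instruction}\nSummary:"
    )
    return "\n".join(parts)
```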
Architecture
Our image captioning approach is inspired by (Merullo et al. 2022), which uses a trainable linear mapping between the image encoder and a large language model. However, instead of passing the projected embeddings of a single image to the language model, we project the embeddings of two images and concatenate them before feeding them into the language model. This architecture is illustrated in Figure 2.

Given a source image $I$ and its edited version $I'$, we use an image encoder $E$ to extract image feature maps

$f = E(I);\; f' = E(I') \in \mathbb{R}^{k \times h}$, (1)

where $h$ is the size of the feature maps and $k$ is the prompt sequence length. We use a fully-connected layer $P$ to linearly project the image features into the dimensionality of the language model input $e$, creating a soft prompt $s_v$:

$s_v = [P(f), P(f')] \in \mathbb{R}^{2k \times e}$, (2)

where $[\cdot, \cdot]$ denotes concatenation. Finally, we append a prefix $s_t$, made of the embeddings of the tokens for "The differences between the images are as follows: "/"Edit instructions:", to the visual prompt $s_v$ to obtain the final prompt $s = [s_v, s_t]$ used for generating the summarization text.

We explore two options for $E$. Firstly, following (Merullo et al. 2022; Eichenberg et al. 2021), we use CLIP RN50x16 as $E$. The feature map before the pooling layer has dimensions 12×12×3072, flattened to $k \times h$ = 144×3072. Secondly, we use ViT-g followed by a Q-Former from BLIP-2 (Li et al. 2023); in this case the sequence length is $k$ = 257. We refer to the CLIP and Q-Former versions of VIXEN as VIXEN-C and VIXEN-Q, respectively. For the language model, we use GPT-J (Wang and Komatsuzaki 2021), which has input space dimensionality $l$ = 4096. Consequently, for both configurations of $E$, our linear projection layer $P$ has input and output dimensions $h$ = 3072 and $l$ = 4096, respectively.

The loss for the captioning task objective is defined as

$\mathcal{L} = -\sum_{i=1}^{m} \ell(s_v, s^t_1, \ldots, s^t_i)$, (3)

where $m$ is a variable token length and $\ell$ is the next-token log-probability conditioned on the previous sequence elements:

$\ell(s_v, s^t_1, \ldots, s^t_i) = \log p(t_i \mid x, t_1, \ldots, t_{i-1})$. (4)

Training
During training, we may provide distractor image pairs with no changes present by providing the same image as both inputs ($I = I'$). The frequency of the presence of distractor images is determined by probability $p_d$. In such cases, the target difference summary text is chosen at random from a list of pre-defined sentences, all synonymous with "there is no difference". For all our models we first train with $p_d = 0$ for two epochs, followed by two more epochs with $p_d = 0.5$. Total training time is approximately 100 hours on a single A100 GPU. We use gradient accumulation to train with an effective batch size of 2048 and optimize the loss using the AdamW optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and weight decay 0.05. For the baseline approaches CLIP4IDC and IDC, we implemented dataloaders for our dataset, precomputed all necessary supporting data (e.g., ResNet-101 features, negative samples, and a vocabulary dictionary for IDC) and followed their standard two-step training pipeline with the default hyperparameters specified in the GitHub repos.
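To make the two-branch mapping of Eqs. (1)-(2) concrete, here is a minimal PyTorch sketch of the projection and soft-prompt construction. The class and argument names are ours, and the dimensions follow the VIXEN-C configuration described above; the encoder and language model themselves are assumed to be provided elsewhere.

```python
import torch
import torch.nn as nn

class VixenProjector(nn.Module):
    """Linearly maps the feature maps of both images into the LM input
    space and concatenates them into a soft prompt, Eqs. (1)-(2)."""

    def __init__(self, h: int = 3072, e: int = 4096):
        super().__init__()
        self.proj = nn.Linear(h, e)  # the trained projection layer P

    def forward(self, f: torch.Tensor, f_prime: torch.Tensor,
                prefix_emb: torch.Tensor) -> torch.Tensor:
        # f, f_prime: (batch, k, h) feature maps of the two images
        s_v = torch.cat([self.proj(f), self.proj(f_prime)], dim=1)  # (batch, 2k, e)
        # prefix_emb: embedded text prefix s_t, e.g. "The differences
        # between the images are as follows: "; final prompt s = [s_v, s_t]
        return torch.cat([s_v, prefix_emb], dim=1)
```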
Experiments
Data
We perform our main evaluation on a subset of the InstructPix2Pix (Brooks, Holynski, and Efros 2022) dataset, unseen by models during training. To ensure a high quality of the synthetically generated image-caption pairs, we score their correspondence via a user study. Additionally, we crowdsource annotations for a subset of images from the PSBattles (Heller, Rossetto, and Schuldt 2018) dataset and fine-tune and evaluate on Image Editing Request (Tan et al. 2019).

The InstructPix2Pix dataset presents challenges due to its synthetically generated nature, as some of the edit summarization captions fail to accurately describe the changes made to the image pairs. This is mainly due to prompt-to-prompt occasionally generating images that do not depict the desired change accurately enough. This is further discussed in the limitations section below and illustrated in Figure 5 (mid). To ensure a reliable evaluation, we conducted a user study using Amazon Mechanical Turk (MTurk) on a sample of 5,000 images from the dataset. This results in 837,466/93,052/5,000 train/validation/test splits. The study involved three participants per image-caption pair (95 unique participants) and aimed to rate the degree of correspondence between the image pair and its associated caption, using a scoring system from 1 (low) to 5 (high). The distribution of scores is 1: 5%, 2: 13%, 3: 26%, 4: 33%, 5: 24%. Figure 3 shows random samples of the image-caption pairs for different score threshold values.

Figure 3: Image-caption pairs with an average correspondence score of 3 (left): may contain global changes when only local ones are expected (top) or fail to produce desired edits due to vague captioning (bottom); 4 (mid): partially satisfy the caption, occasionally only some properties are realized correctly (top) or an existing object is replaced rather than added to the background (bottom); 5 (right): mostly faithful to the depicted edits.

PSBattles is a dataset of images edited in Adobe Photoshop™, collected from the 'Photoshopbattles' subreddit. The dataset contains 10k original images, paired with several manipulated variants. There are 102k variants in total, contributed by 31k artists. We randomly sample 100 image pairs for crowd-sourced annotation on MTurk and collect captions from 3 participants per image pair.

Image Editing Request is a dataset of realistic photographs, paintings and illustrations paired with instructions written by humans. It contains 4k image-annotation pairs and incorporates a wide variety of edits, including affine edits and crops that are not present in the other datasets.

Table 1: Image difference captioning performance on the IP2P, PSBattles and Image Editing Request datasets. Evaluated on semantic similarity (MPNet), BLEU-4 (B@4), CIDEr (C), METEOR (M) and ROUGE-L (R). For IP2P, performance is reported for subsets at image-caption correspondence thresholds of 3, 4, 5 (values listed as @3/@4/@5).

InstructPix2Pix:
| Method | MPNet | B@4 | C | M | R |
| VIXEN-Q (ours) | 56.9/59.1/62.3 | 16.5/18.5/20.8 | 80.3/93.9/134.9 | 17.1/18.4/20.6 | 38.0/40.1/42.2 |
| VIXEN-C (ours) | 59.3/61.4/61.5 | 16.8/18.2/19.2 | 96.6/107.0/126.1 | 17.6/18.6/19.6 | 39.2/40.8/39.3 |
| CLIP4IDC | 56.8/58.3/60.7 | 15.8/17.3/17.7 | 58.8/71.0/114.7 | 20.9/22.5/23.3 | 33.3/35.1/34.0 |
| IDC | 38.3/38.6/37.4 | 8.2/8.8/7.7 | 4.4/5.0/5.6 | 16.0/16.8/16.5 | 29.1/30.0/27.7 |

PSBattles:
| Method | MPNet | B@4 | C | M | R |
| VIXEN-Q (ours) | 45.1 | 5.8 | 7.5 | 11.0 | 22.2 |
| VIXEN-C (ours) | 40.3 | 4.5 | 7.7 | 9.5 | 20.5 |
| CLIP4IDC | 32.7 | 3.2 | 5.0 | 10.1 | 21.7 |
| IDC | 27.0 | 1.0 | 0.7 | 9.2 | 19.5 |

Image Editing Request:
| Method | MPNet | B@4 | C | M | R |
| VIXEN-Q (ours, FT) | 50.1 | 7.9 | 35.4 | 14.4 | 33.5 |
| VIXEN-C (ours, FT) | 52.5 | 8.6 | 38.1 | 15.4 | 42.5 |
| VARD | – | 10.0 | 35.7 | 14.8 | 39.0 |
| CLIP4IDC | – | 8.2 | 32.2 | 14.6 | 40.4 |
| NCT | – | 8.1 | 34.2 | 15.0 | 38.8 |
| BiDiff | – | 6.9 | 27.7 | 14.6 | 38.5 |
| DUDA | – | 6.5 | 27.8 | 12.4 | 37.3 |
| rel-att | – | 6.7 | 26.4 | 12.8 | 37.4 |

Table 2: Impact of distractor images on performance of the model, evaluated on semantic similarity (MPNet), BLEU-4 (B@4), CIDEr (C), METEOR (M) and ROUGE-L (R).

| Dataset | Method | MPNet | B@4 | C | M | R |
| InstructPix2Pix | VIXEN-C (ours) | 59.3 | 16.8 | 96.6 | 17.6 | 39.2 |
| InstructPix2Pix | VIXEN-C p=0 | 54.4 | 15.4 | 88.5 | 16.1 | 35.9 |
| PSBattles | VIXEN-C (ours) | 40.3 | 4.5 | 7.7 | 9.5 | 20.5 |
| PSBattles | VIXEN-C p=0 | 37.8 | 4.2 | 7.2 | 8.9 | 19.2 |

Metrics
We evaluate the performance of difference captioning methods using both traditional N-gram-based metrics (BLEU-4 (Papineni et al.
2002), CIDEr (Vedantam, Lawrence Zitnick, and Parikh 2015), METEOR (Banerjee and Lavie 2005) and ROUGE-L (Lin 2004)), as well as a semantic similarity metric based on a language transformer model. We have found that, due to the larger diversity of images and edits, the generated captions need to encompass a significantly larger vocabulary to accurately describe the changes. As a result, there are instances where the captions may not align word for word with the actual image differences, but still convey a similar meaning. To account for this, we use a semantic textual similarity metric. We define the semantic textual similarity $S_{sim}$ between the target $c$ and generated $c'$ summarizations as

$S_{sim} = \cos(E(c), E(c'))$, (5)

where $\cos(A, B) = \frac{A \cdot B}{\|A\| \, \|B\|}$ denotes cosine similarity and $E$ is a sentence transformer. We use MPNet (Song et al. 2020) as the best-performing sentence transformer to map sentences to 768-dimensional normalized embeddings.

We also assess the quality of captions via a crowd-sourced study on Amazon Mechanical Turk (MTurk). Participants are presented with both the original and edited images. For each image pair, participants are tasked to choose one of the 4 captions, arranged in a random order. In case all four captions do not summarize the differences well enough, participants may choose the 'none of the above' option. Each task is performed by 3 unique participants. The preference is considered to be given to a particular method if two or more participants have voted for it.
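A minimal sketch of the metric of Eq. (5) using the sentence-transformers library follows; the all-mpnet-base-v2 checkpoint name is our assumption, as the paper specifies MPNet but not a particular checkpoint.

```python
from sentence_transformers import SentenceTransformer, util

# MPNet sentence transformer; the checkpoint name is an assumption.
model = SentenceTransformer("all-mpnet-base-v2")

def semantic_similarity(target: str, generated: str) -> float:
    """S_sim of Eq. (5): cosine similarity of sentence embeddings."""
    emb = model.encode([target, generated], normalize_embeddings=True)
    return float(util.cos_sim(emb[0], emb[1]))
```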
Figure 4: Examples of edit summarizations for global changes, object replacement and material changes produced by VIXEN and CLIP4IDC on the InstructPix2Pix (a) and PSBattles (b) datasets. A failure case is marked with a dashed red box.

Results
For the proposed datasets, we compare the performance of VIXEN against two baselines, IDC (Yao, Wang, and Jin 2022) and CLIP4IDC (Guo, Wang, and Laaksonen 2022). We train both of them on our augmented IP2P dataset, following the authors' guidelines. For the IER dataset, we fine-tune on the IER training set and compare against the reported numbers of multiple baselines. We report the evaluation results of both the proposed method and the baselines in Table 1, with examples shown in Figure 4. Our methods achieve a higher score in all metrics except METEOR (IP2P), where CLIP4IDC scores higher than both proposed architectures. This indicates that VIXEN is more tuned towards precision rather than recall of n-grams, as METEOR heavily favors recall. For IP2P, results are reported at three different correspondence thresholds. For lower threshold values, the best results are obtained by VIXEN-C. VIXEN-Q seems to benefit the most from a threshold increase and outperforms other methods on pairs with a correspondence score of 5. While all methods suffer significant performance drops when evaluated on a dataset from a different domain, VIXEN-Q shows a better ability to generalize to new data by scoring the highest on the PSBattles dataset. After fine-tuning the model on Image Editing Request, VIXEN-C outperforms previous methods on most metrics, except the B@4 of VARD (Tu et al. 2023a).

The results of the crowd-sourced user preference study, shown in Figure 6, demonstrate that users prefer difference captions generated by VIXEN more often than others. For the IP2P dataset, captions generated by VIXEN-Q and VIXEN-C obtained a majority vote in 32% and 26% of the cases, respectively, followed by CLIP4IDC and IDC with 24% and 15%. For the PSBattles dataset, the highest preference score is achieved by VIXEN-C with 15% of the votes. Participants chose the 'None of the above' option in 75% of the cases for PSBattles, as opposed to just 2% in IP2P. This indicates that generalization to new data domains remains a challenging task.

Figure 6: User preference study results. Study participants are shown an image pair and captions generated by four methods on the IP2P and PSBattles datasets.

During inference we assume an input where one image is an edited version of the other, but we demonstrate the benefits of having distractor same-image pairs during training. The possibility of a no-edit case makes it harder for the model to guess the right answer by memorizing the most frequent edits within the dataset. Table 2 shows that setting the probability of same-image pairs to $p = 0$ during the training of VIXEN-C yields worse results on both the IP2P and PSBattles datasets.

Table 3 shows performance results for different feature fusion strategies that redefine $s_v$ in Eq. (2). We have observed that concatenation leads to slightly better performance than subtraction, addition or multiplication, and that taking the mean of the two features causes a significant performance drop. This shows that retaining the information of both image features without degradation is important for the task.

Table 3: Image feature fusion ablation of VIXEN-C on InstructPix2Pix.
| Fusion method | B@4 | C | M | R |
| Concatenation | 16.8 | 96.6 | 17.6 | 39.2 |
| Subtraction | 16.4 | 93.7 | 16.9 | 36.8 |
| Addition | 16.2 | 90.8 | 17.3 | 37.5 |
| Multiplication | 14.4 | 82.7 | 15.3 | 35.9 |
| Mean | 10.7 | 63.9 | 12.4 | 33.6 |

Limitations
In Figure 5 we show examples of VIXEN's failure cases. We identify and discuss three main challenges.

Figure 5: Limitations of the proposed method. Left: image captioning instead of difference captioning in the case of an unidentified edit. Middle: mismatch between the target text-image pair and LM runoff. Right: edit described in reverse order.

Left shows an example of a very minor difference between the two images. In such cases, VIXEN occasionally resorts to captioning the image content instead of summarizing the differences. Mid shows a mismatch between the summary and the generated images: an image pair with a slightly changed book cover, but the target caption assumes that the style of the whole image has been changed to that of a comic book. As with other LLMs, VIXEN exhibits LM runoff: having identified a concept ("cartoon character"), it might continue generating text with a strong linguistic prior ("big eyes and exaggerated features") that is absent from the images. Right shows that VIXEN may occasionally describe the differences between the images in a reversed order.

Conclusion
We presented VIXEN – an image difference captioning approach that provides textual descriptions of the manipulations applied to an image. We have augmented the InstructPix2Pix dataset of generated images with difference summarization captions generated by GPT-3 in order to train and evaluate VIXEN. We have shown that VIXEN achieves higher performance than other image difference captioning methods. We have also demonstrated that, while VIXEN shows better generalizability to other datasets, there is still a performance gap when switching from synthetic to real data.
Future works might alleviate this by including a varied spread of manipulation types in the training set, including insertion, deletion and text edits, which current generative pipelines struggle with.

References
Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. 2022. Flamingo: a visual language model for few-shot learning. NeurIPS, 35: 23716–23736.
Anderson, P.; He, X.; Buehler, C.; Teney, D.; Johnson, M.; Gould, S.; and Zhang, L. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proc. CVPR, 6077–6086.
Banerjee, S.; and Lavie, A. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc. ACL WS Intr. Extr. Eval. Measures Machine Trans. Summarization, 65–72.
Black, A.; Bui, T.; Jenni, S.; Swaminathan, V.; and Collomosse, J. 2021a. VPN: Video provenance network for robust content attribution. In Proc. CVMP, 1–10.
Black, A.; Bui, T.; Jin, H.; Swaminathan, V.; and Collomosse, J. 2021b. Deep Image Comparator: Learning to Visualize Editorial Change. In Proc. CVPR WS, 972–980. IEEE.
Brooks, T.; Holynski, A.; and Efros, A. A. 2022. InstructPix2Pix: Learning to follow image editing instructions. arXiv preprint arXiv:2211.09800.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. NeurIPS, 33: 1877–1901.
Coalition for Content Provenance and Authenticity. 2023. Technical Specification 1.3. Technical report, C2PA.
Cornia, M.; Stefanini, M.; Baraldi, L.; and Cucchiara, R. 2020. Meshed-memory transformer for image captioning. In Proc. CVPR, 10578–10587.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Donahue, J.; Anne Hendricks, L.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; and Darrell, T. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proc. CVPR, 2625–2634.
Driess, D.; Xia, F.; Sajjadi, M. S. M.; Lynch, C.; Chowdhery, A.; Ichter, B.; Wahid, A.; Tompson, J.; Vuong, Q.; Yu, T.; Huang, W.; Chebotar, Y.; Sermanet, P.; Duckworth, D.; Levine, S.; Vanhoucke, V.; Hausman, K.; Toussaint, M.; Greff, K.; Zeng, A.; Mordatch, I.; and Florence, P. 2023. PaLM-E: An Embodied Multimodal Language Model. arXiv preprint arXiv:2303.03378.
Eichenberg, C.; Black, S.; Weinbach, S.; Parcalabescu, L.; and Frank, A. 2021. MAGMA – Multimodal Augmentation of Generative Models through Adapter-based Finetuning. arXiv preprint arXiv:2112.05253.
Gao, P.; Han, J.; Zhang, R.; Lin, Z.; Geng, S.; Zhou, A.; Zhang, W.; Lu, P.; He, C.; Yue, X.; et al. 2023. LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model. arXiv preprint arXiv:2304.15010.
Graves, A.; and Graves, A. 2012. Long short-term memory. Supervised sequence labelling with recurrent neural networks, 37–45.
Gregory, S. 2019. Ticks or it didn't happen. Technical report, Witness.org.
Gu, J.; Cai, J.; Wang, G.; and Chen, T. 2018. Stack-captioning: Coarse-to-fine learning for image captioning. In Proc. AAAI, volume 32.
Guo, Z.; Wang, T.-J.; and Laaksonen, J. 2022. CLIP4IDC: CLIP for Image Difference Captioning. In Proc. Conf. Asia-Pacific Chapter Assoc. Comp. Linguistics and Int. Joint Conf. NLP, 33–42.
Heller, S.; Rossetto, L.; and Schuldt, H. 2018. The PSBattles Dataset – an Image Collection for Image Manipulation Detection. CoRR, abs/1804.04866.
Hertz, A.; Mokady, R.; Tenenbaum, J.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2022. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626.
Hu, X.; Gan, Z.; Wang, J.; Yang, Z.; Liu, Z.; Lu, Y.; and Wang, L. 2022. Scaling up vision-language pre-training for image captioning. In Proc. CVPR, 17980–17989.
Huang, L.; Wang, W.; Xia, Y.; and Chen, J. 2019. Adaptively aligned image captioning via adaptive attention time. NeurIPS, 32.
Jhamtani, H.; and Berg-Kirkpatrick, T. 2018. Learning to Describe Differences Between Pairs of Similar Images. In Proc. Conf. Empirical Methods NLP, 4024–4034.
Johnson, J.; Hariharan, B.; Van Der Maaten, L.; Fei-Fei, L.; Lawrence Zitnick, C.; and Girshick, R. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proc. CVPR, 2901–2910.
Karpathy, A.; and Fei-Fei, L. 2015. Deep visual-semantic alignments for generating image descriptions. In Proc. CVPR, 3128–3137.
Kim, H.; Kim, J.; Lee, H.; Park, H.; and Kim, G. 2021. Agnostic change captioning with cycle consistency. In Proc. ICCV, 2095–2104.
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597.
Li, X.; Yin, X.; Li, C.; Zhang, P.; Hu, X.; Zhang, L.; Wang, L.; Hu, H.; Dong, L.; Wei, F.; et al. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In Proc. ECCV, 121–137. Springer.
Lin, C.-Y. 2004. ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out, 74–81.
Lu, J.; Xiong, C.; Parikh, D.; and Socher, R. 2017. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In Proc. CVPR, 375–383.
Luo, Y.; Ji, J.; Sun, X.; Cao, L.; Wu, Y.; Huang, F.; Lin, C.-W.; and Ji, R. 2021. Dual-level collaborative transformer for image captioning. In Proc. AAAI, volume 35, 2286–2293.
Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Huang, Z.; and Yuille, A. 2015. Deep captioning with multimodal recurrent neural networks (m-RNN). In Proc. ICLR.
Merullo, J.; Castricato, L.; Eickhoff, C.; and Pavlick, E. 2022. Linearly mapping from image to text space. arXiv preprint arXiv:2209.15162.
Mokady, R.; Hertz, A.; and Bermano, A. H. 2021. ClipCap: CLIP prefix for image captioning. arXiv preprint arXiv:2111.09734.
Mustafa, B.; Ruiz, C. R.; Puigcerver, J.; Jenatton, R.; and Houlsby, N. 2022. Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts. In NeurIPS.
Nguyen, E.; Bui, T.; Swaminathan, V.; and Collomosse, J. 2021. OSCAR-Net: Object-centric scene graph attention for image attribution. In Proc. ICCV, 14499–14508.
Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. Assoc. Comp. Linguistics, 311–318.
Park, D. H.; Darrell, T.; and Rohrbach, A. 2019. Robust change captioning. In Proc. ICCV, 4624–4633.
Pizzi, E.; Roy, S. D.; Ravindra, S. N.; Goyal, P.; and Douze, M. 2022. A self-supervised descriptor for image copy detection. In Proc. CVPR, 14532–14542.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021.
Learning transferable visual models from natural language supervision. In Proc. ICML, 8748–8763. PMLR.
Rennie, S. J.; Marcheret, E.; Mroueh, Y.; Ross, J.; and Goel, V. 2017. Self-critical sequence training for image captioning. In Proc. CVPR, 7008–7024.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 10684–10695.
Schuhmann, C.; Beaumont, R.; Vencu, R.; Gordon, C.; Wightman, R.; Cherti, M.; Coombes, T.; Katta, A.; Mullis, C.; Wortsman, M.; Schramowski, P.; Kundurthy, S.; Crowson, K.; Schmidt, L.; Kaczmarczyk, R.; and Jitsev, J. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. In NeurIPS, volume 35, 25278–25294.
Shi, X.; Yang, X.; Gu, J.; Joty, S.; and Cai, J. 2020. Finding it at another side: A viewpoint-adapted matching encoder for change captioning. In Proc. ECCV, 574–590. Springer.
Song, K.; Tan, X.; Qin, T.; Lu, J.; and Liu, T.-Y. 2020. MPNet: Masked and permuted pre-training for language understanding. NeurIPS, 33: 16857–16867.
Stefanini, M.; Cornia, M.; Baraldi, L.; Cascianelli, S.; Fiameni, G.; and Cucchiara, R. 2022. From show to tell: A survey on deep learning-based image captioning. IEEE TPAMI, 45(1): 539–559.
Sun, Y.; Li, L.; Yao, T.; Lu, T.; Zheng, B.; Yan, C.; Zhang, H.; Bao, Y.; Ding, G.; and Slabaugh, G. 2022. Bidirectional difference locating and semantic consistency reasoning for change captioning. IJIS, 37(5): 2969–2987.
Tan, H.; Dernoncourt, F.; Lin, Z.; Bui, T.; and Bansal, M. 2019. Expressing Visual Relationships via Language. arXiv:1906.07689.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Tsimpoukelli, M.; Menick, J. L.; Cabi, S.; Eslami, S.; Vinyals, O.; and Hill, F. 2021. Multimodal few-shot learning with frozen language models. NeurIPS, 34: 200–212.
Tu, Y.; Li, L.; Su, L.; Du, J.; Lu, K.; and Huang, Q. 2023a. Viewpoint-Adaptive Representation Disentanglement Network for Change Captioning. IEEE Transactions on Image Processing, 32: 2620–2635.
Tu, Y.; Li, L.; Su, L.; Lu, K.; and Huang, Q. 2023b. Neighborhood Contrastive Transformer for Change Captioning. arXiv:2303.03171.
Vedantam, R.; Lawrence Zitnick, C.; and Parikh, D. 2015. CIDEr: Consensus-based image description evaluation. In Proc. CVPR, 4566–4575.
Vinyals, O.; Toshev, A.; Bengio, S.; and Erhan, D. 2015. Show and tell: A neural image caption generator. In Proc. CVPR, 3156–3164.
Wang, B.; and Komatsuzaki, A. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax.
Wang, Z.; Yu, J.; Yu, A. W.; Dai, Z.; Tsvetkov, Y.; and Cao, Y. 2021. SimVLM: Simple visual language model pretraining with weak supervision. In Proc. ICLR.
Yang, X.; Tang, K.; Zhang, H.; and Cai, J. 2019. Auto-encoding scene graphs for image captioning. In Proc. CVPR, 10685–10694.
Yao, L.; Wang, W.; and Jin, Q. 2022. Image difference captioning with pre-training and contrastive learning. In Proc. AAAI, volume 36, 3108–3116.
Yao, T.; Pan, Y.; Li, Y.; and Mei, T. 2019. Hierarchy parsing for image captioning. In Proc. ICCV, 2621–2629.
Zhang, P.; Li, X.; Hu, X.; Yang, J.; Zhang, L.; Wang, L.; Choi, Y.; and Gao, J. 2021. VinVL: Revisiting visual representations in vision-language models. In Proc. CVPR, 5579–5588.
ReGCL: Rethinking Message Passing in Graph Contrastive Learning
Cheng Ji1,2, Zixuan Huang2, Qingyun Sun1,2, Hao Peng1,2, Xingcheng Fu3, Qian Li1,2, Jianxin Li1,2*
1Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China
2School of Computer Science and Engineering, Beihang University, China
3Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, China
{jicheng,penghao,liqian,lijx}@act.buaa.edu.cn, {huangzx,sunqy}@buaa.edu.cn, [email protected]

Abstract
Graph contrastive learning (GCL) has demonstrated remarkable efficacy in graph representation learning. However, previous studies have overlooked the inherent conflict that arises when employing graph neural networks (GNNs) as encoders for node-level contrastive learning. This conflict pertains to the partial incongruity between the feature aggregation mechanism of graph neural networks and the embedding distinction characteristic of contrastive learning. Theoretically, to investigate the location and extent of the conflict, we analyze the participation of message passing from the gradient perspective of the InfoNCE loss. Different from contrastive learning in other domains, the conflict in GCL arises due to the presence of certain samples that contribute to the gradients of both positives and negatives simultaneously under the manner of message passing, which are opposite optimization directions. To further address the conflict issue, we propose a practical framework called ReGCL, which utilizes theoretical findings of GCL gradients to effectively improve graph contrastive learning. Specifically, two gradient-based strategies are devised, in terms of both message passing and the loss function, to mitigate the conflict. Firstly, a gradient-guided structure learning method is proposed in order to acquire a structure that is adapted to contrastive learning principles. Secondly, a gradient-weighted InfoNCE loss function is designed to reduce the impact of false negative samples with high probabilities, specifically from the standpoint of the graph encoder. Extensive experiments demonstrate the superiority of the proposed method in comparison to state-of-the-art baselines across various node classification benchmarks.

*Corresponding Author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Conflict arises between the message-passing mechanism and the contrastive loss function. (a) The feature of neighbors is aggregated to the target node through a message-passing schema, resulting in close proximity between them. On the other hand, (b) the contrastive loss function aims to push them far apart, including the neighbors.

1 Introduction
Inspired by recent advances in contrastive learning (CL) (Liu et al. 2021) in the fields of computer vision (CV) (Logeswaran and Lee 2018; He et al. 2020; Chen et al. 2020; Chuang et al. 2020) and natural language processing (NLP) (Oord, Li, and Vinyals 2018), graph contrastive learning (GCL) has emerged as a powerful self-supervised learning technique (Wu et al. 2021b; Liu et al. 2022b; Ji et al. 2023a; Liang et al. 2022). The combination of the expressive power of graph neural networks (GNNs) (Kipf and Welling 2017; Veličković et al. 2018) and the effective self-supervised
learning ability of contrastive learning has sparked significant interest in investigating various aspects of GCL, such as augmentation mechanisms (Yu et al. 2022; Zhang et al. 2023), negative sampling techniques (Xia et al. 2022), and contrastive loss functions (Liu et al. 2022a). Nevertheless, there is a noticeable gap in the literature that specifically focuses on the core problem of graph encoders in GCL. In this paper, we observe that GNNs and GCL present specific conflict issues.

Most existing works in GCL primarily employ graph neural networks as encoders (Zhu et al. 2020, 2021; Tong et al. 2021; Wang et al. 2022), similar to semi-supervised node classification. GNNs employ aggregation operators within the local neighborhood to collect features from neighboring nodes, leading to the generation of embeddings that exhibit higher similarity within the neighborhood (Kipf and Welling 2017). GCL then optimizes the model using a noise-contrastive estimation (NCE) based loss, such as the InfoNCE loss function (Oord, Li, and Vinyals 2018), which identifies each sample by contrasting the differences between the target node and its negatives, including its neighbors aggregated by the GNN encoder (Zhu et al. 2020). The aforementioned approach has demonstrated encouraging outcomes, thereby prompting a surge in research endeavors within this field (Zhu et al. 2020, 2021; Tong et al. 2021; Wang et al. 2022). However, most previous studies have overlooked the investigation of whether directly employing GNNs as encoders in GCL is in line with the fundamental principles of contrastive learning.

An overlooked challenge is the conflict issue between the message-passing paradigm and noise contrastive estimation in node-level contrastive learning, as illustrated in Figure 1. Different from contrastive learning methods employed in other domains, GCL incorporates the step of neighborhood aggregation before the application of the contrastive loss function. The conflict stems from the disparity between the two approaches. The message-passing paradigm in GNNs propagates information between neighborhoods, reducing the distances between adjacent nodes and thereby drawing them close to their neighbors. On the contrary, in accordance with the principle of InfoNCE, GCL employs a methodology where each node and its augmented version is considered a negative sample for all other samples. This effectively widens the distance between nodes in the latent space, enabling discrimination between samples. Consequently, a conflict arises between the feature aggregation of GNNs and the embedding distinction of GCL. Each node within the network undergoes partially contradictory optimization directions, as some nodes are encouraged by GNNs to move closer while simultaneously being repelled from each other by GCL.

To further investigate and address the conflict issue, a theoretical analysis is conducted from the perspective of gradients. Specifically, the effects of different samples (i.e., inter-view negative samples, intra-view negative samples, and positive samples) on both the positive and negative contributions to the gradients of GCL are explored.
It is concluded that the conflict arises due to the simultaneous involvement of certain samples' features (the neighbors of the target sample and the positive sample) in both the positive and negative gradients. To mitigate this conflict, we propose ReGCL, which consists of a gradient-guided structure learning method (GGSL) and a gradient-weighted InfoNCE loss function (GW-NCE). Specifically, a CL-adapted adjacency matrix is obtained by the gradient estimator of GGSL, weakening the conflicts brought by GNNs. The embeddings are subsequently fed into the gradient selector of GW-NCE in order to derive coefficients for the positive and negative samples within the InfoNCE loss function. The main contributions are summarized as follows:

• We study the partial conflict issue between GNNs and node-level GCL under a theoretical analysis of gradients, exploring the location and extent of the conflict. To the best of our knowledge, it is the first attempt to study the conflict issue from the perspective of gradients.
• Building upon the theoretical findings, we propose a solution named ReGCL, which aims to alleviate the conflict by incorporating gradient-guided structure learning and gradient-weighted InfoNCE.
• Extensive experimental results demonstrate the superior performance of ReGCL in comparison to multiple state-of-the-art baselines on node classification benchmarks.

2 Preliminary
Consider a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{v_i\}_{i=1}^{N}$ represents the set of nodes with a cardinality of $N$, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ denotes the set of edges. Let $A \in \{0, 1\}^{N \times N}$ denote the adjacency matrix and $X \in \mathbb{R}^{N \times F}$ be the feature matrix, where $F$ denotes the dimension of the features. Node-level graph contrastive learning methods first sample two augmentations $t \sim \mathcal{T}$ from a pool of augmentation functions $\mathcal{T}$. The augmentations generate two distinct views $\mathcal{G}_1$ and $\mathcal{G}_2$ of the original graph, where $\mathcal{G}_1 = (A_1, X_1)$ and $\mathcal{G}_2 = (A_2, X_2)$. GCL subsequently employs message-passing neural networks to obtain the embeddings of nodes. Here, we focus on the single-layer graph convolutional network (GCN). Consider a node $u_i \in \mathcal{G}_1$ as the target node:

$u_i = \Theta^{\top} \sum_{j \in \mathcal{N}_i \cup \{i\}} \frac{e_{ji}}{\sqrt{d_j d_i}} x_j, \quad i \in [1, N]$, (1)

where $d_i = 1 + \sum_{j \in [1,N]} A_{ji,1}$, $e_{ji}$ is the edge weight from source node $j$ to target node $i$, $\mathcal{N}_i$ is the set of neighbors of $u_i$ in $\mathcal{G}_1$, $x_i \in X_1$ is the input feature of the node $u_i$, and $u_i$ is the learned embedding.

After applying a projection function, graph contrastive learning seeks to identify the node $u_i$ using an InfoNCE-based loss. This loss function aims to keep the embeddings of the same node in different views $(u_i, v_i)$ close together (i.e., a positive pair), while simultaneously pushing other node pairs further apart (i.e., negative pairs):

$\mathcal{L}_i = -\log \frac{f(u_i, v_i)}{f(u_i, v_i) + \sum_{k \neq i} f(u_i, v_k) + \sum_{k \neq i} f(u_i, u_k)}$, (2)

where $f(\cdot, \cdot) = \exp(\mathrm{sim}(\cdot, \cdot)/\tau)$, $\mathrm{sim}(u_i, v_i) = \frac{u_i \cdot v_i}{\|u_i\| \, \|v_i\|}$ is the cosine similarity, and $\tau$ is the temperature.
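For concreteness, the following is a minimal PyTorch sketch of the InfoNCE objective of Eq. (2), computed over the full similarity matrices of the two views; the function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def info_nce(u: torch.Tensor, v: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE of Eq. (2). u, v: (N, d) embeddings of the two views."""
    u_n, v_n = F.normalize(u, dim=1), F.normalize(v, dim=1)
    inter = torch.exp(u_n @ v_n.t() / tau)  # f(u_i, v_k) for all pairs
    intra = torch.exp(u_n @ u_n.t() / tau)  # f(u_i, u_k) for all pairs
    # denominator: f(u_i, v_i) + sum_{k!=i} f(u_i, v_k) + sum_{k!=i} f(u_i, u_k)
    denom = inter.sum(dim=1) + intra.sum(dim=1) - intra.diagonal()
    return -torch.log(inter.diagonal() / denom).mean()
```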
3 Theoretical Analysis: Conflicts between GNN and GCL
The problem of conflict in node-level GCL arises when specific samples are simultaneously included in both the aggregation of positive and negative samples. Consequently, for a given target node $u_i$, the same other samples may have opposite impacts on the optimization. This conclusion is reached by conducting a gradient analysis in this section, illustrated in Figure 2, which showcases three distinct conflicts.

3.1 Gradient Analysis
In order to investigate the occurrence and magnitude of conflicts, a theoretical analysis is conducted on the gradient in graph contrastive learning. Unlike previous studies (Wang and Liu 2021; Wu et al. 2021a), we identify conflicts by analyzing the gradients with respect to the features $x$ (i.e., by taking the message passing into account). It is imperative to do so, as conflicts cannot be identified only by analyzing the gradients with respect to representations or similarities. Formally, for a target $u_i$, the gradient of graph contrastive learning w.r.t. $x_i$ is as follows:

$\frac{\partial \mathcal{L}_i}{\partial x_i} = \Phi \big( \underbrace{C(u_i, v_k)}_{\text{inter-view negatives}} + \underbrace{C(u_i, u_k)}_{\text{intra-view negatives}} + \underbrace{C(u_i, v_i)}_{\text{positives}} \big) \cdot c_i$, (3)

where $\Phi = W^{\top}\Theta^{\top}$ represents the parameters of the GCN and the projection function, and $c_i = \frac{I - \tilde{u}_i \tilde{u}_i^{\top}}{\tau \|u_i\| \sqrt{d_i d_i}} \Phi$ is a constant w.r.t. $x_i$. Specifically, $C(u_i, v_k)$ denotes the contribution of the inter-view negative pairs, $C(u_i, u_k)$ denotes the contribution of the intra-view negative pairs, and $C(u_i, v_i)$ denotes the contribution of the positive pair:

$C(u_i, v_k) = \sum_{k \neq i} \sum_{j \in \mathcal{N}_k \cup \{k\}} P(u_i, v_k) \frac{e_{jk}}{\sqrt{d_k d_j}} x_j$, (4)

$C(u_i, u_k) = \sum_{k \neq i} \sum_{j \in \mathcal{N}_k \cup \{k\}} P(u_i, u_k) \frac{e_{jk}}{\sqrt{d_k d_j}} x_j$, (5)

$C(u_i, v_i) = \sum_{j \in \mathcal{N}_i \cup \{i\}} (P(u_i, v_i) - 1) \frac{e_{ji}}{\sqrt{d_i d_j}} x_j$, (6)

where $P(i, j) = \mathrm{softmax}(f(i, j)) \in [0, 1]$ is the probability of $i$ being identified as $j$. The proof is in Appendix A. We have the following observations: (1) the gradient contributions of each sample $x_j$ in Eqs. (4)-(5) and in Eq. (6) point in opposite directions; (2) there are specific samples that simultaneously contribute to both the gradients of negatives and positives, resulting in conflicts between GNN and GCL.

Figure 2: Conflict identification. (a) Conflict on inter-view negative samples. The feature of a negative $x_k$ is aggregated into the embeddings of both itself $v_k$ and the positive sample $v_i$ in cases where the negative is also a neighbor of the positive sample. However, the actions of the contrastive loss on $v_k$ and $v_i$ are opposite, making $x_k$ play a contradictory role in the optimization process. (b) Conflict on intra-view negative samples. Part of the embedding of the intra-view negative sample $u_k$ involves the target node feature $x_i$ when the negative is adjacent to the target node, which leads to an opposite effect of $x_i$ on the optimization. (c) Conflict on positive samples. Similar to (a), the feature of the positive sample $x'_i$ also has the conflict issue.

3.2 Conflict Identification and Quantification
The emergence of conflicts is observed within specific samples that are engaged in both positive and negative gradients, as previously elucidated. To determine the location and nature of the conflict, we conduct a gradient analysis considering different types of conflicting samples, including inter-view negatives, intra-view negatives, and positives.

Conflict on Inter-View Negative Samples. Consider a set of inter-view negative samples $\mathcal{V}_n$ for the target node $u_i$, which can be divided into two disjoint subsets $\mathcal{V}_n = \mathcal{V}_n^+ \cup \mathcal{V}_n^-$ with $\mathcal{V}_n^+ \cap \mathcal{V}_n^- = \emptyset$. $\mathcal{V}_n^+$ represents the set of samples adjacent to the positive sample $v_i$, and $\mathcal{V}_n^-$ stands for the nodes not adjacent to $v_i$. Specifically, the conflict occurs in $\mathcal{V}_n^+$ because these samples not only participate in Eq. (4) but are also present in Eq. (6) as the neighbors of the positive sample.
Formally, for $v_k \in \mathcal{V}_n^+$, the conflict in which $v_k$ participates is measured by the weight coefficients of $x_k$:

$w(v_k, -) = \sum_{j \in \mathcal{N}_k \cup \{k\}} P(u_i, v_j) \frac{e_{kj}}{\sqrt{d_k d_j}}$, (7)

$w(v_k, +) = (P(u_i, v_i) - 1) \frac{e_{ki}}{\sqrt{d_i d_k}}$, (8)

where $w(v_k, -)$ is the weight of $x_k$ in the gradients of the inter-view negatives and $w(v_k, +)$ represents its weight in the positives. The directions of the above two weights are also opposite, which causes the conflict.

Conflict on Intra-View Negative Samples. Let $\mathcal{U}_n$ denote the collection of intra-view negative samples pertaining to the target node $u_i$. $\mathcal{U}_n$ can be divided into two disjoint subsets $\mathcal{U}_n = \mathcal{U}_n^+ \cup \mathcal{U}_n^-$ with $\mathcal{U}_n^+ \cap \mathcal{U}_n^- = \emptyset$, where $\mathcal{U}_n^+$ represents the set of samples that are adjacent to the target node $u_i$, while $\mathcal{U}_n^-$ refers to nodes that are not adjacent to $u_i$. Specifically, the conflict lies in $\mathcal{U}_n^+$ because the target node participates in the message passing of $\mathcal{U}_n^+$ in Eq. (5), which should not be included in the gradients of negatives. Formally, given the target node $u_i$, the conflict within $\mathcal{U}_n^+$ can be quantified by the weight of $x_i$ in the gradients of the intra-view negatives:

$w(u_i, -) = \sum_{j \in \mathcal{U}_n^+} P(u_i, u_j) \frac{e_{ij}}{\sqrt{d_i d_j}}$. (9)

Conflict on Positive Samples. Denote $v_i$ as the positive sample of the target node $u_i$. Similar to the inter-view negatives, the conflict on the positive sample is caused by $v_i$ participating in both the message passing of the negatives in Eq. (4) and of the positives in Eq. (6). Formally, the conflict of $v_i$ is as follows:

$w(v_i, -) = \sum_{j \in \mathcal{V}_n^+} P(u_i, u_j) \frac{e_{ij}}{\sqrt{d_i d_j}}$, (10)

$w(v_i, +) = (P(u_i, v_i) - 1) \frac{e_{ii}}{\sqrt{d_i d_i}}$, (11)

where $w(v_i, -)$ and $w(v_i, +)$ represent the conflict of $v_i$ in the gradients of the negatives and positives, respectively.
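To illustrate how such weights can be computed in practice, the sketch below estimates the intra-view conflict weight $w(u_i, -)$ of Eq. (9) for all nodes at once. It assumes a dense weighted adjacency matrix with a zero diagonal and one plausible reading of the normalization behind $P(i, j)$; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def intra_conflict_weight(u: torch.Tensor, adj: torch.Tensor,
                          tau: float = 0.5) -> torch.Tensor:
    """w(u_i, -) of Eq. (9) for every node i. adj: (N, N) edge weights
    e_ij with zero diagonal; nonzero entries define the neighbor sets."""
    deg = 1.0 + adj.sum(dim=0)                # d_i = 1 + sum_j A_ji
    z = F.normalize(u, dim=1)
    f = torch.exp((z @ z.t()) / tau)          # f(u_i, u_j) = exp(sim / tau)
    p = f / f.sum(dim=1, keepdim=True)        # assumed normalization of P(i, j)
    norm = adj / torch.sqrt(deg.unsqueeze(1) * deg.unsqueeze(0))
    return (p * norm).sum(dim=1)              # sum over adjacent negatives j
```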
Figure 3: ReGCL framework. (a) Gradient-guided structure learning (GGSL) learns a CL-adapted adjacency matrix $A'$ through higher-order structure enhancement and gradient estimation, leveraging the theoretical analysis of GCL gradients. (b) Gradient-weighted InfoNCE (GW-NCE) generates the coefficients for the positives and negatives in the contrastive loss.

4 Methodology
In this section, we present our proposed model, ReGCL, as depicted in Figure 3. To address the conflict issues between GNN and GCL, ReGCL comprises two primary components: (1) gradient-guided structure learning (GGSL), which weakens the impact of feature smoothing of latent negative samples in the message-passing stage, and (2) gradient-weighted InfoNCE (GW-NCE), which decreases the weight assigned to potential false negatives in the contrastive loss.

4.1 Gradient-guided Structure Learning
To improve message passing and enhance its adaptability to graph contrastive learning, we propose gradient-guided structure learning (GGSL) to learn new edges and weights, thereby mitigating the adverse effects of feature smoothing.

Gradient Estimation. As gradients are not accessible in the context of message passing, we propose using a gradient estimator prior to the graph encoder in order to acquire the aforementioned weights. To obtain an accurate gradient, we first employ a GNN$(A, X; \theta)$ as the gradient estimator to obtain the embeddings before feeding them into the encoder. In particular, the gradient estimator shares the same parameters as the graph encoder GNN$(A, X; \Theta)$, rather than being updated through back-propagation (i.e., $\theta \leftarrow \Theta$). Subsequently, we can calculate the estimated weights $\hat{w}$ as:

$\hat{u} = \mathrm{GNN}(A_1, X_1; \theta), \quad \hat{v} = \mathrm{GNN}(A_2, X_2; \theta)$, (12)

$\hat{w} = \Omega(\hat{u}, \hat{v}, A_1, A_2)$, (13)

where $\Omega(\cdot)$ represents the functions of Eqs. (7)-(11).

Gradient-guided Structure Learning. It is observed that three of the weights, $\{\hat{w}(v_k, +), \hat{w}(u_i, -), \hat{w}(v_i, -)\}$, are unsuitable for graph contrastive learning due to the impact of message passing. The absolute values of these measurements indicate the intensity of the conflicts. To mitigate such conflicts, one can rebuild the edge weights based on the given values. Since both the GNN and GCL objectives are valid in their own right, we aim for a trade-off solution. On one hand, it is necessary to attenuate all of their effects, resulting in a reduction of the corresponding edge weight in $\hat{w}$. Moreover, a higher value of $\hat{w}$ indicates a greater likelihood of being treated as a neighbor by the encoder rather than as a negative sample. Therefore, we propose to use an increasing function with a range from 0 to 1 to project the edge weights:

$A'_{ij} = \frac{1}{n_{ij}} \sum \frac{1}{1 + \exp(-\mathrm{sg}[\tilde{w}_{ij}])}$, (14)

where $A'_{ij}$ represents the edge weight from node $i$ to node $j$, $n_{ij}$ represents the total number of times the edge appears in $\{\hat{w}(v_k, +), \hat{w}(u_i, -), \hat{w}(v_i, -)\}$, and $\tilde{w}_{ij}$ is the normalization of $\hat{w}_{ij}$, the term that contains $e_{ij}$ in the sum of $\hat{w}$. $\mathrm{sg}[\cdot]$ denotes the stop-gradient operator, as the parameters are updated by copying the encoder. We discard the edges with low weights by a ratio of the original edges ($\delta\%$ times the number of edges $|\mathcal{E}|$).
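The sketch below illustrates the re-weighting of Eq. (14) followed by the $\delta\%$-budget sparsification. The (E, 3) layout used to gather each edge's occurrences in the three conflict-weight sets is our assumption about a possible implementation, not the released code.

```python
import torch

def ggsl_edge_weights(edge_w: torch.Tensor, mask: torch.Tensor,
                      num_keep: int):
    """edge_w: (E, 3) normalized conflict weights of each edge in the sets
    {w(v_k,+), w(u_i,-), w(v_i,-)}; mask: (E, 3) indicator of which entries
    exist, so n_ij = mask.sum(1). Returns A' and the indices of kept edges."""
    sig = torch.sigmoid(edge_w.detach()) * mask      # sg[.] via detach, Eq. (14)
    a_prime = sig.sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
    keep = torch.topk(a_prime, k=num_keep).indices   # delta% sparsity budget
    return a_prime, keep
```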
4.2 Gradient-weighted InfoNCE
In addition to adapting the GNN to GCL, it is necessary to appropriately adjust the penalty on negative samples based on the characteristics of GNNs, so as to reduce the impact of false negative samples. Therefore, we propose a gradient-weighted InfoNCE (GW-NCE) that incorporates the gradient weight w as a guiding factor for re-weighting the positive and negative samples within the original InfoNCE loss function.

Gradient Selection. In contrast to GGSL, the gradients here are utilized in the computation of the loss function. We thus propose a gradient selection mechanism to obtain the variable w. The procedure is similar to the gradient estimation in GGSL, with the exception that the graph encoder GNN(A, X; Θ) itself is employed as the representation learner to obtain the node embeddings u/v and the weights w.

Gradient-weighted InfoNCE. As stated previously, the conflict within the set {w(v_k, +), w(u_i, -), w(v_i, -)} arises from the message-passing of the GNN. This also suggests that a negative sample may in fact be a false negative (the likelihood increases with higher values of w). Thus, one can reduce the weight of a negative sample with a larger w. Furthermore, there are two additional weights, denoted as {w(v_k), w(v_i)}, where w(v_k) = w(v_k, -) + w(v_k, +) and w(v_i) = w(v_i, -) + w(v_i, +). These two weights illustrate the comparative scale of the conflict, which should correspond to the extent of the loss's impact. Specifically, we propose to use a decreasing function to project w into a weight within InfoNCE:

α_{ij} = \frac{1}{n_{ij}} \sum \frac{1}{1 + \exp(sg[\dot{w}_{ij}])},   (16)

where \dot{w}_{ij} includes the corresponding term in the weights {w(v_k, +), w(u_i, -), w(v_i, -), -w(v_k), -w(v_i)}. Finally, the GW-NCE is formed as follows:

L_i = -\log \frac{f^+_{ii}}{\alpha_{ii} f^+_{ii} + \sum_{k \neq i} \alpha_{ik} f^{inter}_{ik} + \sum_{k \neq i} \alpha_{ik} f^{intra}_{ik}},   (17)

where f^+_{ii} = f(u_i, v_i), f^{inter}_{ik} = f(u_i, v_k), and f^{intra}_{ik} = f(u_i, u_k). The detailed algorithm and complexity analysis of ReGCL can be found in Appendix B.
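A minimal PyTorch sketch of the GW-NCE objective in Eq. (17) follows. It assumes a temperature-scaled cosine similarity for f and, for brevity, a single coefficient matrix alpha standing in for the separately derived inter- and intra-view coefficients of Eq. (16):

```python
import torch
import torch.nn.functional as F

def gw_infonce(u, v, alpha, tau=0.5):
    """Sketch of the gradient-weighted InfoNCE in Eq. (17).

    u, v  : (n, d) node embeddings of the two augmented views
    alpha : (n, n) gradient-derived coefficients from Eq. (16)
    """
    u, v = F.normalize(u, dim=1), F.normalize(v, dim=1)
    f_inter = torch.exp(u @ v.t() / tau)   # f(u_i, v_k)
    f_intra = torch.exp(u @ u.t() / tau)   # f(u_i, u_k)
    pos = f_inter.diagonal()               # f+_ii = f(u_i, v_i)

    off = ~torch.eye(u.size(0), dtype=torch.bool, device=u.device)
    denom = (alpha.diagonal() * pos
             + (alpha * f_inter * off).sum(dim=1)
             + (alpha * f_intra * off).sum(dim=1))
    return (-torch.log(pos / denom)).mean()
```

Relative to the standard InfoNCE, the only change is the alpha coefficients: likely false negatives (large w) receive small alpha and therefore contribute less to the denominator.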
5 Experiments
In this section, we verify the effectiveness of the proposed ReGCL (code available at https://github.com/RingBDStack/ReGCL) by comparing it with the SOTA methods in graph learning.

5.1 Experimental Settings
Datasets. We evaluate the proposed ReGCL on five node classification datasets, covering citation networks, co-purchase networks, and co-authorship networks. Cora, Citeseer, and Pubmed are citation networks that are widely used as node classification benchmarks (Kipf and Welling 2017), Amazon Photo is the Amazon co-purchase network (Shchur et al. 2018), and Coauthor CS includes the co-authorships of the academic graph (Shchur et al. 2018).

Baselines. We compare ReGCL with representative graph learning methods: (1) semi-supervised GNNs: GCN (Kipf and Welling 2017) and GAT (Veličković et al. 2018); (2) unsupervised graph representation learning: DeepWalk (Perozzi, Al-Rfou, and Skiena 2014) and GAE (Kipf and Welling 2016); and (3) self-supervised graph contrastive learning: DGI (Velickovic et al. 2019), GRACE (Zhu et al. 2020), MVGRL (Hassani and Khasahmadi 2020), BGRL (Thakoor et al. 2021), GCA (Zhu et al. 2021), CCA-SSG (Zhang et al. 2021), and GRADE (Wang et al. 2022).

Evaluation Protocol. We adhere to the commonly employed evaluation procedure (Velickovic et al. 2019; Zhang et al. 2021): we first train the model using all nodes without labels and subsequently train an additional classifier on the fixed node embeddings. For the baseline results, we use the publicly reported numbers if their experimental setting is the same as ours; otherwise, we reproduce them with the authors' code. Please refer to Appendix C for more details on the dataset split and hyperparameter settings.

5.2 Comparison with State-of-the-art Methods
We report the experimental results in Table 1. It is observed that ReGCL demonstrates superior performance compared to the state-of-the-art baselines, including both supervised and unsupervised methods. Specifically, ReGCL achieves an enhancement of 5.1% on average. Compared to GRACE, which can be regarded as an ablation version of ReGCL, the observed improvements amount to a 2.9% increase, indicating the efficacy of the proposed gradient-guided structure learning (GGSL) and gradient-weighted InfoNCE (GW-NCE). Furthermore, in comparison to more powerful data/model augmentation (e.g., GRADE), our proposed ReGCL demonstrates its efficacy. The results consistently indicate that well-designed conflict mitigation mechanisms for the encoder lead to a higher level of performance. Compared to the baselines that employ alternative loss functions instead of InfoNCE (e.g., CCA-SSG), our proposed GW-NCE demonstrates greater competitiveness. Please consult the ablation study in the subsequent section for a more comprehensive analysis.

Methods | Input | Cora | Citeseer | Pubmed | Photo | CS
--- | --- | --- | --- | --- | --- | ---
Supervised GCN (Kipf and Welling 2017) | X,A,Y | 82.5±0.4 | 71.2±0.3 | 79.2±0.3 | 92.4±0.2 | 93.0±0.3
Supervised GAT (Veličković et al. 2018) | X,A,Y | 83.0±0.7 | 72.5±0.7 | 79.0±0.3 | 92.6±0.4 | 92.3±0.2
Raw Features (Velickovic et al. 2019) | X | 47.9±0.4 | 49.3±0.2 | 69.1±0.3 | 78.5±0.0 | 90.4±0.0
DeepWalk (Perozzi, Al-Rfou, and Skiena 2014) | A | 70.7±0.6 | 51.4±0.5 | 74.3±0.9 | 89.4±0.1 | 84.6±0.2
GAE (Kipf and Welling 2016) | X,A | 71.5±0.4 | 65.8±0.4 | 72.1±0.5 | 91.6±0.1 | 90.0±0.7
DGI (Velickovic et al. 2019) | X,A | 82.3±0.6 | 71.8±0.7 | 76.8±0.6 | 91.6±0.2 | 92.2±0.6
GRACE (Zhu et al. 2020) | X,A | 81.9±0.4 | 71.2±0.5 | 80.6±0.4 | 92.2±0.2 | 92.9±0.0
MVGRL (Hassani and Khasahmadi 2020) | X,A | 83.5±0.4 | 73.3±0.5 | 80.1±0.7 | 91.7±0.1 | 92.1±0.1
BGRL (Thakoor et al. 2021) | X,A | 81.7±0.5 | 72.1±0.5 | 80.2±0.4 | 92.6±0.3 | 93.0±0.2
GCA (Zhu et al. 2021) | X,A | 83.4±0.3 | 72.3±0.1 | 80.2±0.4 | 92.5±0.2 | 93.1±0.0
CCA-SSG (Zhang et al. 2021) | X,A | 84.2±0.4 | 73.1±0.3 | 81.6±0.4 | 93.1±0.1 | 93.3±0.2
GRADE (Wang et al. 2022) | X,A | 83.3±0.5 | 68.2±0.6 | 81.5±0.5 | 92.6±0.3 | 93.2±0.3
ReGCL (Ours) | X,A | 84.8±0.1 | 74.3±0.3 | 83.9±0.3 | 92.6±0.3 | 93.7±0.3

Table 1: Test accuracy (%±standard deviation) of the node classification task. (bold: best results; underlined: runner-ups.)

5.3 Ablation Study
To further investigate the effectiveness of each component of the proposed ReGCL, we conduct the ablation study with the following ReGCL variants:
1. ReGCL w/o GGSL: we discard the gradient-guided structure learning and directly input the graph G = {A, X} into the augmentation and encoder to obtain the node embeddings, while preserving the GW-NCE.
2. ReGCL w/o GW-NCE: we replace the gradient-weighted InfoNCE with the standard InfoNCE in Eq. (2). The GGSL is still used before the augmentation.
3. ReGCL w/o Both: we ablate both of the two main components of ReGCL (i.e., GGSL and GW-NCE), which is the same as the architecture of the GRACE model.
4. ReGCL w/o Higher-Order: for a finer ablation, we perform the gradient-guided structure learning without any higher-order structure enhancement (i.e., k = 1).
5. ReGCL w/o Re-wiring: to explore whether the validity should be attributed to the learned edges or their weights, we fix the edges as in the original graphs.
The results are in Figure 4, with the following observations.
Effect of GGSL. After excluding the proposed gradient-guided structure learning module (i.e., ReGCL w/o GGSL), the performance decreases by up to 1.1%. This drop demonstrates the impact of GGSL, which serves to alleviate the conflict in the GNN for graph contrastive learning. Furthermore, when comparing the outcomes in Table 1 with those of GRACE, there is still an observed improvement of 1.4% in the performance of ReGCL without the use of GW-NCE. The only difference between this variant and GRACE lies in the encoding (i.e., whether GGSL is utilized), which underscores the effectiveness of the proposed gradient-guided structure learning mechanism.

Effect of GW-NCE. In the absence of the GW-NCE, performance decreases by up to 1.7%. The results indicate that the proposed gradient-weighted approach effectively mitigates the conflict within the original InfoNCE loss function. Specifically, when compared to GRACE (ReGCL w/o Both), ReGCL still achieves a 1.0% improvement when removing GGSL. These two models vary solely in the design of the loss function, thereby demonstrating the superiority of the proposed gradient-weighted InfoNCE. Additionally, ReGCL w/o GGSL also outperforms CCA-SSG in the majority of cases. Therefore, a well-designed objective aimed at alleviating conflicts can enhance the effectiveness of contrastive learning on graphs.

In addition to conducting ablations on the main components of ReGCL, we also explore the effectiveness within GGSL for a more comprehensive analysis.

Effect of Higher-Order Structure Learning. We further assess the efficacy of higher-order structure enhancement by fixing the order at k = 1. ReGCL with higher-order neighbors yields a 2.0% improvement compared to the variant without them, illustrating that the higher-order structure enhancement benefits GGSL by mitigating the impact of first-order neighbors. Please refer to the subsequent section for further elaboration. We also observe that removing higher-order structure learning on the Photo and CS datasets leads to a significant decrease in effectiveness, even more so than removing both components, which highlights the importance of higher-order structure learning.

Effect of the CL-adapted matrix. The proposed GGSL has the capability to simultaneously learn new edges and re-weight existing ones. To determine the effectiveness of the former, we keep the structure of the graphs constant and only learn the edge weights (i.e., ReGCL w/o Re-wiring). The findings indicate that learning new edges contributes a 2.7% average improvement to GGSL. Similar to higher-order structure learning, re-wiring plays a significant role in both the Photo and CS datasets.

Figure 4: Ablation study of ReGCL.
Figure 5: Sensitivity of the number of orders k.
Figure 6: Sensitivity of the threshold δ.
5.4 Hyperparameter Sensitivity
The ReGCL framework has two crucial hyperparameters: the order of neighbors k, which determines the extent of message-passing, and the threshold δ, which decides the number of edges in the learned structure. The effects of the two hyperparameters are shown in Figures 5-6.

Number of Orders k. From Figure 5, we find that appropriately increasing the order helps the performance of ReGCL, while the first order and an overly high order both damage the effectiveness of the model. Specifically, an average 1.7% improvement is observed when increasing the value of k beyond the first order. However, given a larger k, the accuracy of the model declines, which suggests that the number of orders should not be too large due to the over-smoothing issue of GNNs. Note that, in contrast to higher-order or multi-hop models, the sparsity of the structure input into the encoder after GGSL remains similar to that of the original graph. This suggests that the effectiveness of GGSL can be attributed to the learned new edges and their weights, which are based on the higher-order neighbors, rather than to an increase in the number of edges. Therefore, it is crucial to select an optimal higher-order number for the dataset.

Threshold δ. The parameter δ denotes the ratio between the number of edges in the graph generated by GGSL and the number of edges in the original graph. We vary δ within the range of 0.5 to 1.5, representing a variation of 50% to 150% of the edges. We make the following observations based on the experimental results in Figure 6. Firstly, a structure that includes a larger number of edges is advantageous for GCL on most datasets; the performance is enhanced when there are more CL-adapted edges. Secondly, excessively increasing the number of edges is not beneficial, as it leads to a higher frequency of conflicts.

6 Related Work
Inspired by the powerful self-supervised learning ability in CV (Chen et al. 2020; Fang et al. 2023a,b) and NLP (Oord, Li, and Vinyals 2018; Fang et al. 2022), there are multiple studies on graph contrastive learning (GCL) (Wu et al. 2021b; Liu et al. 2022b; Ji et al. 2023b; Liang et al. 2023). DGI (Velickovic et al. 2019) maximizes the mutual information between local and global representations. MVGRL (Hassani and Khasahmadi 2020) uses graph diffusion to generate two distinct views of graphs for contrastive learning. GRACE (Zhu et al. 2020) proposes the use of the InfoNCE loss function (Oord, Li, and Vinyals 2018) on graphs. Inspired by the aforementioned works, several studies center on node-level GCL, such as BGRL (Thakoor et al. 2021), CCA-SSG (Zhang et al. 2021), and GRADE (Wang et al. 2022). Different from node-level GCL, GraphCL (You et al. 2020) focuses on graph-level tasks. The above GCL methods primarily use graph neural networks (GNNs) (Kipf and Welling 2017; Veličković et al. 2018) as encoders. Recently, there has been an increase in efforts to identify the underlying issues through augmentation mechanisms (Yu et al. 2022; Zhang et al. 2023) and negative sampling techniques (Xia et al. 2022). However, GCL still faces the conflict issue proposed in this paper.

7 Conclusion
We present ReGCL, a graph contrastive learning framework to mitigate the conflict issue between GNN and GCL. Theoretically, an analysis is performed on gradients to identify the specific locations and mechanisms of conflict occurrence.
Leveraging the theoretical findings, we design two gradient-based strategies. Gradient-guided structure learning enables the acquisition of a graph structure adapted to CL, thereby mitigating conflicts within the GNN. Gradient-weighted InfoNCE mitigates the occurrence of false negatives in the context of the GNN by integrating the coefficients derived from the gradients. ReGCL achieves the SOTA results.

Acknowledgements
We thank the anonymous reviewers for their insightful comments and suggestions. The corresponding author is Jianxin Li. The authors of this paper were supported by the NSFC through grant No.62225202.

References
Abu-El-Haija, S.; Perozzi, B.; Kapoor, A.; Alipourfard, N.; Lerman, K.; Harutyunyan, H.; Steeg, G. V.; and Galstyan, A. 2019. MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. In International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, 21–29.
Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 1597–1607.
Chuang, C.-Y.; Robinson, J.; Lin, Y.-C.; Torralba, A.; and Jegelka, S. 2020. Debiased contrastive learning. Advances in Neural Information Processing Systems, 33: 8765–8775.
Fang, X.; Liu, D.; Zhou, P.; and Hu, Y. 2022. Multi-modal cross-domain alignment network for video moment retrieval. IEEE Transactions on Multimedia.
Fang, X.; Liu, D.; Zhou, P.; and Nan, G. 2023a. You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2448–2460.
Fang, X.; Liu, D.; Zhou, P.; Xu, Z.; and Li, R. 2023b. Hierarchical local-global transformer for temporal sentence grounding. IEEE Transactions on Multimedia.
Hassani, K.; and Khasahmadi, A. H. 2020. Contrastive multi-view representation learning on graphs. In International Conference on Machine Learning, 4116–4126.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Computer Vision and Pattern Recognition, 9729–9738.
Ji, C.; Li, J.; Peng, H.; Wu, J.; Fu, X.; Sun, Q.; and Yu, P. S. 2023a. Unbiased and Efficient Self-Supervised Incremental Contrastive Learning. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 922–930.
Ji, C.; Zhao, T.; Sun, Q.; Fu, X.; and Li, J. 2023b. Higher-Order Memory Guided Temporal Random Walk for Dynamic Heterogeneous Network Embedding. Pattern Recognition, 109766.
Kipf, T. N.; and Welling, M. 2016. Variational graph autoencoders. arXiv preprint arXiv:1611.07308.
Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations.
Klicpera, J.; Bojchevski, A.; and Günnemann, S. 2019. Predict then Propagate: Graph Neural Networks meet Personalized PageRank. In International Conference on Learning Representations.
Liang, K.; Liu, Y.; Zhou, S.; Tu, W.; Wen, Y.; Yang, X.; Dong, X.; and Liu, X. 2023. Knowledge Graph Contrastive Learning Based on Relation-Symmetrical Structure. IEEE Transactions on Knowledge and Data Engineering, 1–12.
Liang, K.; Meng, L.; Liu, M.; Liu, Y.; Tu, W.; Wang, S.; Zhou, S.; Liu, X.; and Sun, F. 2022. Reasoning over different types of knowledge graphs: Static, temporal and multi-modal. arXiv preprint arXiv:2212.05767.
Liu, N.; Wang, X.; Bo, D.; Shi, C.; and Pei, J. 2022a. Revisiting graph contrastive learning from the perspective of graph spectrum. Advances in Neural Information Processing Systems, 35: 2972–2983.
Liu, X.; Zhang, F.; Hou, Z.; Mian, L.; Wang, Z.; Zhang, J.; and Tang, J. 2021. Self-supervised learning: Generative or contrastive. IEEE Transactions on Knowledge and Data Engineering, 35(1): 857–876.
Liu, Y.; Jin, M.; Pan, S.; Zhou, C.; Zheng, Y.; Xia, F.; and Philip, S. Y. 2022b. Graph self-supervised learning: A survey. IEEE Transactions on Knowledge and Data Engineering, 35(6): 5879–5900.
Logeswaran, L.; and Lee, H. 2018. An efficient framework for learning sentence representations. In International Conference on Learning Representations.
Oord, A. v. d.; Li, Y.; and Vinyals, O. 2018. Representation learning with contrastive predictive coding. arXiv:1807.03748.
Perozzi, B.; Al-Rfou, R.; and Skiena, S. 2014. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 701–710.
Shchur, O.; Mumme, M.; Bojchevski, A.; and Günnemann, S. 2018. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868.
Thakoor, S.; Tallec, C.; Azar, M. G.; Munos, R.; Veličković, P.; and Valko, M. 2021. Bootstrapped representation learning on graphs. In International Conference on Learning Representations 2021 Workshop on Geometrical and Topological Representation Learning.
Tong, Z.; Liang, Y.; Ding, H.; Dai, Y.; Li, X.; and Wang, C. 2021. Directed graph contrastive learning. Advances in Neural Information Processing Systems, 34: 19580–19593.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph Attention Networks. In International Conference on Learning Representations.
Velickovic, P.; Fedus, W.; Hamilton, W. L.; Liò, P.; Bengio, Y.; and Hjelm, R. D. 2019. Deep Graph Infomax. In International Conference on Learning Representations.
Wang, F.; and Liu, H. 2021. Understanding the Behaviour of Contrastive Loss. In IEEE Conference on Computer Vision and Pattern Recognition, 2495–2504.
Wang, R.; Wang, X.; Shi, C.; and Song, L. 2022. Uncovering the Structural Fairness in Graph Contrastive Learning. Advances in Neural Information Processing Systems, 35: 32465–32473.
Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; and Xie, X. 2021a. Self-supervised Graph Learning for Recommendation. In SIGIR Conference on Research and Development in Information Retrieval, 726–735.
Wu, L.; Lin, H.; Tan, C.; Gao, Z.; and Li, S. Z. 2021b. Self-supervised learning on graphs: Contrastive, generative, or predictive. IEEE Transactions on Knowledge and Data Engineering.
Xia, J.; Wu, L.; Wang, G.; Chen, J.; and Li, S. Z. 2022. ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning. In International Conference on Machine Learning, 24332–24346.
You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; and Shen, Y. 2020. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems, 33: 5812–5823.
Yu, J.; Yin, H.; Xia, X.; Chen, T.; Cui, L.; and Nguyen, Q. V. H. 2022. Are graph augmentations necessary? Simple graph contrastive learning for recommendation. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, 1294–1303.
Zhang, H.; Wu, Q.; Yan, J.; Wipf, D.; and Yu, P. S. 2021. From canonical correlation analysis to self-supervised graph neural networks. Advances in Neural Information Processing Systems, 34: 76–89.
Zhang, Y.; Zhu, H.; Song, Z.; Koniusz, P.; and King, I. 2023. Spectral feature augmentation for graph contrastive learning and beyond. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 11289–11297.
Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2020. Deep graph contrastive representation learning. arXiv preprint arXiv:2006.04131.
Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2021. Graph contrastive learning with adaptive augmentation. In Proceedings of the Web Conference, 2069–2080.
D3: A Methodological Exploration of Domain Division, Modeling, and Balance in Multi-Domain Recommendations
Pengyue Jia1, Yichao Wang2, Shanru Lin1, Xiaopeng Li1, Xiangyu Zhao1*, Huifeng Guo2, Ruiming Tang2*
1City University of Hong Kong
2Huawei Noah's Ark Lab
{jia.pengyue,xiaopli2-c}@my.cityu.edu.hk, [email protected], [email protected], {wangyichao5,huifeng.guo,tangruiming}@huawei.com
*Corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
To enhance the efficacy of multi-scenario services in industrial recommendation systems, multi-domain recommendation has become prominent; it entails simultaneous modeling of all domains through a unified model, effectively capturing commonalities and differences among them. However, current methods rely on manual domain partitioning, which overlooks the intricate domain relationships and the heterogeneity of different domains during joint optimization, hindering the integration of domain commonalities and differences. To address these challenges, this paper proposes a universal and flexible framework, D3, aimed at optimizing the multi-domain recommendation pipeline from three key aspects. Firstly, an attention-based domain adaptation module is introduced to automatically identify and incorporate domain sensitive features during training. Secondly, we propose a fusion gate module that enables the seamless integration of commonalities and diversities among domains, allowing for implicit characterization of intricate domain relationships. Lastly, we tackle the issue of joint optimization by deriving loss weights from two complementary viewpoints, domain complexity and domain specificity, alleviating inconsistencies among different domains during the training phase. Experiments on three public datasets demonstrate the effectiveness and superiority of our proposed framework. In addition, D3 has been implemented on a real-life, high-traffic internet platform catering to millions of users daily.

Introduction
To cater to diverse user interests and business needs, modern recommendation systems are designed to handle multiple scenarios concurrently (Wang et al. 2023b), such as the homepage and the item detail page on e-commerce platforms. Data from these scenarios exhibit both commonalities and diversities. On one hand, users and items overlap across different scenarios, resulting in similar data distributions. On the other hand, users exhibit inconsistent behavioral patterns when facing different scenarios, leading to distinct data distributions. Traditional approaches can be categorized into two types (Sheng et al. 2021): (1) constructing separate models for each scenario, significantly increasing maintenance and training costs, and (2) simply training a single model using data from all scenarios, thereby failing to capture the commonalities and diversities and resulting in a notable loss of effectiveness. To address these challenges, multi-domain recommendation (MDR) has been proposed and has garnered significant attention. MDR offers a solution to reduce maintenance and training costs by employing a unified model while effectively handling domain adaptation through specifically designed network structures.

Figure 1: Three aspects of domain adaptation: domain division (feature-level, domain sensitive feature selection), domain modeling (modeling-level, fusion of commonalities and diversities), and domain balance (optimization-level, loss adaptation).
As depicted in Figure 1, three crucial aspects should be considered in domain adaptation:
• Domain Division. Domain division significantly influences the data distribution across different domains, thus impacting the efficacy of MDR modeling. Existing research typically uses business scenario IDs as a direct means of dividing domains, without considering more nuanced approaches (Sheng et al. 2021; Jiang et al. 2022; Shen et al. 2021). Alternatively, some studies (Zhang et al. 2022a; Chang et al. 2023) employ manually selected domain sensitive features for domain division. However, these approaches require high experiential expertise and lack dynamic updating mechanisms to adapt to novel data.
• Domain Modeling. Capturing commonalities and diversities across domains presents the core challenge in domain modeling. Some works adopt the shared-specific network paradigm (Sheng et al. 2021; Jiang et al. 2022; Shen et al. 2021), where shared networks capture commonalities, while different domains possess independent specific structures to capture their respective diversities. Another approach utilizes the dynamic weight paradigm (Zhang et al. 2022a; Chang et al. 2023; Li et al. 2023b), where weights generated from domain sensitive features are directly applied to the backbone network. While these methodologies have achieved promising results, they overlook modeling the interconnections between domains and the intricate mechanism of integrating commonalities and diversities.
• Domain Balance. During the training process, the difficulty and progress of training differ across different domains, and this inconsistency greatly hinders achieving the optimal state of joint optimization. Presently, specific research on domain adaptation in the joint optimization process of MDR is lacking. Although some efforts in multi-task learning (Wang et al. 2023a; Liu et al. 2023; Li et al. 2023a) offer reference value (Chen et al. 2018; Liu, Johns, and Davison 2019; Guo et al. 2018; Kendall, Gal, and Cipolla 2018), they have not directly addressed the complexity and specificity of domains due to different research settings.
To tackle the aforementioned challenges related to multi-domain recommendation, we present a unified framework, D3, focusing on three crucial aspects of domain adaptation in multi-domain recommendation: Domain Division, Domain Modeling, and Domain Balance. Specifically, we introduce three key components in our proposed framework. First, a domain sensitive feature selection (DSFS) module is designed based on the attention mechanism to automatically select domain sensitive features and perform domain division accordingly. Second, a domain fusion (DF) module generates fusion weights for shared and specific parts, implicitly capturing complex relationships among multiple domains. Third, a domain balance optimization (DBO) module calculates the loss weights of each sample based on the domain's complexity and specificity, effectively addressing the inconsistency in the joint optimization process. Experimental evaluations are performed on three public datasets, demonstrating the consistent improvement of our proposed framework across multiple backbones. Comparative experiments with other similar methods further showcase the superiority of our approach. Importantly, this framework is designed as a plug-and-play plugin, offering high extensibility and convenience.
The key contributions of our work can be summarized as follows:
• We present a generic and easily applicable plug-in for domain adaptation in multi-domain recommendation. To the best of our knowledge, this is the first work that jointly considers domain division, domain modeling, and domain balance in multi-domain recommendation, making it a novel contribution to the field.
• Our framework includes a domain sensitive feature selection module for domain division, and a domain fusion module to integrate shared and specific parts and implicitly capture complex relationships between domains. Additionally, we introduce a domain balance optimization method to alleviate training inconsistency across domains during the joint optimization.
• Evaluation experiments conducted on three public datasets demonstrate the effectiveness of our proposed method. Moreover, D3 has been deployed on a real-world, large-scale internet platform, serving millions of users daily. These results highlight the practicality and scalability of our approach.

Preliminaries
Problem Definition
Traditional click-through rate (CTR) prediction models take x, including user features, item features, and context features, as inputs and predict the probability ŷ of the user clicking on the item. The process can be formalized as ŷ = f(x). In MDR, a unified model is trained to serve multiple scenarios simultaneously. We distinguish between the meanings of scenario and domain in this paper for ease of understanding:

Definition 1 (Scenario). Let S denote the set of scenarios. Scenarios are the criterion for partitioning when evaluating model performance, such as the commonly used slotID on commercial advertising platforms.

Definition 2 (Domain Sensitive Features). F denotes all features in the model inputs x, and DF denotes domain sensitive features, where DF ⊆ F. Domain sensitive features are selected for domain division.

Definition 3 (Domain). Let D denote the set of domains. Domains are the criterion for partitioning in the modeling process and are divided based on the domain sensitive features DF. Domains can be equal to scenarios or more complicated than scenarios. For example, if only the scenario IDs (Sheng et al. 2021) used for model evaluation are selected as domain sensitive features, the division between domain and scenario remains consistent. If more domain sensitive features are chosen for domain partitioning, the domain will become far more complex than the scenario (Zhang et al. 2022a).

With the above definitions, multi-domain CTR estimation can be represented as the following equation:

\hat{y}_i = f(x_i, df_i),   (1)

where ŷ_i is the predicted CTR of the ith sample, x_i is the ith model input, and df_i denotes the domain sensitive features of the ith sample. Please note that in this paper, the domain sensitive features vary for different data samples, whereas in previous studies, the domain sensitive features df remain consistent across all data samples.

Methodology
In this section, we detail the architecture of our proposed framework. We first give a framework overview and introduce the backbone network; the three framework modules are then demonstrated in turn, and the optimization objective is illustrated at the end of this section.

Framework Overview
Figure 2 showcases the framework's overall architecture. There are three modules proposed in this paper: the domain sensitive feature selection module, the domain fusion module, and the domain balance optimization module.
The domain sensitive feature selection module adaptively selects domain sensitive features for different data samples and generates a weight matrix containing domain information, which is then utilized in the backbone network. The domain fusion module aims to capture implicit correlations between divergent domains, assigning weights to the shared and specific parts in the fusion process. The domain balance optimization module utilizes the attention matrix and fusion weights from the first two modules to generate loss weights based on domain complexity and specificity, alleviating inconsistencies during optimization.

Figure 2: Framework Architecture.

Backbone Network
To ensure the universality of our framework, we adopt a simple backbone network structure consisting of two main parts: the embedding layer and the transformation layer.

Embedding Layer. The embedding layer is a commonly used component in recommender systems. It improves the stability and efficiency of network operation by discretizing input features into high-dimensional sparse vectors and mapping them to low-dimensional dense vectors:

f'_j = onehot(f_j),   (2)

e_j = M · f'_j,   (3)

x = concat(e_1 | e_2 | ... | e_n),   (4)

where f_j is the jth feature of F, f'_j is the discretized vector, e_j is the embedding of the jth feature, and x is the model input. M is the trainable embedding matrix.

Transformation Layer. The transformation layer consists of a feed-forward network enhancing the expressive ability and a sigmoid function mapping the output value to a CTR. The procedure is formalized as below:

\hat{y} = sigmoid(σ(x · W^1_{tr} + b^1_{tr}) · W^2_{tr} + b^2_{tr}),   (5)

where ŷ is the predicted CTR, σ is the activation function, and x is the model input.

Domain Sensitive Feature Selection Module
Domain sensitive features are essential in multi-domain recommendations because they dominate how domains are divided. Former works mainly depend on experiential knowledge to select these features, which requires high expertise and substantial labor cost. Furthermore, preset feature combinations cannot be dynamically updated to adapt to the latest data, which is important in modern recommendation systems. To address the above challenges, we design the domain sensitive feature selection module (DSFS). This module selects domain sensitive features dynamically at the instance level through end-to-end training. These features are then utilized to generate the weight and bias for the backbone network, introducing domain information. To select the domain sensitive features adaptively, we use the following attention mechanism (Fu et al. 2019) at the feature level, where the selection is processed by multiplying the attention matrix with all model inputs:

Q, K, V = W_Q · x, W_K · x, W_V · x,   (6)

A = softmax(Q · K^⊤),   (7)

x' = A · x + x,   (8)

where Q, K, and V are the query, key, and value, and W_Q, W_K, and W_V are the corresponding weights. A is the attention matrix, and it is multiplied with the model inputs x to perform feature selection. A residual connection is further applied to generate the attention mechanism's output x'.
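A minimal PyTorch sketch of the feature-level attention in Eqs. (6)-(8) is given below; the class name and tensor shapes are illustrative assumptions. Note that Eq. (8) multiplies the attention matrix A with the raw inputs x, even though V is defined in Eq. (6):

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Sketch of the feature-level attention in Eqs. (6)-(8)."""

    def __init__(self, dim):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)  # W_Q
        self.W_k = nn.Linear(dim, dim, bias=False)  # W_K
        self.W_v = nn.Linear(dim, dim, bias=False)  # W_V

    def forward(self, x):  # x: (batch, n, dim), one row per feature field
        Q, K, V = self.W_q(x), self.W_k(x), self.W_v(x)
        # Eq. (7): attention matrix over the n feature fields
        A = torch.softmax(Q @ K.transpose(-2, -1), dim=-1)
        # Eq. (8): selection by multiplying A with the inputs x
        # (V comes from Eq. (6) but is not used in Eq. (8)),
        # plus a residual connection.
        x_prime = A @ x + x
        return x_prime, A  # A later feeds the entropy in Eq. (15)
```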
The remaining parts are designed to capture diversities based on the domain divided by the selected domain sensitive features. For ease of understanding, we only describe the process of weight generation in this subsection; the generation process of the bias is the same as that of the weight. Equation (9) applies an independent linear transformation to each feature to reduce mutual influence. To generate finer-grained representations related to domain specific information, we choose nonlinear transformations and residual connections to learn domain diversities, as shown in Equation (10):

H = concat(FC_1(x'_1) | FC_2(x'_2) | ... | FC_n(x'_n)),   (9)

W_{spec} = σ(H + σ(H · W^1_{dfs} + b^1_{dfs})) · W^2_{dfs} + b^2_{dfs},   (10)

where x'_j is the jth feature representation of the transformed inputs x'. W_{dfs} and b_{dfs} are the parameters of the linear transformations, σ is the activation function, and W_{spec} denotes the specific weight matrix.

Domain Fusion Module
To implicitly model the complex relationships between domains overlooked in previous work (Sheng et al. 2021; Zhang et al. 2022a; Chang et al. 2023), we introduce a gate mechanism to fuse the shared and specific information from a finer-grained perspective. Traditional fusion methods (Sheng et al. 2021) typically aggregate shared and specific information through simple addition or multiplication. However, we argue that different domains may overlap with the shared information to varying degrees. Specifically, shared information tends to be closer to major scenarios, and simply fusing shared and specific information will impair the performance of the other scenarios. Therefore, dynamic fusion weights should be used to incorporate shared and specific information, implicitly modeling the specificity of the current domain alongside its relationship with other domains. To achieve the above objectives, we propose the domain fusion module (DF) to derive a vector of length two that learns the proportional relationship between the shared part and the specific part of the current domain in fusion. The process is shown below:

v = MLP(x'),   (11)

g_i = \frac{\exp(v_i)}{\sum_{j=0}^{1} \exp(v_j)}, \quad g = [g_{sp}, g_{sh}],   (12)

where g is the gate vector, g_{sp} is the gate value for the specific part, and g_{sh} is the gate value for the shared part. x' ∈ R^{n×dim} is the attention mechanism's output. The output of the DSFS represents the specific part, and we introduce a randomly initialized matrix W_{glob} that represents the shared part. They are fused according to the weight vector g, indicating the different proportions of the shared and specific information involved in the current domain:

W = (g_{sp} · W_{spec}) ⊗ (g_{sh} · W_{glob}),   (13)

where W_{spec} is the output of the DSFS and W_{glob} is the global weight matrix. W is the weight matrix after fusion, g_{sp} and g_{sh} are the gate scalars, and ⊗ is the element-wise multiplication. To integrate domain information into the backbone, the fused weight W and bias b are employed in the transformation layer of the backbone model introduced above. The process can be formulated as follows:

\hat{y} = sigmoid(σ(x · W + b) · W^2_{tr} + b^2_{tr}).   (14)
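The gate of Eqs. (11)-(13) can be sketched in PyTorch as follows, treating the specific and shared weights as flattened vectors for brevity; all names, shapes, and the hidden size are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DomainFusion(nn.Module):
    """Sketch of the fusion gate in Eqs. (11)-(13)."""

    def __init__(self, in_dim, weight_dim, hidden=64):
        super().__init__()
        # Eq. (11): an MLP producing a length-2 vector from x'
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 2))
        # Randomly initialized shared weight W_glob
        self.W_glob = nn.Parameter(torch.randn(weight_dim))

    def forward(self, x_prime, W_spec):
        # x_prime: (batch, in_dim) flattened attention output
        # W_spec:  (batch, weight_dim) specific weights from DSFS
        g = torch.softmax(self.mlp(x_prime), dim=-1)   # Eq. (12)
        g_sp, g_sh = g[:, 0:1], g[:, 1:2]
        # Eq. (13): element-wise fusion of specific and shared parts
        W = (g_sp * W_spec) * (g_sh * self.W_glob)
        return W, g_sp  # g_sp feeds the specificity weight in Eq. (18)
```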
Domain Balance Optimization Module
Differences in data volume and data distribution across domains lead to inconsistency in the training process, reflected in differences in training difficulty and training progress. In the domain balance optimization module (DBO), we model this inconsistency from two perspectives: domain complexity and domain specificity. These two parts are associated with the domain sensitive feature selection module and the domain fusion module, respectively.

Domain Complexity. The complexity of a domain is determined by the selected domain sensitive features; a more complex domain is usually more challenging to train. We utilize the entropy of the attention matrix of each data sample in the domain sensitive feature selection module to express the degree of intricacy of domains. A higher entropy of the attention matrix means the present domain focuses on more domain sensitive features, effectively characterizing the complexity. We derive the degree of domain complexity c with the attention matrix A in DSFS:

c_i = \frac{1}{n} \sum_{m=1}^{n} \sum_{k=1}^{n} A_{i,m,k} \log(A_{i,m,k}),   (15)

where A_{i,m,k} denotes the element of the attention matrix in row m, column k for the ith data sample. To achieve a discriminative weight distribution, we first normalize the entropies and then clip the weights to enhance training stability. The weights corresponding to domain complexity are derived as follows:

c' = F\left(\frac{c - \bar{c}}{2 \sigma_c} + \frac{1}{2}\right), \quad F(x) = \begin{cases} l, & x < l \\ x, & l \le x \le u \\ u, & x > u \end{cases}   (16)

w^{cpl} = \lambda_1 + \alpha_1 \cdot c',   (17)

where c' denotes the vector of normalized entropies, \bar{c} is the mean value, σ_c is the standard deviation of c, l and u are the lower and upper bounds for the output of F(x), λ_1 and α_1 are the shift and scale hyperparameters, and w^{cpl} is the loss weight related to domain complexity.

Domain Specificity. Domain specificity expresses the degree of irrelevance between the current domain and the shared information. Domains with a higher specificity often possess less data, requiring more attention during training. According to the domain fusion module, g_{sp} and g_{sh} represent the ratios of the specific and shared parts during fusion. We argue that a higher g_{sp} indicates less overlap with the shared information, so there is a necessity to emphasize the specificity of these data:

w^{spf} = \lambda_2 + \alpha_2 \cdot g_{sp},   (18)

where w^{spf} is the loss weight related to domain specificity, λ_2 and α_2 are the shift and scale hyperparameters tuning the range of weights, and g_{sp} is the gate scalar for the specific part in the fusion module.

Optimization
We regard the entire task as a binary classification task, utilizing the following formula as the optimization objective:

L_{CTR} = w^{cpl}_i \cdot w^{spf}_i \cdot [-(y_i \cdot \log(\hat{y}_i) + (1 - y_i) \cdot \log(1 - \hat{y}_i))],   (19)

which is a weighted cross-entropy loss. y and ŷ are the ground truth and the predicted CTR, and the loss is weighted by the weights related to domain complexity, w^{cpl}, and domain specificity, w^{spf}.
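Putting the DBO weights and the objective together, a simplified PyTorch sketch of Eqs. (15)-(19) might look as follows; the default hyperparameter values are placeholders rather than the tuned settings (the searched ranges are given in the Implementation Details):

```python
import torch

def d3_loss(y, y_hat, A, g_sp,
            lam1=0.1, a1=0.5, lam2=0.1, a2=0.5, l=0.1, u=1.0):
    """Sketch of Eqs. (15)-(19). A: (batch, n, n) attention matrices."""
    # Eq. (15): per-sample entropy term of the attention matrix
    c = (A * torch.log(A.clamp(min=1e-12))).sum(dim=(1, 2)) / A.size(-1)
    # Eq. (16): normalize over the batch, then clip to [l, u]
    c_prime = ((c - c.mean()) / (2 * c.std()) + 0.5).clamp(l, u)
    w_cpl = lam1 + a1 * c_prime                  # Eq. (17)
    w_spf = lam2 + a2 * g_sp.squeeze(-1)         # Eq. (18)
    # Eq. (19): weighted binary cross entropy
    bce = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat))
    return (w_cpl * w_spf * bce).mean()
```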
Experiments
In this section, we answer the following research questions with a series of experiments. RQ1: how does the proposed structure perform with different backbone networks? RQ2: how do the components perform compared to other state-of-the-art methods? RQ3: what are the specific effects of each component?

Experimental Settings
Dataset. We conduct experiments on three public datasets: Aliccp (Ma et al. 2018b), Movielens-1M, and ADC (Zhou et al. 2018). Aliccp has 3 scenarios divided by the categorical expression of the goods position feature. For Movielens-1M, we use the age feature to divide the whole dataset into three different domains. ADC has 2 scenarios according to the ads scenario. We utilize Aliccp's standard partitioning ratio of 5:5 for dividing the dataset into training and testing sets. Moreover, an 8:2 split ratio is adopted for splitting the training and testing sets in Movielens-1M and ADC.

Backbone Models and Compared Methods. To validate the efficacy of our proposed framework, we conduct experiments on two fronts: 1) we examine the compatibility of our framework by incorporating it into various backbone models; 2) we compare the components of our framework to other available optional methods within the same backbone model to demonstrate its superiority. We select the following backbone models for the first experiment: Shared-Bottom, MMOE (Ma et al. 2018a), M2M (Zhang et al. 2022a), ADI (Jiang et al. 2022), STAR (Sheng et al. 2021), and SAR-Net (Shen et al. 2021). For the second experiment, we select methods with functionalities similar to our components: M2M-WG (the weight generation method in M2M) and PEPNet-WG (the weight generation method in PEPNet (Chang et al. 2023)) for weight generation; and DWA (Liu, Johns, and Davison 2019) and DT (Guo et al. 2018) for loss adaptation.

Evaluation Metrics. We assess the performance of models with the AUC (Cheng et al. 2016; Guo et al. 2017) and Logloss metrics in CTR prediction. According to previous studies (Lian et al. 2018; Wang et al. 2021; Song et al. 2019), even a small numerical improvement of 0.001 in AUC can produce significant positive benefits online.

Implementation Details. In the training phase, we use the AdamW (Loshchilov and Hutter 2017) optimizer with β1 = 0.9, β2 = 0.999, and ϵ = 1 × 10^{-8}. The learning rate is set to 0.001, the batch size to 2048, and the embedding size dim to 16. ReLU is chosen as the activation function. We set the lower bound l as 0.1 and the upper bound u as 1. We tune λ1, λ2, α1, and α2 from {0, 1e-1, ..., 1}, and the ratio of introducing loss adaptation during training from {0, 0.25, 0.5, 0.75}.

Overall Performance
Compatibility with different backbone models (RQ1). In this subsection, we answer RQ1 by comparing the performance of different backbone models with and without our proposed framework. For Shared-Bottom and MMOE, we replace their towers with a feed-forward network equipped with D3. For M2M and ADI, we replace their modules related to learning scenario knowledge (i.e., meta unit, domain-specific networks, and shared networks) with a feed-forward network equipped with D3. STAR and SAR-Net incorporate the partial components we proposed (i.e., the domain fusion module and the domain balance optimization method related to domain specificity). According to Table 1, we can observe the following: 1) Incorporating our framework, all backbones demonstrate substantial performance improvements on the public datasets. This highlights the effectiveness of our framework in terms of domain sensitive feature selection, integration of commonality and diversity, and alleviation of domain inconsistency during the training stage. Additionally, it underscores the flexibility and universality of our framework, which can be directly applied to most backbone models to enhance their performance. 2) For scenarios with limited data, such as Scenario 2 in the Aliccp dataset, the benefits from our proposed framework are more pronounced compared to other scenarios, resulting in greater performance improvements.
This can be attributed to (i) the more granular exploration of the domain through the domain sensitive feature selection module and the domain fusion module, and (ii) the domain balance optimization module emphasizing data samples with high domain complexity and specificity, alleviating the data sparsity problem.

Overall performance against different weight generation and loss adaptation methods (RQ2). This subsection answers RQ2 by comparing our proposed components to other weight generation (M2M-WG, PEPNet-WG) and loss adaptation (DWA, DT) methods. In Table 2, we take ADI as the backbone model (BM). BM+D2 is the BM with the domain sensitive feature selection module and the domain fusion module, and BM+D3 is the BM with all our proposed components.

Weight Generation. In the weight generation aspect, the backbone model equipped with the domain sensitive feature selection module and domain fusion module outperforms BM+M2M-WG and BM+PEPNet-WG. There are two reasons: (1) the attention mechanism is utilized to automatically select domain sensitive features in our framework, thus avoiding the bias of manually selecting features (i.e., missing informative features or selecting ineffective features), and (2) the proposed gate module implicitly captures the relationships between different domains by adaptively fusing the shared and specific parts with discriminative weights.

Loss Adaptation. In the loss adaptation aspect, BM+D3 is superior to BM+D2+DWA and BM+D2+DT. There are three reasons: (1) our method considers both domain complexity and domain specificity, mitigating training inconsistencies in joint modeling from more dimensions and perspectives that are more in line with multi-scenario modeling settings; (2) previous studies do not focus on the task's attributes but derive loss weights based on the magnitude of loss and metric values; (3) the method we propose operates at the domain level, and compared to other scenario-level methods, it focuses on finer-grained domain differences.
Dataset | Metric | Scenario | Shared-Bottom w/o | w | MMOE w/o | w | M2M w/o | w | ADI w/o | w | STAR w/o | w* | SAR-Net w/o | w*
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Aliccp | AUC | S#1 | 0.6234 | 0.6237 | 0.6236 | 0.6246 | 0.6223 | 0.6240 | 0.6214 | 0.6261 | 0.6222 | 0.6233 | 0.6237 | 0.6248
Aliccp | AUC | S#2 | 0.6006 | 0.6011 | 0.5921 | 0.6021 | 0.5923 | 0.5975 | 0.5995 | 0.6029 | 0.5980 | 0.6005 | 0.5905 | 0.5942
Aliccp | AUC | S#3 | 0.6180 | 0.6211 | 0.6185 | 0.6211 | 0.6176 | 0.6206 | 0.6175 | 0.6230 | 0.6180 | 0.6191 | 0.6196 | 0.6210
Aliccp | Logloss | S#1 | 0.1652 | 0.1649 | 0.1656 | 0.1654 | 0.1651 | 0.1650 | 0.1655 | 0.1651 | 0.1653 | 0.1651 | 0.1653 | 0.1650
Aliccp | Logloss | S#2 | 0.1785 | 0.1781 | 0.1792 | 0.1790 | 0.1787 | 0.1786 | 0.1789 | 0.1784 | 0.1793 | 0.1781 | 0.1810 | 0.1795
Aliccp | Logloss | S#3 | 0.1593 | 0.1588 | 0.1598 | 0.1596 | 0.1597 | 0.1591 | 0.1598 | 0.1594 | 0.1600 | 0.1600 | 0.1598 | 0.1595
Movielens-1M | AUC | S#1 | 0.7693 | 0.7772 | 0.7773 | 0.7845 | 0.7543 | 0.7757 | 0.7741 | 0.7830 | 0.7693 | 0.7806 | 0.7677 | 0.7744
Movielens-1M | AUC | S#2 | 0.7958 | 0.7967 | 0.7967 | 0.7977 | 0.7899 | 0.7928 | 0.7939 | 0.7992 | 0.7961 | 0.7981 | 0.7922 | 0.7971
Movielens-1M | AUC | S#3 | 0.7877 | 0.7890 | 0.7845 | 0.7879 | 0.7814 | 0.7816 | 0.7791 | 0.7909 | 0.7876 | 0.7876 | 0.7873 | 0.7924
Movielens-1M | Logloss | S#1 | 0.5652 | 0.5544 | 0.5548 | 0.5472 | 0.5904 | 0.5589 | 0.5589 | 0.5510 | 0.5630 | 0.5570 | 0.5732 | 0.5612
Movielens-1M | Logloss | S#2 | 0.5367 | 0.5352 | 0.5358 | 0.5346 | 0.5443 | 0.5387 | 0.5387 | 0.5331 | 0.5372 | 0.5349 | 0.5418 | 0.5353
Movielens-1M | Logloss | S#3 | 0.5265 | 0.5243 | 0.5302 | 0.5261 | 0.5360 | 0.5350 | 0.5350 | 0.5232 | 0.5249 | 0.5271 | 0.5273 | 0.5235
ADC | AUC | S#1 | 0.5822 | 0.5836 | 0.5822 | 0.5838 | 0.5768 | 0.5826 | 0.5775 | 0.5840 | 0.5806 | 0.5836 | 0.5826 | 0.5847
ADC | AUC | S#2 | 0.5864 | 0.5890 | 0.5861 | 0.5884 | 0.5835 | 0.5869 | 0.5831 | 0.5888 | 0.5856 | 0.5859 | 0.5878 | 0.5893
ADC | Logloss | S#1 | 0.2662 | 0.2644 | 0.2653 | 0.2657 | 0.2642 | 0.2586 | 0.2696 | 0.2676 | 0.2686 | 0.2620 | 0.2669 | 0.2616
ADC | Logloss | S#2 | 0.2504 | 0.2477 | 0.2500 | 0.2495 | 0.2481 | 0.2413 | 0.2570 | 0.2511 | 0.2556 | 0.2448 | 0.2502 | 0.2484

Table 1: Experimental results for different multi-domain models without (w/o) or with (w) our framework on three public datasets. w* denotes that the backbone model can only incorporate the partial components we proposed (i.e., the domain fusion module and the domain balance optimization method related to domain specificity). The best results are highlighted with bold fonts. All improvements are statistically significant (i.e., two-sided t-tests with p < 0.05).

AUC | S#1 | S#2 | S#3
--- | --- | --- | ---
BM | 0.6214 | 0.5995 | 0.6175
BM+M2M-WG | 0.6230 | 0.5978 | 0.6204
BM+PEPNet-WG | 0.6233 | 0.5996 | 0.6203
BM+D2 | 0.6248* | 0.6018* | 0.6221*
BM+D2+DWA | 0.6156 | 0.5942 | 0.6124
BM+D2+DT | 0.6229 | 0.5952 | 0.6122
BM+D3 | 0.6261* | 0.6029* | 0.6230*

Table 2: Experimental results for our proposed components compared to other similar methods on Aliccp. The best results are bolded. "*" indicates statistically significant improvements (i.e., two-sided t-test with p < 0.05) over the best baseline.

Ablation Study (RQ3)
In this subsection, we conduct experiments to verify the effectiveness of each component in our proposed framework. The variants are listed below:
• BM: We select ADI as the backbone model.
• BM+D1: Backbone model with DSFS (domain division). We replace the shared-specific networks with a transformation layer equipped with the DSFS module.
• BM+D2: Backbone model with DSFS (domain division) and DF (domain modeling).
• BM+D3: Backbone model with all proposed components (domain division, domain modeling, domain balance).
Through Table 3, it can be concluded that each component has a positive effect on the backbone model and, more importantly, that their contributions to the prediction performance accumulate.
By comparing BM with BM+D1, it can be concluded that the domain sensitive feature selection module can automatically select domain sensitive features at the instance level, assisting in domain division. The experimental results comparing BM+D1 and BM+D2 validate the effectiveness of the domain fusion module: it can more accurately fuse shared and specific parts and implicitly model the complex relationships between domains. The comparison between BM+D2 and BM+D3 confirms the validity of the domain balance optimization module: by calculating the loss weight based on both domain complexity and specificity, it alleviates training inconsistencies in the joint optimization process of different domains.

Metric | Scenario | BM | BM+D1 | BM+D2 | BM+D3
--- | --- | --- | --- | --- | ---
AUC | S#1 | 0.6214 | 0.6240 | 0.6248 | 0.6261
AUC | S#2 | 0.5995 | 0.6000 | 0.6018 | 0.6029
AUC | S#3 | 0.6175 | 0.6209 | 0.6221 | 0.6230
Logloss | S#1 | 0.1655 | 0.1654 | 0.1653 | 0.1651
Logloss | S#2 | 0.1789 | 0.1788 | 0.1785 | 0.1784
Logloss | S#3 | 0.1598 | 0.1596 | 0.1595 | 0.1594

Table 3: Ablation study on Aliccp.

Hyperparameter Analysis
In this subsection, we visualize the effects of introducing loss adaptation at different points of the training process across different scenarios. The x-axis represents the point in the training process at which loss adaptation is introduced (e.g., 0 means loss adaptation is introduced from the start of training, and 1.0 means it is not introduced during training), and the y-axis represents the AUC score. Figure 3 demonstrates the considerable influence of the timing of loss adaptation introduction into the training process on the overall performance. The key factor behind this could be that the loss weight is contingent upon both the attention mechanism and the gate mechanism; these mechanisms are incapable of effectively capturing intricate domain patterns during the initial phases of training, let alone representing the complexity and specificity of the domains. As depicted in Figure 3, implementing loss adaptation between 50% and 75% of the entire training duration proves to be the most productive. This is attributable to the relative stability of both the attention and gate mechanisms at this stage, which in turn provides crucial data for expressing domain complexity and specificity.

Figure 3: Effects of introducing loss adaptation in different training processes across different scenarios on Aliccp.

Visualization
In this subsection, we seek to exemplify the proficiency of the Domain Sensitive Feature Selection (DSFS) module via the visualization of attention mechanisms across a range of scenarios. As visualized in Figure 4, we present attention heatmaps across three distinct scenarios within the Aliccp dataset. The x-axis represents different features, while the y-axis illustrates the three scenarios present within the dataset. A darker square in the graph indicates a higher attention weight. Upon evaluation of the figure, it becomes apparent that domain sensitive features display significant alterations across diverse scenarios, affirming the DSFS module's precision in capturing these discrepancies.

Figure 4: Attention vector across different scenarios.
Furthermore, it is important to note that Feature 18, represented by the red box, acts as the scenario indicator and is assigned a high attention weight. This noteworthy assignment further underpins DSFS's efficacy in selecting domain sensitive features.

Related Work
Multi-Domain Recommendation
Multi-domain recommendation (Tan et al. 2021; Xu et al. 2023; Wang et al. 2022; Zhang et al. 2022b; Luo et al. 2022; Gao et al. 2023) aims to capture the commonalities and diversities of various scenarios with a unified model. In recent times, a multitude of relevant endeavors has emerged, propelling the advancement of this field. STAR (Sheng et al. 2021) proposes a star topology that divides commonalities and diversities into shared networks and specific networks, and a partitioned normalization method transforming data distributions according to their domains. SAR-Net (Shen et al. 2021) introduces multiple expert networks and a multi-scenario gate structure to capture the commonalities and diversities. ADI (Jiang et al. 2022) applies domain-specific batch normalization, domain interest adaptation layers, and a self-training strategy to capture relationships between scenarios. On the other hand, M2M (Zhang et al. 2022a) introduces meta units to incorporate scenario knowledge by producing the weights for the backbone model. PEPNet (Chang et al. 2023) proposes a Gate Neural Unit to personalize network parameters.

Loss Adaptation
In the realm of multi-domain recommendation, limited attention has been given to loss adaptation. While SAR-Net (Shen et al. 2021) introduces a weighted loss for different samples, its focus is on addressing intervention bias rather than mitigating the inconsistencies across different domains during the training process. However, in other fields, such as multi-task learning, numerous relevant studies have been conducted. AdaTask (Yang et al. 2023) approaches the issue from a task-centric perspective, separating the accumulated gradients of tasks within shared parameters. AutoLoss (Zhao et al. 2021) employs a controller structure to generate weights for multiple losses, selecting the optimal one through a hard selection process. GradNorm (Chen et al. 2018) addresses the issue by recognizing the imbalance in gradients during backpropagation, considering both the dominance of gradients and the ratio of loss reduction. DWA (Liu, Johns, and Davison 2019) aims to facilitate equal learning rates across tasks by calculating the relationship between loss reduction differences among tasks at adjacent time steps. DT (Guo et al. 2018) combines example-level and task-level strategies with focal loss to alleviate task imbalance, assigning greater weight to more challenging tasks.

Conclusion
In this paper, we proposed a universal and flexible framework, D3, to optimize multi-domain recommendations from the perspectives of domain division, modeling, and balance. Specifically, we introduce an attention-based domain adaptation module to divide domains automatically and capture diversities across different domains. The fusion gate module is proposed for integrating commonalities and diversities of domains and implicitly characterizing the intricate relationships between domains. In addition, we embarked upon an exploration into loss adaptation, a seldom-explored area in multi-domain recommendations, crafting weights based on domain complexity and specificity to help balance domains in the training process.
Experiments on three public datasets showcase the effectiveness and superiority of our proposed framework. In addition, D3 has been implemented on a real-life, high-traffic internet platform catering to millions of users daily. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8559 Acknowledgments This research was partially supported by Huawei (Huawei Innovation Research Program), APRC - CityU New Research Initiatives (No.9610565, Start-up Grant for New Faculty of City University of Hong Kong), CityU - HKIDS Early Career Research Grant (No.9360163), Hong Kong ITC Innovation and Technology Fund Midstream Research Programme for Universities Project (No.ITS/034/22MS), Hong Kong Environmental and Conservation Fund (No. 88/2022), and SIRG - CityU Strategic Interdisciplinary Research Grant (No.7020046, No.7020074). References Chang, J.; Zhang, C.; Hui, Y.; Leng, D.; Niu, Y.; and Song, Y. 2023. PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information. arXiv preprint arXiv:2302.01115. Chen, Z.; Badrinarayanan, V.; Lee, C.-Y.; and Rabinovich, A. 2018. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International conference on machine learning, 794–803. PMLR. Cheng, H.-T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems, 7–10. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; and Lu, H. 2019. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3146–3154. Gao, J.; Zhao, X.; Chen, B.; Yan, F.; Guo, H.; and Tang, R. 2023. AutoTransfer: Instance Transfer for Cross-Domain Recommendations. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1478–1487. Guo, H.; Tang, R.; Ye, Y.; Li, Z.; and He, X. 2017. DeepFM: a factorization-machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247. Guo, M.; Haque, A.; Huang, D.-A.; Yeung, S.; and Fei-Fei, L. 2018. Dynamic task prioritization for multitask learning. In Proceedings of the European conference on computer vision (ECCV), 270–287. Jiang, Y.; Li, Q.; Zhu, H.; Yu, J.; Li, J.; Xu, Z.; Dong, H.; and Zheng, B. 2022. Adaptive domain interest network for multi-domain recommendation. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 3212–3221. Kendall, A.; Gal, Y.; and Cipolla, R. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7482–7491. Li, X.; Qiu, Z.; Zhao, X.; Zhang, Y.; Xing, C.; and Wu, X. 2023a. REST: Drug-Drug Interaction Prediction via Reinforced Student-Teacher Curriculum Learning. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 1278–1287. Li, X.; Yan, F.; Zhao, X.; Wang, Y.; Chen, B.; Guo, H.; and Tang, R. 2023b. HAMUR: Hyper Adapter for Multi-Domain Recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 1268–1277. Lian, J.; Zhou, X.; Zhang, F.; Chen, Z.; Xie, X.; and Sun, G. 2018. xdeepfm: Combining explicit and implicit feature interactions for recommender systems. 
In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 1754–1763. Liu, S.; Johns, E.; and Davison, A. J. 2019. End-toend multi-task learning with attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1871–1880. Liu, Z.; Tian, J.; Cai, Q.; Zhao, X.; Gao, J.; Liu, S.; Chen, D.; He, T.; Zheng, D.; Jiang, P.; et al. 2023. Multi-Task Recommendations with Reinforcement Learning. In Proceedings of the ACM Web Conference 2023, 1273–1282. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Luo, L.; Li, Y.; Gao, B.; Tang, S.; Wang, S.; Li, J.; Zhu, T.; Liu, J.; Li, Z.; and Pan, S. 2022. MAMDR: a model agnostic learning method for multi-domain recommendation. arXiv preprint arXiv:2202.12524. Ma, J.; Zhao, Z.; Yi, X.; Chen, J.; Hong, L.; and Chi, E. H. 2018a. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 1930–1939. Ma, X.; Zhao, L.; Huang, G.; Wang, Z.; Hu, Z.; Zhu, X.; and Gai, K. 2018b. Entire space multi-task model: An effective approach for estimating post-click conversion rate. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 1137–1140. Shen, Q.; Tao, W.; Zhang, J.; Wen, H.; Chen, Z.; and Lu, Q. 2021. Sar-net: a scenario-aware ranking network for personalized fair recommendation in hundreds of travel scenarios. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 4094–4103. Sheng, X.-R.; Zhao, L.; Zhou, G.; Ding, X.; Dai, B.; Luo, Q.; Yang, S.; Lv, J.; Zhang, C.; Deng, H.; et al. 2021. One model to serve all: Star topology adaptive recommender for multidomain ctr prediction. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 4104–4113. Song, W.; Shi, C.; Xiao, Z.; Duan, Z.; Xu, Y.; Zhang, M.; and Tang, J. 2019. Autoint: Automatic feature interaction learning via self-attentive neural networks. In Proceedings of the 28th ACM international conference on information and knowledge management, 1161–1170. Tan, S.; Li, M.; Zhao, W.; Zheng, Y.; Pei, X.; and Li, P. 2021. Multi-Task and Multi-Scene Unified Ranking Model for Online Advertising. In 2021 IEEE International Conference on Big Data (Big Data), 2046–2051. IEEE. Wang, R.; Shivanna, R.; Cheng, D.; Jain, S.; Lin, D.; Hong, L.; and Chi, E. 2021. Dcn v2: Improved deep & cross network and practical lessons for web-scale learning to rank The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8560 systems. In Proceedings of the web conference 2021, 1785– 1797. Wang, Y.; Du, Z.; Zhao, X.; Chen, B.; Guo, H.; Tang, R.; and Dong, Z. 2023a. Single-shot Feature Selection for Multitask Recommendations. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 341–351. Wang, Y.; Guo, H.; Chen, B.; Liu, W.; Liu, Z.; Zhang, Q.; He, Z.; Zheng, H.; Yao, W.; Zhang, M.; et al. 2022. Causalint: Causal inspired intervention for multi-scenario recommendation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4090–4099. Wang, Y.; Zhao, X.; Chen, B.; Liu, Q.; Guo, H.; Liu, H.; Wang, Y.; Zhang, R.; and Tang, R. 2023b. PLATE: A Prompt-Enhanced Paradigm for Multi-Scenario Recommendations. 
In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1498–1507. Xu, S.; Li, L.; Yao, Y.; Chen, Z.; Wu, H.; Lu, Q.; and Tong, H. 2023. MUSENET: Multi-Scenario Learning for RepeatAware Personalized Recommendation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 517–525. Yang, E.; Pan, J.; Wang, X.; Yu, H.; Shen, L.; Chen, X.; Xiao, L.; Jiang, J.; and Guo, G. 2023. Adatask: A taskaware adaptive learning rate approach to multi-task learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 10745–10753. Zhang, Q.; Liao, X.; Liu, Q.; Xu, J.; and Zheng, B. 2022a. Leaving no one behind: A multi-scenario multi-task meta learning approach for advertiser modeling. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 1368–1376. Zhang, Y.; Wang, X.; Hu, J.; Gao, K.; Lei, C.; and Fang, F. 2022b. Scenario-Adaptive and Self-Supervised Model for Multi-Scenario Personalized Recommendation. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 3674–3683. Zhao, X.; Liu, H.; Fan, W.; Liu, H.; Tang, J.; and Wang, C. 2021. Autoloss: Automated loss function search in recommendations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 3959– 3967. Zhou, G.; Zhu, X.; Song, C.; Fan, Y.; Zhu, H.; Ma, X.; Yan, Y.; Jin, J.; Li, H.; and Gai, K. 2018. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 1059–1068. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8561
Graph Invariant Learning with Subgraph Co-mixup for Out-of-Distribution Generalization Tianrui Jia1, Haoyang Li2, Cheng Yang1∗, Tao Tao3, Chuan Shi1* 1Beijing University of Posts and Telecommunications 2Tsinghua University 3China Mobile Information Technology Co. Ltd. {jiatianrui, yangcheng, shichuan}@bupt.edu.cn, [email protected], [email protected] Abstract Graph neural networks (GNNs) have been demonstrated to perform well in graph representation learning, but always lacking in generalization capability when tackling out-of-distribution (OOD) data. Graph invariant learning methods, backed by the invariance principle among defined multiple environments, have shown effectiveness in dealing with this issue. However, existing methods heavily rely on well-predefined or accurately generated environment partitions, which are hard to be obtained in practice, leading to sub-optimal OOD generalization performances. In this paper, we propose a novel graph invariant learning method based on invariant and variant patterns comixup strategy, which is capable of jointly generating mixed multiple environments and capturing invariant patterns from the mixed graph data. Specifically, we first adopt a subgraph extractor to identify invariant subgraphs. Subsequently, we design one novel co-mixup strategy, i.e., jointly conducting environment mixup and invariant mixup. For the environment mixup, we mix the variant environment-related subgraphs so as to generate sufficiently diverse multiple environments, which is important to guarantee the quality of the graph invariant learning. For the invariant mixup, we mix the invariant subgraphs, further encouraging to capture invariant patterns behind graphs while getting rid of spurious correlations for OOD generalization. We demonstrate that the proposed environment mixup and invariant mixup can mutually promote each other. Extensive experiments on both synthetic and realworld datasets demonstrate that our method significantly outperforms state-of-the-art under various distribution shifts. Introduction Graph data is ubiquitous in the real world, such as molecular networks, protein networks, social networks. Graph representation learning (Chen et al. 2020; Hamilton, Ying, and Leskovec 2017b) achieves deep learning on graphs by encoding them into vectors in a latent space. Graph neural networks (GNNs) (Kipf and Welling 2016; Xu et al. 2018; Veliˇckovi´c et al. 2018; Hamilton, Ying, and Leskovec 2017a), as one of the most popular graph representation learning methods, have attracted wide attention in the last decade. (Lee, Rossi, and Kong 2018; Xu et al. 2018). *Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Despite their noticeable success, existing GNNs heavily rely on the identically distributed (I.D.) assumption (Vapnik 1999), i.e., the training and test data are sampled from an identical distribution. However, various forms of distribution shifts between the training and testing datasets widely exist in the real world, since the uncontrollable data generation mechanisms, resulting in OOD (Hu et al. 2020a; Ji et al. 2022; Koh et al. 2021) scenarios. For instance, in graph classification tasks, there could be significant distribution shifts existing in graph size (Bevilacqua, Zhou, and Ribeiro 2021; Yehudai et al. 2021), node degree (Yoo et al. 2023), and structure (e.g., molecule scaffold) (Ji et al. 2022) between the training and testing graphs. 
Existing GNNs that perform well on the training data by capturing spurious correlations fail significantly to generalize to OOD testing graph data. Therefore, it is of paramount importance to capture the invariant relationships between predictive graph patterns and labels. Invariant learning (Arjovsky et al. 2019; Krueger et al. 2021; Creager, Jacobsen, and Zemel 2021; Ahuja et al. 2021) has emerged as a prevalent strategy for tackling the challenge of generalization to OOD data. The basic assumption of invariant learning is the invariance principle over multiple defined environments: a portion of the input data captures invariant relations with the labels across distinct environments (Arjovsky et al. 2019). Consequently, a predictor that performs well across multiple pre-defined environments is guaranteed to possess generalization capabilities for unseen data distributions (Arjovsky et al. 2019). In the field of graphs, existing graph invariant learning methods (Wu et al. 2021; Miao, Liu, and Li 2022; Li et al. 2022b; Chen et al. 2022a; Yang et al. 2022) consider that, within each environment, the graph data can be decomposed into two components: invariant subgraphs that have deterministic and truly predictive relations with the labels, and environment subgraphs that may exhibit spurious correlations with the labels. Their main goal is to obtain diverse training environments. For example, DIR (Wu et al. 2021) generates multiple training environments for invariant learning by implementing distribution interventions on graphs, while GIL (Li et al. 2022b) clusters environment subgraphs and treats each cluster as an environment. The performance of invariant learning heavily relies on the diversity of the environment partitioning. In other words, if different environments are not diverse, these methods will not sufficiently get rid of spurious correlations, showing poor OOD generalization ability. For example, suppose there is a size distribution shift between the training set and the test set, with training graphs of size 6-8 and test graphs of size 30-50; then, no matter how environments are partitioned within the training set, it would be challenging for the model to demonstrate satisfactory generalization on the test set. However, the environments generated by existing graph invariant learning methods cannot possess sufficient distribution shifts among different environments. For DIR (Wu et al. 2021), although it is theoretically possible to mitigate the influence of the environment, all environments are relatively similar in the initial stages of training, so it performs suboptimally in practice since the environments cannot be proven to be diverse (Chen et al. 2022b). Similarly, for GIL (Li et al. 2022b), if the training set itself does not contain significantly diverse latent environments, the environments generated during training will not be sufficient for learning invariant patterns. Beyond these two representative methods, existing methods generally struggle to achieve diverse environment partitioning, resulting in suboptimal performance and largely hindering OOD generalization. To tackle these problems, in this paper, we are the first to study mixup-based graph invariant learning for graph OOD generalization, to the best of our knowledge.
Although Mixup (Zhang et al. 2018) and its variations (Verma et al. 2019; Chou et al. 2020; Kim et al. 2020; Yun et al. 2019), interpolation-based data augmentation methods that blend two training instances and their labels to generate new instances, have been proposed in the literature, existing graph mixup methods (Han et al. 2022; Wang et al. 2021; Park, Shim, and Yang 2022; Guo and Mao 2021) only mix up entire graphs. Because they do not explicitly distinguish invariant and environment subgraphs when conducting mixup, they can introduce spurious correlations that degrade the model's generalization performance on OOD graph data. Incorporating mixup with invariant learning for graph out-of-distribution generalization is promising but has not been explored, and it poses the following great challenges: • How to design mixup to generate environments that are diverse enough, with sufficient distribution shifts for invariant learning. • How to improve the mixup method so that the mixed-up graph data retains only invariant information while excluding environment-related spurious correlations for OOD generalization. To address the aforementioned challenges, we propose a novel graph invariant learning method based on an invariant and variant co-mixup strategy, herein referred to as Invariant learning on Graph with co-Mixup (IGM); code is available at https://github.com/BUPT-GAMMA/IGM. Firstly, we design an invariant subgraph extractor to identify the invariant subgraphs and consider their complements as the environment-related environment subgraphs. Then, we design an environment Mixup module based on the environment subgraphs to encourage generated environments that are sufficiently diverse for graph invariant learning. We generate a variety of environments by concatenating invariant and environment subgraphs with different labels. The environments generated in this tailored way will have sufficient distribution shifts and thus be diverse enough. Next, in order to ensure that the mixed graph data retains only invariant information, we design an invariant Mixup module that performs mixup only on invariant subgraphs rather than on whole graphs. Performing invariant and environment subgraph co-mixup with these two modules can effectively get rid of spurious correlations in the entire graph. More importantly, we also show that the environment Mixup and invariant Mixup modules of the co-mixup strategy can mutually promote each other, yielding promising OOD generalization capabilities. We conducted extensive experiments on three synthetic datasets and nine real-world datasets to verify the effectiveness of our proposed method under various types of distribution shifts. Compared to state-of-the-art baselines, our method shows significant improvements, e.g., an average improvement of 7.4% on real-world datasets. Furthermore, we verified the effectiveness of each module and performed visualization experiments on the learned invariant subgraphs for deeper analyses. Our contributions can be summarized as follows: (1) We design an invariant and environment subgraph co-mixup based graph invariant learning method for OOD generalization. To the best of our knowledge, this is the first work to automatically generate sufficiently diverse environments for graph invariant learning. (2) We design an environment Mixup module to generate environments with sufficient distribution shifts, leading to better invariant learning on graphs.
(3) We propose an invariant Mixup method to encourage the mixed-up data to retain only invariant graph patterns. This novel design mitigates the impact of spurious correlations in the whole graph. We demonstrate that our designed environment Mixup and invariant Mixup can mutually promote each other in practice, thereby enhancing the generalization capability on OOD graph data. (4) We conduct extensive experiments on both synthetic and real-world datasets to show that our proposed method has the most competitive OOD generalization ability, significantly outperforming the state-of-the-art under various types of distribution shifts.

Related Works
Graph Neural Network. Graph Neural Networks (GNNs) (Kipf and Welling 2016; Hamilton, Ying, and Leskovec 2017a; Xu et al. 2018; Veličković et al. 2018) aggregate the neighbors of nodes through a message-passing mechanism to obtain individual node representations. Subsequently, a pooling function is employed to derive a global graph representation, which is then utilized for downstream classification tasks. Inspired by spectral methods (Bruna et al. 2014; Defferrard, Bresson, and Vandergheynst 2016), GNNs were designed to use convolutional neural networks to aggregate neighbors' features (Kipf and Welling 2016; Hamilton, Ying, and Leskovec 2017a). Owing to the good performance of the attention mechanism, attention has also been introduced into GNNs, most notably in GAT (Veličković et al. 2018). However, traditional GNNs fail to achieve generalization on OOD data.
OOD Generalization on Graphs. Currently, graph OOD generalization methods (Xia et al. 2023; Miao, Liu, and Li 2022; Li et al. 2022b; Wu et al. 2021; Yang et al. 2022; Liu et al. 2022; Sui et al. 2022; Buffelli, Liò, and Vandin 2022; Zhang et al. 2022; Chen, Xiao, and Kuang 2022; Li et al. 2022a) can be primarily categorized into two approaches. The first, based on information bottleneck methods such as CIGA (Chen et al. 2022a) and GSAT (Miao, Liu, and Li 2022), achieves generalization by maximizing the mutual information between labels and invariant subgraphs while minimizing the mutual information between the subgraph and the entire graph. The second approach, based on invariant learning methods (Arjovsky et al. 2019; Krueger et al. 2021; Creager, Jacobsen, and Zemel 2021; Ahuja et al. 2021), like DIR (Wu et al. 2021) and GIL (Li et al. 2022b), defines environments within datasets and incorporates a regularization term between these environments, aiming to learn cross-environment invariant information and thereby facilitate OOD generalization.

Notations and Preliminaries
Notations. Denote a graph dataset as G = {(G_i, Y_i)}_{i=1}^N. Due to the uncontrollable data generation mechanism (Bengio et al. 2020), we follow the literature (Arjovsky et al. 2019; Ahuja et al. 2021) and consider realistic yet challenging scenarios in which there exist unobservable distribution shifts between the training and test sets, P(G_train) ≠ P(G_test), since the training and test graph data are sampled from different environments. The spaces of graphs and labels are G and Y, respectively.
Problem Formulation. Following existing works (Wang et al. 2021; Li et al. 2022b), we assume each graph G_i consists of two parts, namely an invariant subgraph G^I_i and an environment subgraph G^E_i, where G^E_i is the complement of G^I_i. Denote the invariant subgraph set as G^I = {(G^I_i, Y_i)}_{i=1}^N and the environment subgraph set as G^E = {(G^E_i, Y_i)}_{i=1}^N.
We use subscripts to denote the corresponding train and test sets, i.e., G^I_train, G^E_train, G^I_test, G^E_test. G^I_i determines its label Y_i, so it has invariant relations with the label and should be captured for OOD generalization, i.e., P(Y_train | G^I_train) = P(Y_test | G^I_test) = P(Y | G^I), where G^I_train, G^I_test, G^I denote the random variables for G^I_train, G^I_test, G^I. In contrast, G^E_i contains information that has only a spurious relation with Y_i, so it has variant relations with the label and should be discarded for stable performance across different environments. Thus, our objective is to identify the invariant subgraphs within the graphs and use only them to make OOD-generalized predictions. By extracting the right invariant subgraph of each graph, our model will generalize well in the testing environment (Li et al. 2022b; Chen et al. 2020; Wu et al. 2021).

Methodology
In this section, we first present the overall framework of our IGM. Then we introduce our invariant subgraph extractor. Finally, we describe the two mixup modules based on the extracted subgraphs, namely environment Mixup and invariant Mixup. An overview of IGM is shown in Figure 1.

Overall Framework
To tackle the existing graph invariant learning methods' strong dependence on predefined environment partitions, we propose to incorporate mixup (Zhang et al. 2018) and invariant learning to generate mixed environments and capture invariant patterns from mixed graphs simultaneously. Given the input data, we first use an invariant subgraph extractor to extract the invariant and environment subgraphs from each graph. Subsequently, we apply environment Mixup and invariant Mixup to update the parameters of the invariant subgraph extractor. Specifically, the environment Mixup module is designed to generate environments with sufficient distribution shifts, and the invariant Mixup module is proposed to prevent spurious correlations within the graph from affecting the mixup. Note that these two modules can mutually enhance each other's learning: on the one hand, the environment Mixup module is able to partition environments with sufficient distribution shifts, thereby facilitating the invariant Mixup in capturing more invariant information. On the other hand, as the invariant Mixup captures more invariant information, it can further aid the environment Mixup in achieving a more refined environmental partition, subsequently promoting the invariant learning of the environment Mixup.

Invariant Subgraph Extractor
We use g to represent the subgraph extractor, G^I_i = g(G_i), corresponding to the invariant feature extractor g in the previous section. The idealized invariant subgraph extractor g*(·) should satisfy:

P_{e1}(Y | g*(G)) = P_{e2}(Y | g*(G)), ∀ e1, e2 ∈ E,   R(f ∘ g*(G)) = min_g R(f ∘ g(G)),   (1)

where E is the set of environments, R(·) is the risk function (e.g., cross-entropy), and f represents the classifier. Now we instantiate g with learnable parameters. For a given graph G, its node set and edge set are V_G and E_G, respectively. p_(u,v) represents the probability that edge (u, v) is selected as an edge of the invariant subgraph G^I, and we obtain it via a GNN_enc and an MLP_enc:

Ω = GNN_enc(G),   φ_(u,v) = Ω_u ∥ Ω_v,   p_(u,v) = MLP_enc(φ_(u,v)),   (2)

where Ω is the matrix of node representations and Ω_u, Ω_v are the representations of nodes u and v. Next, we sample edges from ξ_uv ∼ Bern(p_uv) to obtain G^I.
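To make the extractor concrete, here is a minimal PyTorch sketch of Eq. (2) and the edge-sampling step. The non-differentiability of the Bernoulli draw is addressed in the next paragraph via Gumbel-Softmax; the relaxed Bernoulli (binary concrete) distribution used below plays the same role. All class and variable names are ours, and we assume node embeddings Ω from any GNN encoder are given; this is an illustrative sketch, not the authors' released code.

import torch
import torch.nn as nn

class SubgraphExtractor(nn.Module):
    # Scores every edge and samples a soft edge mask for G^I (Eq. 2).
    def __init__(self, emb_dim, hidden_dim=64, temperature=1.0):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1))
        self.temperature = temperature

    def forward(self, node_emb, edge_index):
        # node_emb: [num_nodes, emb_dim], the Omega from a GNN encoder
        # edge_index: [2, num_edges] listing edges (u, v)
        u, v = edge_index
        phi = torch.cat([node_emb[u], node_emb[v]], dim=-1)  # Omega_u || Omega_v
        p = torch.sigmoid(self.mlp(phi)).squeeze(-1)         # p_(u,v)
        # differentiable surrogate for xi_uv ~ Bern(p_uv)
        mask = torch.distributions.RelaxedBernoulli(
            self.temperature, probs=p).rsample()
        return mask  # soft membership of each edge in the invariant subgraph

At inference time, or to enforce the maximum edge ratio r discussed below, the soft mask can be hardened by keeping only the top-scoring edges.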
Due to the non-differentiability of this sampling process, we employ the Gumbel-Softmax (Jang, Gu, and Poole 2016) technique to make it differentiable. In practice, we set a maximum edge ratio r to avoid the extracted subgraph being overly large. With our subgraph extractor, we can adaptively select edges instead of selecting a fixed ratio of nodes or edges. In the subsequent experimental section, we will report the values of r.

Figure 1: The overall framework of our proposed IGM. The invariant subgraph extractor g splits each graph into an invariant subgraph and an environment subgraph. Following this, we employ two mixup strategies: (1) concatenating invariant subgraphs and environment subgraphs from different labels to generate K new environments, upon which we conduct invariant learning; (2) mixing invariant subgraphs from different labels to augment the data. Here, h represents a GNN used for feature extraction, and c denotes an MLP utilized for classification.

Mixup for Out-of-Distribution Generalization
Following the extraction of the invariant and environment subgraphs, we proceed to apply two distinct types of mixup to them, namely Invariant Mixup and Environment Mixup.
Environment Mixup. Here we design an Environment Mixup method to generate sufficiently diverse environments that have large distribution shifts from the original data, which makes invariant learning on graphs more effective. Furthermore, the Environment Mixup enhances the Invariant Mixup, as discussed in the following section. Let L : Y → Y be a mapping function that maps a label to a different one. For each G^I_i ∈ G^I_train, we randomly select a G^E_j ∈ G^E_train whose label satisfies Y_j = L(Y_i). Then we mix the two subgraphs by randomly adding edges between them according to the node degree. The number of added edges is n_add = r_add (|E_{G^I_i}| + |E_{G^E_j}|), where r_add is a pre-defined ratio. Since the invariant subgraph determines the label, we define the label of the augmented graph as G^I_i's label Y_i. For data with multiple classes, we can obtain multiple label mapping functions and augmentations. For K augmentations, denote the k-th label function as L_k. We define the k-th augmentation as

G^k_aug = {Mix(G^I_i, G^E_j) | G^I_i ∈ G^I_train, G^E_j ∈ G^E_train, Y_j = L_k(Y_i), i = 1, 2, ..., |G^I_train|}.   (3)

Since G^E_j is only spuriously related to Y_j, a graph formed by concatenating G^I_i with G^E_j differs more from the original data than one formed with its own spurious subgraph G^E_i or with the same substructure. We therefore treat each augmentation G^k_aug as an environment that has obvious distribution shifts from the original training set. After obtaining K augmented environments, we adopt invariant learning on them to enable our model to learn invariant information across environments and extract correct invariant subgraphs that satisfy the previous assumptions. Drawing from the invariant learning literature (Chen et al.
2022b), combining different invariant regularizers can further improve the generalization ability of models. During our training with Environment Mixup, the V-REx (Krueger et al. 2021) regularizer is less impactful initially due to similar environments, while IRM (Arjovsky et al. 2019) contributes more to the optimization procedure. As spurious correlations increase in later stages, the effectiveness of the IRM regularizer reduces, while V-REx gains importance. Hence, using both regularizers together leads to better generalization. We then formulate the overall risk following the IRMX (Chen et al. 2022b) literature:

L_E = Σ_{e=0}^{K} ( R^e(f) + γ R^e_IRM ) + µ R_V-REx,   (4)

where K is the number of environments and e = 0 represents the original data. f is the classifier, and R^e(f) is the cross-entropy loss on environment e. γ is the weight of the IRM regularizer R^e_IRM = ∥∇_w |_{w=1.0} R^e(w · f)∥², and µ is the weight of the V-REx regularizer R_V-REx = Var_e(R^e(f)), where Var(·) denotes the variance of the risks over the environments. We instantiate f with GNN_fea and MLP_cls as follows:

Ψ = GNN_fea(G),   ψ = Pooling(Ψ),   Ŷ = SoftMax(MLP_cls(ψ)),   (5)

where Ψ is the node representation of G and Pooling is the readout function. For clarity of presentation, we denote Pooling(GNN_fea(G)) as h and MLP_cls as c in Figure 1.
In the previous discussions, mixup was primarily utilized as a data augmentation technique to promote invariant learning in different environments. However, mixup also serves as a good regularizer for improving model generalization. In the following section, we present how to leverage the extracted invariant subgraphs to perform mixup, thereby further enhancing the generalization capability of the model on out-of-distribution data.
Invariant Mixup. Recent studies (Pinto et al. 2022) show that Mixup-based methods lead to models exhibiting high entropy throughout training, and consequently Mixup can improve model performance on out-of-distribution data; in other words, Mixup is a good regularizer for out-of-distribution generalization. Existing graph Mixup methods (Han et al. 2022; Wang et al. 2021; Park, Shim, and Yang 2022; Guo and Mao 2021) apply the Mixup operation to the whole graph, while we perform Mixup only on the invariant (causal) subgraphs, which enhances performance. Applying mixup across the entire graph could disrupt the real relationships, whereas implementing mixup on invariant subgraphs allows more precise preservation and learning of the original invariant relationships, reducing erroneous learning. In other words, the mixed-up invariant subgraphs retain as much invariant information as possible and effectively prevent noise and spurious correlations from the entire graph from affecting the classification task. We adopt Manifold Mixup on the invariant subgraphs extracted in the previous part. We obtain the invariant subgraph representations ψ^I_i and ψ^I_j for G_i and G_j as:

Ψ^I = GNN_fea(G^I),   ψ^I = Pooling(Ψ^I),   (6)

where Ψ^I is the node representation of G^I. The labels of G_i and G_j are Y_i and Y_j, respectively. Our definition of invariant Mixup is as follows:

ψ^I_{i,j} = λ ψ^I_i + (1 − λ) ψ^I_j,   Y_{i,j} = λ Y_i + (1 − λ) Y_j,   λ ∼ Beta(α, α),   (7)

where ψ^I_{i,j} is the mixed representation of G^I_i and G^I_j, Y_{i,j} is the mixed label of Y_i and Y_j, and λ is drawn from the Beta distribution with parameter α.
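For concreteness, the following PyTorch sketch spells out one direct reading of Eq. (4) and Eq. (7). It is an illustrative reimplementation with invented helper names, not the authors' released code, and it assumes per-environment logits/labels and pooled invariant-subgraph representations are already computed.

import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # ||grad_w R^e(w * f)||^2 evaluated at w = 1.0, the IRMv1 penalty of Eq. (4)
    w = torch.ones(1, device=logits.device, requires_grad=True)
    risk = F.cross_entropy(logits * w, y)
    g = torch.autograd.grad(risk, w, create_graph=True)[0]
    return (g ** 2).sum()

def environment_loss(env_logits, env_labels, gamma, mu):
    # env_logits[e], env_labels[e]: predictions/labels for environment e,
    # with e = 0 the original data and e = 1..K the mixed environments
    risks = torch.stack([F.cross_entropy(lg, y)
                         for lg, y in zip(env_logits, env_labels)])
    penalties = torch.stack([irm_penalty(lg, y)
                             for lg, y in zip(env_logits, env_labels)])
    # Eq. (4): per-environment risks and IRM penalties, plus the
    # V-REx variance-of-risks term
    return risks.sum() + gamma * penalties.sum() + mu * risks.var()

def invariant_mixup(psi_i, psi_j, y_i, y_j, alpha):
    # Eq. (7): mix pooled invariant-subgraph representations and
    # one-hot labels with lambda ~ Beta(alpha, alpha)
    lam = torch.distributions.Beta(alpha, alpha).sample().to(psi_i.device)
    psi_mix = lam * psi_i + (1 - lam) * psi_j
    y_mix = lam * y_i + (1 - lam) * y_j
    return psi_mix, y_mix

The mixed pair (psi_mix, y_mix) then feeds the cross-entropy of Eq. (8) below (with y_mix as a soft target), and the two objectives are combined as L = L_E + δ L_I.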
The loss function of invariant Mixup can be defined as:

L_I = CrossEntropy(Y_{i,j}, Ŷ^I),   Ŷ^I = SoftMax(MLP_cls(ψ^I_{i,j})),   (8)

where Ŷ^I is the label predicted from the mixed representation. Overall, we jointly optimize these components via the environment loss and the invariant loss, i.e., L = L_E + δ L_I, where δ is the balance hyper-parameter.

Experiments
In this section, we conduct experiments on 11 datasets to answer the following research questions:
· RQ1: Is IGM effective on the graph OOD generalization problem?
· RQ2: Is it necessary to use two kinds of Mixup?
· RQ3: How about the hyper-parameter sensitivity of IGM?
· RQ4: Does the learned invariant subgraph capture invariant information, and does it capture better invariant patterns compared to other methods?

Experimental Setup
Datasets. We conduct experiments on synthetic and real-world datasets. For synthetic datasets, following DIR (Wu et al. 2021), we use the SPMotif dataset to evaluate our method on structure and degree shift. For real-world datasets, we examine degree shift, size shift, and other distribution shifts. For degree shift, we employ the Graph-SST5 and Graph-Twitter datasets (Chen et al. 2022a; Yuan et al. 2022; Dong et al. 2014; Socher et al. 2013). To evaluate size shift, we utilize the PROTEINS and DD datasets from the TU benchmarks (Morris et al. 2020), adhering to the data split suggested by previous research (Chen et al. 2022a). We also consider DrugOOD (Ji et al. 2022) and the Open Graph Benchmark (OGB) (Hu et al. 2020b) for structural distribution shifts. More details are given in the Appendix.
Evaluation. We employ different evaluation metrics tailored to specific datasets, as in previous works (Chen et al. 2022a; Yang et al. 2022). For the SPMotif, Graph-SST5 (Socher et al. 2013), and Graph-Twitter (Dong et al. 2014) datasets, we use accuracy as the evaluation metric. For the DrugOOD (Ji et al. 2022) and OGB (Hu et al. 2020b) datasets, we assess performance using the ROC-AUC metric. For the TU datasets (Morris et al. 2020), we measure the model with the Matthews correlation coefficient. We report the mean results and standard deviations across five runs. The implementation details are given in the Appendix.
Baselines. In addition to Empirical Risk Minimization (ERM), we compare our approach with three categories of methods: mixup-based, invariant learning based, and graph OOD generalization methods. The mixup-based methods are Manifold Mixup (Verma et al. 2019) and G-Mixup (Han et al. 2022). For the invariant learning based category, we consider methods that use a known environment partition, such as Invariant Risk Minimization (IRM) (Arjovsky et al. 2019) and V-REx (Krueger et al. 2021), as well as methods that automatically partition environments, such as EIIL (Creager, Jacobsen, and Zemel 2021). The third category encompasses information bottleneck based methods like CIGA (Chen et al. 2022a) and GSAT (Miao, Liu, and Li 2022), as well as methods based on environment divisions within the graph, such as DIR (Wu et al. 2021).

Dataset          SPMotif-0.33   SPMotif-0.6    SPMotif-0.9
ERM              59.49 ± 3.50   55.48 ± 4.84   49.64 ± 4.63
G-mixup          60.31 ± 2.89   58.74 ± 5.58   53.60 ± 5.01
Manifold-mixup   58.33 ± 4.05   56.63 ± 2.96   49.81 ± 4.25
IRM              57.15 ± 3.98   61.74 ± 1.32   45.68 ± 4.88
V-REx            54.64 ± 3.05   53.60 ± 3.74   48.86 ± 9.69
EIIL             56.48 ± 2.56   60.07 ± 4.47   55.79 ± 6.54
DIR              58.73 ± 11.9   48.72 ± 14.8   41.90 ± 9.39
GSAT             56.21 ± 7.08   55.32 ± 6.35   52.11 ± 7.56
CIGA             77.33 ± 9.13   69.29 ± 3.06   63.41 ± 7.38
IGM              82.36 ± 7.39   78.09 ± 5.63   76.11 ± 8.86
Table 1: Graph classification results on synthetic datasets. We use accuracy ACC (%) as the evaluation metric.

Shift Type       Degree                         Size                        Structure (Assay, Scaffold)
Dataset          Graph-SST5     Graph-Twitter   PROTEINS      DD            DrugOODAssay   DrugOODScaffold   BACE           BBBP
Metric           ACC (%)                        MCC                         AUC (%)
ERM              43.89 ± 1.73   60.81 ± 2.05    0.22 ± 0.09   0.27 ± 0.09   76.41 ± 0.73   66.83 ± 0.93      77.83 ± 3.49   66.93 ± 2.31
G-Mixup          43.75 ± 1.34   63.91 ± 3.01    0.24 ± 0.03   0.29 ± 0.04   76.53 ± 2.20   66.01 ± 1.35      79.12 ± 2.75   68.44 ± 2.08
Manifold-Mixup   43.11 ± 0.65   62.60 ± 1.87    0.23 ± 0.04   0.28 ± 0.06   77.02 ± 1.15   65.56 ± 0.44      78.85 ± 1.26   68.67 ± 1.38
IRM              43.69 ± 1.26   63.50 ± 1.23    0.21 ± 0.09   0.22 ± 0.08   74.03 ± 0.58   66.32 ± 0.27      77.51 ± 2.46   69.13 ± 1.45
V-REx            43.28 ± 0.52   63.21 ± 1.57    0.22 ± 0.06   0.21 ± 0.07   75.85 ± 0.78   65.37 ± 0.42      76.96 ± 1.88   64.86 ± 2.13
EIIL             42.98 ± 1.03   62.76 ± 1.72    0.20 ± 0.05   0.23 ± 0.10   76.93 ± 1.44   64.13 ± 0.89      79.36 ± 2.72   65.77 ± 3.36
DIR              41.12 ± 1.96   59.85 ± 2.98    0.25 ± 0.14   0.20 ± 0.10   74.11 ± 3.10   64.45 ± 1.69      79.93 ± 2.03   69.73 ± 1.54
GSAT             43.72 ± 0.87   62.50 ± 1.44    0.21 ± 0.06   0.28 ± 0.04   76.64 ± 2.82   66.02 ± 1.13      79.63 ± 1.87   68.48 ± 2.01
CIGA             44.71 ± 1.14   64.45 ± 1.99    0.40 ± 0.06   0.29 ± 0.08   76.15 ± 1.21   67.11 ± 0.33      80.98 ± 1.25   69.65 ± 1.32
IGM              46.69 ± 0.52   66.23 ± 1.58    0.43 ± 0.05   0.36 ± 0.04   78.16 ± 0.65   68.32 ± 0.48      82.65 ± 1.17   71.03 ± 0.79
Table 2: Graph OOD generalization performance on real-world datasets. We show the graph classification results on datasets with three types of distribution shifts: degree, size, and structure. We use ACC (%) as the evaluation metric on the Graph-SST5 and Graph-Twitter datasets, MCC for the DD and PROTEINS datasets, and ROC-AUC (%) for the DrugOODscaffold, DrugOODassay, BACE, and BBBP datasets. Experimental results indicate that our method outperforms all the baselines.

Main Results (RQ1)
Experiments on synthetic datasets. We report our results on synthetic datasets in Table 1. The bias, set to 0.33, 0.6, and 0.9, represents the degree of spurious correlation between labels and features. From the experimental results, we observe that our proposed method outperforms the other baselines by a large margin under all three bias settings. Specifically, our model surpasses ERM by 44.2% on average and outperforms the state-of-the-art (SOTA) method CIGA by 13.1% on average. This demonstrates that our IGM is more adept at capturing the invariant patterns under distribution shifts, thereby enabling the model to perform better on OOD data.
Experiments on real-world datasets. We explore three types of distribution shifts on real-world datasets: degree shift, size shift, and structure shift. The results are presented in Table 2. It can be observed that existing methods uniformly fail to achieve good OOD generalization across all datasets. For instance, G-Mixup underperforms ERM on the Graph-SST5 and NCI109 datasets, IRM is consistently outdone by ERM on most datasets, and the SOTA method CIGA is outperformed by ERM on the DrugOODScaffold dataset.
As can be observed across these eight datasets, our model consistently achieves the best performance, demonstrating an overall improvement of 7.7% compared to the state-of-the-art (SOTA) methods. These results show that our model can effectively deal with the complex distribution shifts of the real world, indicating strong OOD generalization ability. In detail, for datasets with size shift, our method achieves an average enhancement of 15.9% over SOTA; for degree shift, the average improvement is 3.9%; and for structure shift, the average increase in performance is 1.7%. From these results, it can be inferred that our model excels at identifying invariant patterns under all of these distribution shifts.

Ablation Study (RQ2)
We conduct two types of experiments for the ablation study. First, we explore the necessity of using two mixup methods to find invariant subgraphs. Second, we investigate the contributions of each component of the proposed IGM. For the first part, we initially train a previous OOD graph generalization method (CIGA, DIR); we then use the subgraph extractor from the trained model as our model's invariant subgraph extractor, fix its parameters, and train with our two kinds of mixup. For the second part, we compare our model (two kinds of mixup) with two ablated models (only invariant Mixup and only environment Mixup) and ERM. We conduct experiments on the Graph-Twitter, DD, and BBBP datasets, obtaining results under three different distribution shifts. The results are demonstrated in Figure 2.

Figure 2: The first three panels present ablation studies on the Graph-Twitter (ACC), DD (MCC), and BBBP (AUC) datasets, comparing ERM, DIR+IGM, CIGA+IGM, IGM-I, IGM-E, and IGM and emphasizing the importance of utilizing both environment and invariant Mixup. The last panel provides an analysis of environment subgraph clustering (NMI) and classification accuracy over training epochs on Graph-Twitter, demonstrating the mutual enhancement of the two mixup components.

We can observe that using the subgraph extractor from the previous methods (CIGA, DIR) combined with our two mixup methods for training is superior to ERM but still falls short of our method. Specifically, IGM outperforms the IGM variants using a pre-trained extractor by an average of 4.6%. This validates the necessity of employing both mixup methods to obtain invariant subgraphs. Furthermore, we can observe that the performance of the two ablated models is somewhat diminished compared to simultaneously using both mixup methods, yet they still outperform ERM by 1.6% on average. This indicates that both types of mixup can achieve OOD generalization to a certain extent. Among them, the model trained only with invariant Mixup shows a more significant reduction in performance than the model using only environment Mixup.
Collaboration of Two Mixups. To show that the environment Mixup module and the invariant Mixup module mutually promote each other, we record the test accuracy and the Normalized Mutual Information (NMI) (Strehl and Ghosh 2002), a common clustering metric that can reflect the quality of the generated environments for invariant learning.
As shown in Figure 2, the results on Graph-Twitter demonstrate that these two metrics improve synchronously during the training process. One plausible reason is that, during environment Mixup, invariant learning across different environments captures invariant information, thereby promoting the invariant Mixup. Conversely, in the invariant Mixup phase, the captured invariant information amplifies the environment Mixup by delineating environments with larger distribution shifts, subsequently enhancing invariant learning in the environment Mixup segment.

Hyper-parameter Sensitivity Analysis (RQ3)
We conduct experiments on DrugOOD to examine our model's sensitivity to hyper-parameters. We select three critical parameters of the model: the IRM weight γ, the V-REx weight µ, and the invariant Mixup weight δ. We vary γ, µ, and δ in {0.1, 0.5, 1, 2, 4}. The results are shown in Figure 3. Our method remains stable and effective across different values of these hyper-parameters.

Figure 3: AUC sensitivity to the hyper-parameters γ, µ, and δ on DrugOOD.

Invariant Subgraph Visualization (RQ4)
To verify whether our method captures the invariant information, we first visualize the invariant subgraphs found by our model and other graph OOD methods on Graph-Twitter; we use this dataset because it is comprehensible to humans.

Figure 4: Visualization of the invariant subgraphs extracted by different models ((a) DIR, (b) CIGA, (c) IGM) on the Graph-SST dataset. The original sentence is "Bought one of those rechargeable iPhone backup battery packs... no more fear".

It can be observed that our model adeptly identifies the specific subgraphs that are pivotal in determining the sentiment of the sentences. In contrast, DIR fails to capture all
A meta-transfer objective for learning to disentangle causal mechanisms. In Eighth International Conference on Learning Representations. OpenReview. net. Bevilacqua, B.; Zhou, Y.; and Ribeiro, B. 2021. Size-invariant graph representations for graph classification extrapolations. In International Conference on Machine Learning, 837–851. PMLR. Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2014. Spectral Networks and Locally Connected Networks on Graphs. In ICLR. Buffelli, D.; Li`o, P.; and Vandin, F. 2022. Sizeshiftreg: a regularization method for improving size-generalization in graph neural networks. Advances in Neural Information Processing Systems, 35: 31871–31885. Chen, F.; Wang, Y.-C.; Wang, B.; and Kuo, C.-C. J. 2020. Graph representation learning: a survey. APSIPA Transactions on Signal and Information Processing, 9: e15. Chen, Y.; Zhang, Y.; Bian, Y.; Yang, H.; Kaili, M.; Xie, B.; Liu, T.; Han, B.; and Cheng, J. 2022a. Learning causally invariant representations for out-of-distribution generalization on graphs. Advances in Neural Information Processing Systems, 35: 22131–22148. Chen, Y.; Zhou, K.; Bian, Y.; Xie, B.; Ma, K.; Zhang, Y.; Yang, H.; Han, B.; and Cheng, J. 2022b. Pareto invariant risk minimization. arXiv preprint arXiv:2206.07766. Chen, Z.; Xiao, T.; and Kuang, K. 2022. Ba-gnn: On learning bias-aware graph neural network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 3012–3024. IEEE. Chou, H.-P.; Chang, S.-C.; Pan, J.-Y.; Wei, W.; and Juan, D.-C. 2020. Remix: rebalanced mixup. In Computer Vision– ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16, 95–110. Springer. Creager, E.; Jacobsen, J.-H.; and Zemel, R. 2021. Environment inference for invariant learning. In International Conference on Machine Learning, 2189–2200. PMLR. Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In NeurIPS, 3837–3845. Dong, L.; Wei, F.; Tan, C.; Tang, D.; Zhou, M.; and Xu, K. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 2: Short papers), 49–54. Guo, H.; and Mao, Y. 2021. ifmixup: Towards intrusionfree graph mixup for graph classification. arXiv e-prints, arXiv–2110. Hamilton, W.; Ying, Z.; and Leskovec, J. 2017a. Inductive representation learning on large graphs. Advances in neural information processing systems, 30. Hamilton, W. L.; Ying, R.; and Leskovec, J. 2017b. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584. Han, X.; Jiang, Z.; Liu, N.; and Hu, X. 2022. G-mixup: Graph data augmentation for graph classification. In International Conference on Machine Learning, 8230–8248. PMLR. Hu, W.; Fey, M.; Zitnik, M.; Dong, Y.; Ren, H.; Liu, B.; Catasta, M.; and Leskovec, J. 2020a. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33: 22118–22133. Hu, W.; Fey, M.; Zitnik, M.; Dong, Y.; Ren, H.; Liu, B.; Catasta, M.; and Leskovec, J. 2020b. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33: 22118–22133. Jang, E.; Gu, S.; and Poole, B. 2016. Categorical Reparameterization with Gumbel-Softmax. In International Conference on Learning Representations. 
Ji, Y.; Zhang, L.; Wu, J.; Wu, B.; Huang, L.-K.; Xu, T.; Rong, Y.; Li, L.; Ren, J.; Xue, D.; et al. 2022. DrugOOD: Out-of-Distribution (OOD) Dataset Curator and Benchmark for AI-aided Drug Discovery–A Focus on Affinity Prediction Problems with Noise Annotations. arXiv preprint arXiv:2201.09637. Kim, J.; Choo, W.; Jeong, H.; and Song, H. O. 2020. CoMixup: Saliency Guided Joint Mixup with Supermodular Diversity. In International Conference on Learning Representations. Kipf, T. N.; and Welling, M. 2016. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations. Koh, P. W.; Sagawa, S.; Marklund, H.; Xie, S. M.; Zhang, M.; Balsubramani, A.; Hu, W.; Yasunaga, M.; Phillips, R. L.; Gao, I.; et al. 2021. Wilds: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning, 5637–5664. PMLR. Krueger, D.; Caballero, E.; Jacobsen, J.-H.; Zhang, A.; Binas, J.; Zhang, D.; Le Priol, R.; and Courville, A. 2021. Out-ofdistribution generalization via risk extrapolation (rex). In International Conference on Machine Learning, 5815–5826. PMLR. Lee, J. B.; Rossi, R.; and Kong, X. 2018. Graph classification using structural attention. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1666–1674. Li, H.; Wang, X.; Zhang, Z.; and Zhu, W. 2022a. Ood-gnn: Out-of-distribution generalized graph neural network. IEEE Transactions on Knowledge and Data Engineering. Li, H.; Zhang, Z.; Wang, X.; and Zhu, W. 2022b. Learning invariant graph representations for out-of-distribution generalization. Advances in Neural Information Processing Systems, 35: 11828–11841. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8569 Liu, G.; Zhao, T.; Xu, J.; Luo, T.; and Jiang, M. 2022. Graph rationalization with environment-based augmentations. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1069–1078. Miao, S.; Liu, M.; and Li, P. 2022. Interpretable and generalizable graph learning via stochastic attention mechanism. In International Conference on Machine Learning, 15524– 15543. PMLR. Morris, C.; Kriege, N. M.; Bause, F.; Kersting, K.; Mutzel, P.; and Neumann, M. 2020. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663. Park, J.; Shim, H.; and Yang, E. 2022. Graph transplant: Node saliency-guided graph mixup with local structure preservation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 7966–7974. Pinto, F.; Yang, H.; Lim, S. N.; Torr, P.; and Dokania, P. 2022. Using mixup as a regularizer can surprisingly improve accuracy & out-of-distribution robustness. Advances in Neural Information Processing Systems, 35: 14608–14622. Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C. D.; Ng, A. Y.; and Potts, C. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, 1631–1642. Strehl, A.; and Ghosh, J. 2002. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. Journal of machine learning research, 3(Dec): 583–617. Sui, Y.; Wang, X.; Wu, J.; Lin, M.; He, X.; and Chua, T.S. 2022. Causal attention for interpretable and generalizable graph classification. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1696–1705. Vapnik, V. 1999. 
The nature of statistical learning theory. Springer science & business media. Veliˇckovi´c, P.; Cucurull, G.; Casanova, A.; Romero, A.; Li`o, P.; and Bengio, Y. 2018. Graph Attention Networks. In International Conference on Learning Representations. Verma, V.; Lamb, A.; Beckham, C.; Najafi, A.; Mitliagkas, I.; Lopez-Paz, D.; and Bengio, Y. 2019. Manifold mixup: Better representations by interpolating hidden states. In International conference on machine learning, 6438–6447. PMLR. Wang, Y.; Wang, W.; Liang, Y.; Cai, Y.; and Hooi, B. 2021. Mixup for node and graph classification. In Proceedings of the Web Conference 2021, 3663–3674. Wu, Y.; Wang, X.; Zhang, A.; He, X.; and Chua, T.-S. 2021. Discovering Invariant Rationales for Graph Neural Networks. In International Conference on Learning Representations. Xia, D.; Wang, X.; Liu, N.; and Shi, C. 2023. Learning Invariant Representations of Graph Neural Networks via Cluster Generalization. In Thirty-seventh Conference on Neural Information Processing Systems. Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2018. How Powerful are Graph Neural Networks? In International Conference on Learning Representations. Yang, N.; Zeng, K.; Wu, Q.; Jia, X.; and Yan, J. 2022. Learning substructure invariance for out-of-distribution molecular representations. Advances in Neural Information Processing Systems, 35: 12964–12978. Yehudai, G.; Fetaya, E.; Meirom, E.; Chechik, G.; and Maron, H. 2021. From local structures to size generalization in graph neural networks. In International Conference on Machine Learning, 11975–11986. PMLR. Yoo, H.; Lee, Y.-C.; Shin, K.; and Kim, S.-W. 2023. Disentangling Degree-related Biases and Interest for Out-ofDistribution Generalized Directed Network Embedding. In Proceedings of the ACM Web Conference 2023, 231–239. Yuan, H.; Yu, H.; Gui, S.; and Ji, S. 2022. Explainability in graph neural networks: A taxonomic survey. IEEE transactions on pattern analysis and machine intelligence, 45(5): 5782–5799. Yun, S.; Han, D.; Oh, S. J.; Chun, S.; Choe, J.; and Yoo, Y. 2019. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF international conference on computer vision, 6023–6032. Zhang, H.; Cisse, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2018. mixup: Beyond Empirical Risk Minimization. In International Conference on Learning Representations. Zhang, Z.; Wang, X.; Zhang, Z.; Li, H.; Qin, Z.; and Zhu, W. 2022. Dynamic graph neural networks under spatio-temporal distribution shift. Advances in Neural Information Processing Systems, 35: 6074–6089. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8570
Enhancing Multi-Scale Diffusion Prediction via Sequential Hypergraphs and Adversarial Learning
Pengfei Jiao1, 4, Hongqian Chen1, Qing Bao1, Wang Zhang2, Huaming Wu3*
1School of Cyberspace, Hangzhou Dianzi University, China
2College of Intelligence and Computing, Tianjin University, China
3Center for Applied Mathematics, Tianjin University, China
4Data Security Governance Zhejiang Engineering Research Center, Hangzhou Dianzi University, China
{pjiao, hqchen, qbao}@hdu.edu.cn, {wangzhang, whming}@tju.edu.cn
*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Information diffusion prediction plays a crucial role in understanding the propagation of information in social networks, encompassing both macroscopic and microscopic prediction tasks. Macroscopic prediction estimates the overall impact of information diffusion, while microscopic prediction focuses on identifying the next user to be influenced. While prior research often concentrates on one of these aspects, a few tackle both concurrently. These two tasks provide complementary insights into the diffusion process at different levels, revealing common traits and unique attributes. The exploration of leveraging common features across these tasks to enhance information prediction remains an underexplored avenue. In this paper, we propose an intuitive and effective model that addresses both macroscopic and microscopic prediction tasks. Our approach considers the interactions and dynamics among cascades at the macro level and incorporates the social homophily of users in social networks at the micro level. Additionally, we introduce adversarial training and orthogonality constraints to ensure the integrity of shared features. Experimental results on four datasets demonstrate that our model significantly outperforms state-of-the-art methods.

Introduction
Online social platforms have become an essential part of our daily lives, enriching instant communication among individuals and expediting the swift dissemination of information. The activity patterns of users in social networks play a pivotal role in the spread of information, leading to the emergence of information cascades. Gaining a deeper understanding of the underlying mechanisms of information diffusion carries significant economic and social advantages, with applications in various fields, including fake news detection (Zhang et al. 2023), viral marketing (Miller and Lammas 2010), and recommender systems (Ko et al. 2022).

Figure 1: Illustrations depicting macroscopic cascade size prediction (left: given users observed to be influenced at t1, ..., t10, how many users will be influenced in the end?) and microscopic next influenced user prediction (right: given users observed to be influenced at t1, ..., t10, who is the next influenced user at t11?).

As shown in Fig. 1, current research on modeling information cascades primarily focuses on two key aspects: 1) macroscopic prediction, such as DeepCas (Li et al. 2016) and CasCN (Chen et al. 2019b), estimating the incremental or total size of a cascade; and 2) microscopic prediction, such as TopoLSTM (Wang et al. 2017) and SNIDSA (Wang, Chen, and Li 2018), predicting the subsequent user to be influenced within the cascade. On the one hand, macro-prediction concentrates on overarching patterns and trends, employing network topology and dissemination models to forecast information propagation.
On the other hand, micro-prediction delves into the particulars of individual users' behaviors and attributes, utilizing analyses of user and content characteristics to anticipate the impact of information diffusion. Macro-prediction and micro-prediction collectively provide a comprehensive understanding of information dissemination across various levels and can mutually reinforce and enhance each other. Since both tasks require learning propagation features from observed cascades, they inherently share commonalities. Hence, enhancing prediction accuracy by extracting common features between these tasks assumes paramount importance.

However, the extraction of such common features is confronted with two challenges. Firstly, information dissemination involves complex interactions not only within a given cascade but also between different cascades. Moreover, the evolution of cascades over time demands an approach capable of encapsulating both global interactions and dynamic changes. Secondly, ensuring the purity of shared features in the presence of potential contamination by private features poses a significant challenge.

To the best of our knowledge, only a limited number of studies have introduced a unified model catering to both macro and micro scales. The most representative works are FOREST (Yang et al. 2019) and DMT-LIC (Chen et al. 2019a). Nevertheless, FOREST primarily utilizes the outcomes of micro-prediction to guide macro-prediction, lacking a comprehensive recognition of the mutually reinforcing synergy inherent in these two tasks. Similarly, while DMT-LIC incorporates a shared representation layer to capture cascade graph representations and diffusion processes, it fails to address the potential contamination and redundancy between the shared features and task-specific features. Moreover, both methods primarily concentrate on user interactions within individual cascades, neglecting the intricate interactions and dynamics among cascades at a global level.

To address the above challenges, we propose MINDS, a streamlined and efficient model for Multi-scale INformation DiffuSion prediction. Specifically, at the macro level, we construct sequential hypergraphs to effectively capture the interactions and dynamics among cascades. From a global perspective, modeling complex interactions among users and cascades is consistent with the concept of the hypergraph, and constructing sequential hypergraphs by dividing the time period into sequential time windows can accurately describe the dynamic evolution of the cascades. At the micro level, we focus on understanding the social homophily among users within social networks. We design a shared module to learn shared features for both the macro and micro tasks. Furthermore, we incorporate adversarial training and orthogonality constraints to mitigate feature redundancy and contamination between shared and task-specific features.

In summary, the main contributions of this paper are three-fold:
• We propose an effective and straightforward model that tackles both macro and micro prediction, leveraging their mutual reinforcement to enhance overall performance.
• We introduce an approach that captures the interactions and dynamics among cascades by modeling information diffusion in sequential hypergraphs. To address feature redundancy, we incorporate adversarial training and orthogonality constraints.
• We conducted comprehensive experiments to evaluate our model's performance. The results demonstrate its superiority over state-of-the-art methods in both macro and micro prediction.

Problem Formulation

To commence, we present the social graph and diffusion hypergraphs that constitute the foundation for diffusion prediction within our model. The social graph is denoted as $G_S = (U, E)$, where $U$ is the user set and $E$ is the edge set. Each edge $(u_i, u_j) \in E$ represents a social relationship between users $u_i$ and $u_j$. The observed diffusion cascades $D = \{d_1, d_2, \ldots, d_M\}$, $|D| = M$, are split into $T$ subsets according to timestamps for constructing sequential diffusion hypergraphs $G_D = \{G_D^t \mid t = 1, 2, \ldots, T\}$, $G_D^t = (U^t, E^t)$, where $U^t$ is the user set and $E^t$ is the hyperedge set. In a diffusion hypergraph, users who participate in the same cascade are connected by a hyperedge; in other words, each hyperedge represents a cascade. Note that the set of nodes connected by a given hyperedge differs across the hypergraphs: if $u_i$ participates in $d_m$ during the $t$-th time interval, then $u_i$ is connected to hyperedge $e_m$ only in diffusion hypergraph $G_D^t$ (a construction sketch is given after the task definitions below). In this work, we aim to address both the macroscopic and microscopic diffusion prediction problems based on the above definitions.

Macroscopic Diffusion Prediction: Given a social graph $G_S$, diffusion hypergraphs $G_D$, and an observed diffusion sequence $d_m = \{(u_i^m, t_i^m) \mid u_i^m \in U\}$, estimate the final size $|d_m|$ of cascade $d_m$.

Microscopic Diffusion Prediction: Given a social graph $G_S$, diffusion hypergraphs $G_D$, and an observed diffusion sequence $d_m = \{(u_i^m, t_i^m) \mid u_i^m \in U\}$, predict which user will participate in $d_m$ in the next step.
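Before moving on, the following is a minimal sketch of how the sequential diffusion hypergraphs $G_D^t$ defined above could be materialized from timestamped cascades. The function name, the dictionary-based cascade representation, and the choice to clamp the last timestamp into the final window are illustrative assumptions, not details given in the paper.

```python
from collections import defaultdict

def build_sequential_hypergraphs(cascades, num_intervals):
    """Split timestamped cascades into `num_intervals` windows and build one
    hypergraph per window as an incidence map {cascade_id: {active users}};
    each cascade plays the role of one hyperedge e_m in G_D^t."""
    all_ts = [t for seq in cascades.values() for _, t in seq]
    t_min, t_max = min(all_ts), max(all_ts)
    width = (t_max - t_min) / num_intervals or 1.0  # guard against a zero-width range

    hypergraphs = [defaultdict(set) for _ in range(num_intervals)]
    for cid, seq in cascades.items():
        for user, ts in seq:
            # Map the timestamp to its window; clamp the maximum into the last one.
            k = min(int((ts - t_min) / width), num_intervals - 1)
            hypergraphs[k][cid].add(user)
    return hypergraphs

# Toy usage: cascade "m1" spreads over two windows, so hyperedge "m1"
# connects different user sets in G_D^1 and G_D^2.
toy = {"m1": [("u1", 0.0), ("u2", 1.0), ("u3", 9.0)], "m2": [("u2", 5.0)]}
print(build_sequential_hypergraphs(toy, num_intervals=2))
```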
Method

In this section, we provide a comprehensive introduction to the proposed model. The architectural overview is depicted in Fig. 2 and comprises four primary modules:
User Global Interactive Learning Module: extracts user preferences at each time interval and characterizes the dynamic changes of cascades; a fusion layer at the cascade level facilitates this process.
User Social Homophily Learning Module: captures users' social relationships at the individual user level using Graph Convolutional Networks (GCN).
Shared-private Representation Learning Module: learns task-specific representations and shared representations to facilitate diffusion prediction.
Diffusion Prediction Module: concatenates task-specific features with the shared representation for macroscopic and microscopic diffusion prediction, respectively.

[Figure 2: The architectural overview of our model.]

User Global Interactive Learning

In order to simultaneously account for the global interactions among cascades and their dynamic changes, we build on the constructed sequential diffusion hypergraphs: we introduce the HGNN to learn the global user interactions of each independent time interval at the cascade level, and we add a fusion layer between two consecutive time intervals to model the dynamics of cascades.

Hypergraph Neural Network. At each time interval, we model the interactions of users through HGNN. The process of HGNN is illustrated in Fig. 3. For a simple graph, graph convolution aggregates the neighbor vertices to obtain a new representation of the central vertex, so vertex information is passed through edges. Similarly, hyperedges play the role of information transmission in a hypergraph. Message aggregation in the hypergraph can be summarized in a two-stage procedure: 1) Vertex-to-Hyperedge; 2) Hyperedge-to-Vertex.

[Figure 3: The two stages of hypergraph convolution.]

Vertex-to-Hyperedge. Given a diffusion hypergraph $G_D^t$, the first stage of HGNN updates the feature $y_{j,t}$ of hyperedge $e_j^t$ by aggregating the information of all its connected vertices:

$$y_{j,t}^{l} = \sigma\Big( w_{e_j^t} \cdot \sum_{u_i^t \in N_v(e_j^t)} \frac{x_{i,t}^{l}}{|N_v(e_j^t)|} \Big), \tag{1}$$

where $\sigma$ is a non-linear activation function (ReLU) and $N_v(e_j^t)$ is the set of vertices connected by hyperedge $e_j^t$. $w_{e_j^t}$ is a weight associated with hyperedge $e_j^t$. We consider each cascade to be of equal importance and give the same weight to each hyperedge when aggregating, i.e., $w_{e_j^t} = 1$.

Hyperedge-to-Vertex. After updating the features of hyperedges, the second stage aggregates the information of all hyperedges in which $u_i^t$ participates to update the feature $x_{i,t}$ of $u_i^t$ at the $t$-th time interval:

$$x_{i,t}^{l+1} = \sigma\Big( \Theta^{l} \cdot \sum_{e_j^t \in N_e(u_i^t)} \frac{y_{j,t}^{l}}{|N_e(u_i^t)|} \Big), \tag{2}$$

where $N_e(u_i^t)$ is the set of hyperedges connected to vertex $u_i^t$, $\Theta^{l} \in \mathbb{R}^{d \times d}$ is a trainable parameter of layer $l$, and $d$ is the embedding dimension.

Sequential HGNNs with Fusion Layer. The above two-stage convolution only learns user interactions at a specific time interval, which cannot adequately characterize the evolution of cascades during propagation. Therefore, we design a fusion strategy to connect the interactions at different time intervals learned by HGNN in chronological order:

$$x_{i,t+1}^{0} = \alpha\, x_{i,t}^{L} + (1 - \alpha)\, x_{i,t}^{0}, \qquad \alpha = \frac{\exp\big(W_{F2}^{\top}\, \sigma(W_{F1} x_{i,t}^{L})\big)}{\exp\big(W_{F2}^{\top}\, \sigma(W_{F1} x_{i,t}^{L})\big) + \exp\big(W_{F2}^{\top}\, \sigma(W_{F1} x_{i,t}^{0})\big)}, \tag{3}$$

where $x_{i,t}^{0}$ is the initial feature of user $u_i^t$ and $x_{i,t}^{L}$ is the updated feature of user $u_i^t$ learned from diffusion hypergraph $G_D^t$ through an $L$-layer HGNN. At the first time interval, we initialize the user feature embedding from a normal distribution. $\sigma(\cdot)$ is the ReLU activation function, $W_{F1}$ is a transformation matrix, and $W_{F2}^{\top}$ is the vector used for calculating attention scores. We obtain the final global interactive representation $X_D$ through the sequential HGNNs.
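To make Eqs. (1)-(3) concrete, here is a minimal PyTorch sketch of one two-stage hypergraph convolution over a dense incidence matrix, followed by the attention fusion between consecutive intervals. The class names and the dense-matrix representation are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HGNNLayer(nn.Module):
    """One two-stage hypergraph convolution (Eqs. 1-2): vertex -> hyperedge -> vertex.
    `H` is the |V| x |E| incidence matrix of one diffusion hypergraph G_D^t;
    hyperedge weights are fixed to 1 as in the paper."""
    def __init__(self, dim):
        super().__init__()
        self.theta = nn.Linear(dim, dim, bias=False)  # plays Theta^l in Eq. (2)

    def forward(self, X, H):
        deg_e = H.sum(dim=0).clamp(min=1)             # |N_v(e)| per hyperedge
        Y = F.relu((H.t() @ X) / deg_e.unsqueeze(1))  # Eq. (1): mean over member vertices
        deg_v = H.sum(dim=1).clamp(min=1)             # |N_e(u)| per vertex
        X_new = (H @ Y) / deg_v.unsqueeze(1)          # mean over incident hyperedges
        return F.relu(self.theta(X_new))              # Eq. (2)

class FusionLayer(nn.Module):
    """Attention fusion between consecutive intervals (Eq. 3)."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)     # W_F1
        self.w2 = nn.Linear(dim, 1, bias=False)       # W_F2

    def forward(self, x_updated, x_init):
        s_new = self.w2(F.relu(self.w1(x_updated)))   # score for x^L_{i,t}
        s_old = self.w2(F.relu(self.w1(x_init)))      # score for x^0_{i,t}
        alpha = torch.softmax(torch.cat([s_new, s_old], dim=-1), dim=-1)[..., :1]
        return alpha * x_updated + (1 - alpha) * x_init  # becomes x^0_{i,t+1}
```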
User Social Homophily Learning

Users tend to have more social interactions with users who are similar to them; this is the principle of social homophily. Close friends, who are usually alike in certain qualities or interests, have more influence on each other than dissimilar ones. Users' social homophily is reflected in the social network structure. We therefore introduce the social graph to model user social relationships and apply a multi-layer GCN to embed social homophily. Given the social graph $G_S = (U, E)$, the user social homophily embedding matrix $X_S^{l}$ at the $l$-th layer is updated by

$$X_S^{l+1} = \sigma\big( \tilde{D}_S^{-\frac{1}{2}} \tilde{A}_S \tilde{D}_S^{-\frac{1}{2}} X_S^{l} W_S \big), \tag{4}$$

where $\sigma$ is the ReLU activation function, $W_S$ is a trainable weight matrix, and $\tilde{A}_S$ and $\tilde{D}_S$ are the adjacency and degree matrices of $G_S$ with self-loops. The initial homophily embedding matrix $X_S^{0} \in \mathbb{R}^{N \times d}$ is randomly initialized from a normal distribution, and $d$ is the embedding dimension. We obtain the final social homophily representation $X_S$ after several layers of GCN.

Shared-private Representation Learning

Graph-based representation learning captures the co-occurrence relationships of users at the user level and the cascade level; however, it does not enable further analysis of context interactions within cascades. Therefore, owing to the excellent performance of the LSTM in sequential tasks such as natural language processing, we apply two LSTM modules to learn the social and global context interactions within cascades, respectively. We believe that there exist hidden common features between the macro-prediction and micro-prediction tasks, with the potential to enhance the performance of each task individually. Drawing inspiration from the principles of multi-task learning (Liu, Qiu, and Huang 2017), we propose a shared LSTM architecture aimed at capturing these shared characteristics between the two tasks. Furthermore, to tackle the issue of feature redundancy, we introduce a combination of adversarial training and orthogonality constraints.

Private Representation Learning. We utilize the LSTM to model the cascade diffusion process sequentially, where a hidden state is employed to capture the diffusion history. The update of each LSTM unit can be shortened as

$$h_t = \mathrm{LSTM}(h_{t-1}, x_t, \theta_p), \tag{5}$$

where $h_t \in \mathbb{R}^{d}$ is the hidden state and $x_t \in \mathbb{R}^{d}$ is the input at the current time step. $\theta_p$ represents all the parameters in the LSTM. Based on the defined LSTM, we compute the representations of the context interactions for the user global interaction matrix $X_D$ and the user social homophily matrix $X_S$ as follows:

$$h_t^{cas} = \mathrm{LSTM}(h_{t-1}^{cas}, x_t^{d}, \theta_{cas}), \qquad h_t^{user} = \mathrm{LSTM}(h_{t-1}^{user}, x_t^{s}, \theta_{user}), \tag{6}$$

where $\mathrm{LSTM}(\cdot, \theta)$ is defined as in Eq. (5). Thus, the task-specific embeddings are represented as $H^{cas} \in \mathbb{R}^{N \times d}$ and $H^{user} \in \mathbb{R}^{N \times d}$.

Shared Representation Learning. Inspired by the gated mechanisms used in the LSTM, we design a novel shared LSTM that takes $X_D$ and $X_S$ as input:

$$
\begin{aligned}
f_t &= \sigma(x_{t-1}^{D} W_f + x_{t-1}^{S} U_f + h_{t-1} V_f + b_f),\\
i_t &= \sigma(x_{t-1}^{D} W_i + x_{t-1}^{S} U_i + h_{t-1} V_i + b_i),\\
o_t &= \sigma(x_{t-1}^{D} W_o + x_{t-1}^{S} U_o + h_{t-1} V_o + b_o),\\
\tilde{c}_t &= \tanh(x_{t-1}^{D} W_c + x_{t-1}^{S} U_c + h_{t-1} V_c + b_c),\\
c_t &= \tilde{c}_t \cdot i_t + c_{t-1} \cdot f_t,\\
h_t &= o_t \cdot \tanh c_t,
\end{aligned} \tag{7}
$$

where $\sigma$ is the sigmoid function, and $W_* \in \mathbb{R}^{d \times d}$, $U_* \in \mathbb{R}^{d \times d}$, $V_* \in \mathbb{R}^{d \times d}$, and $b_* \in \mathbb{R}^{d}$ are trainable parameters. The input gate $i_t$ controls the amount of new information added to the hidden state, while the forget gate $f_t$ regulates the amount of information discarded from the previous memory cell $c_{t-1}$. The output gate $o_t$ determines the amount of information to be output in the hidden state $h_t$. By integrating the forget gate, input gate, memory update, and output gate, the shared LSTM can effectively handle the intricate relationship between micro-features and macro-features. We finally obtain a comprehensive representation of the features shared between the macro-prediction and micro-prediction tasks from the shared LSTM, denoted as $H^{share} \in \mathbb{R}^{N \times d}$.
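The shared LSTM of Eq. (7) differs from a standard LSTM only in that every gate mixes two input streams. A minimal PyTorch sketch of one step follows; fusing the four gates into one linear map per input source is an implementation convenience, not notation from the paper.

```python
import torch
import torch.nn as nn

class SharedLSTMCell(nn.Module):
    """One step of the shared LSTM in Eq. (7): each gate mixes the global-interaction
    input x_D, the social-homophily input x_S, and the previous hidden state."""
    def __init__(self, dim):
        super().__init__()
        # One fused projection per input source, producing all four gates at once.
        self.W = nn.Linear(dim, 4 * dim, bias=False)  # acts on x_D (plays W_*)
        self.U = nn.Linear(dim, 4 * dim, bias=False)  # acts on x_S (plays U_*)
        self.V = nn.Linear(dim, 4 * dim, bias=True)   # acts on h_{t-1}; its bias plays b_*

    def forward(self, x_d, x_s, h_prev, c_prev):
        gates = self.W(x_d) + self.U(x_s) + self.V(h_prev)
        f, i, o, g = gates.chunk(4, dim=-1)
        f, i, o = torch.sigmoid(f), torch.sigmoid(i), torch.sigmoid(o)
        c = torch.tanh(g) * i + c_prev * f   # c_t = c~_t * i_t + c_{t-1} * f_t
        h = o * torch.tanh(c)                # h_t = o_t * tanh(c_t)
        return h, c
```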
Adversarial Training. Although the shared-private LSTM is designed to learn shared and task-specific features separately, there is no guarantee that task-specific information does not leak into the shared feature space, or vice versa. A simple principle can therefore be applied to the shared LSTM: a reliable shared feature should consist primarily of common information, without any task-specific information. Inspired by adversarial networks, we introduce adversarial training to enforce this. A task discriminator maps a representation into a probability distribution, estimating which task the encoded feature comes from:

$$D(h, \theta_D) = \mathrm{softmax}(b + U h), \tag{8}$$

where $U \in \mathbb{R}^{d \times d}$ is a learnable parameter and $b \in \mathbb{R}^{d}$ is a bias. To prevent task-specific features from infiltrating the shared representation, we design a task adversarial loss $L_{adv}$, which trains the model such that the shared features it generates are not easily attributable to their corresponding tasks by a classifier. Formally, the task adversarial loss is defined as

$$L_{adv} = \min_{\theta_{share}} \max_{\theta_D} \sum_{k=1}^{2} \sum_{n=1}^{N} \big( \log D(h_n^{k}) + \log(1 - D(h_n^{share})) \big), \tag{9}$$

where $\theta_{share}$ represents all the parameters in the shared LSTM and $k$ denotes the task type (either macro or micro). The optimization follows a min-max framework, the underlying idea being that the shared LSTM generates representations that intentionally confuse the task discriminator. As training progresses, the shared feature extractor and the task discriminator gradually converge to a point beyond which additional improvement becomes difficult, and the task discriminator becomes progressively incapable of distinguishing among the tasks. This convergence indicates that the feature extractor has learned to generate shared features that are indistinguishable across tasks.

Orthogonality Constraints. The above model still has a potential drawback: task-invariant features can appear in both the shared and private representations. To alleviate this, we introduce orthogonality constraints, which penalize redundant latent representations and encourage the shared and private LSTMs to encode different aspects of the inputs:

$$L_{diff} = \big\| {H^{share}}^{\top} H^{cas} \big\|_F^2 + \big\| {H^{share}}^{\top} H^{user} \big\|_F^2, \tag{10}$$

where $\|\cdot\|_F^2$ is the squared Frobenius norm.
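A sketch of how Eqs. (8)-(10) might be computed is shown below. Since the paper states the min-max objective of Eq. (9) but not its optimization schedule, the alternating discriminator/encoder losses here (with the encoder pushing shared features toward a task-uniform prediction) are one common realization and should be read as an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def orthogonality_loss(h_share, h_cas, h_user):
    """L_diff of Eq. (10): squared Frobenius norms of the cross-correlation
    between the shared and each task-specific representation matrix (N x d)."""
    return ((h_share.t() @ h_cas) ** 2).sum() + ((h_share.t() @ h_user) ** 2).sum()

class TaskDiscriminator(nn.Module):
    """D(h) of Eq. (8): maps a representation to task logits (softmax in the loss)."""
    def __init__(self, dim, num_tasks=2):
        super().__init__()
        self.proj = nn.Linear(dim, num_tasks)  # plays U and b

    def forward(self, h):
        return self.proj(h)

def adversarial_step(disc, h_cas, h_user, h_share):
    """One way to realize the min-max of Eq. (9) with alternating objectives
    (an assumption; the paper does not spell out its schedule).
    Returns (discriminator loss, shared-encoder confusion loss)."""
    macro = torch.zeros(h_cas.size(0), dtype=torch.long)   # task id 0
    micro = torch.ones(h_user.size(0), dtype=torch.long)   # task id 1
    # Discriminator: tell the two tasks apart from their private features.
    d_loss = (F.cross_entropy(disc(h_cas.detach()), macro)
              + F.cross_entropy(disc(h_user.detach()), micro))
    # Shared encoder: make shared features task-ambiguous by pushing the
    # discriminator's prediction toward the uniform distribution.
    logits = disc(h_share)
    uniform = torch.full_like(logits, 1.0 / logits.size(-1))
    enc_loss = F.cross_entropy(logits, uniform)
    return d_loss, enc_loss
```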
Diffusion Prediction

We concatenate the task-specific representations $H^{cas}$ and $H^{user}$ with the shared representation $H^{share}$, respectively. These concatenated representations are then fed into distinct output layers dedicated to each prediction task.

Macroscopic Diffusion Prediction. For macroscopic diffusion prediction, we aim to predict the final cascade size. We calculate the final size of diffusion cascade $d_m$ by

$$S_m = \mathrm{MLP}\big(\mathrm{concat}(h^{cas}, h^{share})\big), \tag{11}$$

where $\mathrm{concat}(\cdot, \cdot)$ is the concatenation operation. We train the macroscopic task by minimizing the following loss:

$$L_{macro} = \frac{1}{M} \sum_{m=1}^{M} (S_m - \hat{S}_m)^2, \tag{12}$$

where $M$ is the number of diffusion cascades and $\hat{S}_m$ is the ground truth.

Microscopic Diffusion Prediction. For microscopic diffusion prediction, we predict the next-influenced probability $p_i$ for user $u_i$:

$$p_i = \mathrm{softmax}\big(\mathrm{MLP}(\mathrm{concat}(h^{user}, h^{share}))\big). \tag{13}$$

We adopt the cross-entropy loss for microscopic training:

$$L_{micro} = - \sum_{j=2}^{|d_m|} \sum_{i=1}^{|U|} \hat{p}_{ji} \log(p_{ji}), \tag{14}$$

where $|U|$ is the number of users and $\hat{p}$ is the ground-truth probability: if user $u_i$ participates in cascade $d_m$ at step $j$, then $\hat{p}_{ji} = 1$; otherwise $\hat{p}_{ji} = 0$. The overall loss function of our model is defined as

$$L = \lambda L_{macro} + (1 - \lambda) L_{micro} + L_{adv} + \gamma L_{diff}, \tag{15}$$

where $\lambda$ is a balance parameter and $\gamma$ is a hyperparameter.
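Putting the pieces together, a minimal sketch of the overall objective in Eq. (15) might look as follows; treating the encoder-side confusion term as the $L_{adv}$ contribution, with the discriminator updated separately, is an assumption about the training schedule. The default λ and γ follow the values reported in the Experiment section below.

```python
def total_loss(l_macro, l_micro, l_adv_enc, l_diff, lam=0.3, gamma=0.05):
    """Eq. (15): L = lam * L_macro + (1 - lam) * L_micro + L_adv + gamma * L_diff.
    `l_adv_enc` is the shared-encoder part of the adversarial min-max; the
    discriminator's own loss would be minimized in a separate optimizer step."""
    return lam * l_macro + (1 - lam) * l_micro + l_adv_enc + gamma * l_diff
```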
Experiment

In this section, we conduct experiments on both microscopic and macroscopic cascade prediction to demonstrate the effectiveness of our proposed model.

Experimental Setting

Datasets. We conduct experiments on four datasets: Christianity, Android, Douban, and Memetracker. The statistics of these datasets are shown in Table 1. A detailed description of the datasets can be found in the Appendix.

Table 1: Statistics of datasets. Christ is short for the dataset Christianity, and Meme is short for the dataset Memetracker.

Dataset       Christ   Android   Douban      Meme
# Users        2,897     9,958   12,232     4,709
# Links       35,624    48,573   39,658   209,194
# Cascades       589       679    3,475    12,661
Avg. Length     22.9      33.3    21.76     16.24

Baselines. We compare thirteen representative baseline models with our model. For macroscopic prediction, we evaluate five models: DeepCas (Li et al. 2016), DeepHawkes (Cao et al. 2017), CasCN (Chen et al. 2019b), CasFlow (Xu et al. 2023b), and TCSE-net (Wu et al. 2022). For microscopic prediction, we evaluate six models: TopoLSTM (Wang et al. 2017), NDM (Yang et al. 2021), SNIDSA (Wang, Chen, and Li 2018), Inf-VAE (Sankar et al. 2020), DyHGCN (Yuan et al. 2020), and TAN-DRUD (Liu et al. 2022). For multi-scale prediction, we evaluate two models: FOREST (Yang et al. 2019) and DMT-LIC (Chen et al. 2019a). A detailed description of the baselines can be found in the Appendix.

Evaluation Metrics. For macroscopic prediction, we use Mean Squared Logarithmic Error (MSLE) as the evaluation metric, as in previous work (Cao et al. 2017; Li et al. 2016). For microscopic prediction, we use two ranking metrics used in (Yang et al. 2019): Mean Average Precision on top k (MAP@k) and Hits scores on top k (Hits@k), with k ∈ {10, 50, 100}.

Parameter Settings. For each dataset, we randomly allocate 80% of the cascades for training, 10% for validation, and the remaining 10% for testing. Baseline methods follow their original settings. For MINDS, we implement the model using PyTorch and use the Adam optimizer with a learning rate of 0.001. The embedding dimension is set to 64, and the batch size is 32. The balance parameter λ is assigned a value of 0.3, while the hyperparameter γ is set to 0.05. Social homophily learning uses a 2-layer GCN, and global interaction learning uses a single-layer HGNN. The number of time intervals is set to 8.

Performance Comparison

We conduct a comprehensive comparison of MINDS with the baselines on four datasets, focusing on microscopic and macroscopic diffusion prediction. The results are summarized in Tables 2, 3, and 4, and we observe the following:

1) MINDS consistently outperforms all state-of-the-art baselines in microscopic prediction tasks. Compared to the second-best model DyHGCN, MINDS leverages sequential hypergraphs to dynamically represent cascade interactions, leading to remarkable improvements of up to 3% in Hits scores and MAP scores.

2) MINDS consistently outperforms all state-of-the-art baselines in macroscopic prediction tasks, achieving at least a 10% decrease in MSLE. By combining macroscopic and microscopic prediction, MINDS achieves more promising performance.

3) MINDS convincingly outperforms representative baselines on multi-scale prediction tasks. The improvements stem from the purity of the shared features, which avoid contamination. MINDS' ability to handle both prediction tasks in a single model enables multi-scale information diffusion prediction.

Table 2: Results on four datasets (Hits@k scores for k = 10, 50, and 100), where higher scores indicate better performance.

Models     |     Christianity      |        Android        |        Douban         |      Memetracker
           |  @10    @50    @100   |  @10    @50    @100   |  @10    @50    @100   |  @10    @50    @100
TopoLSTM   | 0.1559 0.3653 0.4777  | 0.0460 0.1318 0.2103  | 0.0306 0.0143 0.0184  | 0.1908 0.3687 0.4683
NDM        | 0.0464 0.1145 0.1461  | 0.0170 0.0423 0.0555  | 0.0388 0.0506 0.0528  | 0.0931 0.1228 0.1279
SNIDSA     | 0.0660 0.2098 0.3502  | 0.0271 0.0829 0.1299  | 0.0702 0.1807 0.2324  | 0.1395 0.2945 0.3977
Inf-VAE    | 0.0767 0.2569 0.3853  | 0.0318 0.0938 0.1452  | 0.1364 0.2361 0.3059  | 0.1165 0.3096 0.4200
DyHGCN     | 0.2380 0.4689 0.5923  | 0.0748 0.1746 0.2596  | 0.1438 0.2648 0.3329  | 0.2522 0.4603 0.5710
TAN-DRUD   | 0.1908 0.4406 0.5697  | 0.0281 0.1024 0.1658  | 0.0841 0.1604 0.2175  | 0.2139 0.4247 0.5383
FOREST     | 0.2746 0.4665 0.5603  | 0.0866 0.1739 0.2314  | 0.1106 0.1986 0.2559  | 0.2648 0.4502 0.5499
DMT-LIC    | 0.2768 0.4442 0.5669  | 0.0932 0.1639 0.2315  | 0.1465 0.2506 0.3054  | 0.2746 0.4619 0.5656
MINDS      | 0.3214 0.4978 0.6250  | 0.1096 0.1989 0.2766  | 0.1956 0.3087 0.3641  | 0.2819 0.4760 0.5790

Table 3: Results on four datasets (MAP@k scores for k = 10, 50, and 100), where higher scores indicate better performance.

Models     |     Christianity      |        Android        |        Douban         |      Memetracker
           |  @10    @50    @100   |  @10    @50    @100   |  @10    @50    @100   |  @10    @50    @100
TopoLSTM   | 0.0523 0.0619 0.0635  | 0.0166 0.0202 0.0213  | 0.0354 0.0824 0.0884  | 0.0870 0.0955 0.0969
NDM        | 0.0144 0.0177 0.0182  | 0.0059 0.0070 0.0072  | 0.0141 0.0824 0.0884  | 0.0463 0.0480 0.0481
SNIDSA     | 0.0246 0.0306 0.0326  | 0.0100 0.0122 0.0129  | 0.0371 0.0419 0.0148  | 0.0605 0.0674 0.0689
Inf-VAE    | 0.0172 0.0254 0.0272  | 0.0076 0.0103 0.0110  | 0.0543 0.0588 0.0598  | 0.0425 0.0509 0.0525
DyHGCN     | 0.1062 0.1167 0.1184  | 0.0392 0.0434 0.0446  | 0.0801 0.0856 0.0865  | 0.1410 0.1502 0.1518
TAN-DRUD   | 0.0752 0.1167 0.1184  | 0.0099 0.0130 0.0139  | 0.0359 0.0401 0.0409  | 0.0991 0.1086 0.1102
FOREST     | 0.1569 0.1658 0.1672  | 0.0628 0.0667 0.0675  | 0.0655 0.0694 0.0702  | 0.1429 0.1514 0.1528
DMT-LIC    | 0.1649 0.1728 0.1746  | 0.0622 0.0652 0.0662  | 0.0812 0.0856 0.0897  | 0.1496 0.1581 0.1595
MINDS      | 0.1955 0.2037 0.2054  | 0.0677 0.0716 0.0727  | 0.1142 0.1199 0.1213  | 0.1535 0.1623 0.1638

Table 4: Experimental results on four datasets in terms of MSLE, where lower scores indicate better performance. Christ is short for the dataset Christianity, and Meme is short for the dataset Memetracker.

Model        Christ   Android   Douban    Meme
DeepCas       1.446     2.122    2.122   2.231
DeepHawkes    1.111     1.971    1.725   1.143
CasCN         1.046     0.981    1.476   0.967
CasFlow       0.765     1.041    0.465   0.535
TCSE-net      2.391     2.882    1.033   2.285
FOREST        1.726     0.556    0.825   0.621
DMT-LIC       1.692     0.201    0.741   0.701
MINDS         0.572     0.151    0.404   0.506

Ablation Study

We conduct ablation studies on the Christianity and Douban datasets to evaluate the individual contributions of the different submodules of MINDS. As shown in Table 5, MINDS achieves the best results compared to the other variants, indicating the effectiveness of its design.
Specifically, the observations are as follows: 1) Model performance declines after removing $L_{adv}$, $L_{diff}$, or both, validating the importance of introducing adversarial training and orthogonality constraints to address feature redundancy. 2) Introducing a series of interactive hypergraphs effectively captures cascade interactions from a global perspective, as demonstrated by the results of w/o HGNN. 3) Macroscopic prediction improves microscopic prediction by accurately predicting the propagation behavior of individual users; conversely, microscopic prediction enhances the understanding and interpretation of overall propagation trends by macroscopic prediction. The significant differences between w/o Macro, w/o Micro, and MINDS on the macro and micro indicators reveal the mutual reinforcement between the two tasks, which leads to improved performance.

Table 5: Ablation study on the Christianity and Douban datasets. We design six variants to demonstrate the rationale behind our model: w/o AdvDiff removes $L_{adv}$ and $L_{diff}$; w/o Diff removes $L_{diff}$; w/o Adv removes $L_{adv}$; w/o HGNN replaces sequential hypergraphs with sequential digraphs and HGNN with GAT; w/o Macro removes $L_{macro}$; w/o Micro removes $L_{micro}$.

Models       |      Christianity         |         Douban
             | Hits@100  MAP@100  MSLE   | Hits@100  MAP@100  MSLE
w/o AdvDiff  |  0.5893    0.1958  0.971  |  0.3682    0.1170  0.642
w/o Diff     |  0.6004    0.1949  1.222  |  0.3688    0.1173  0.712
w/o Adv      |  0.5915    0.1926  0.861  |  0.3572    0.1193  0.742
w/o HGNN     |  0.5871    0.2013  1.074  |  0.3692    0.1178  0.581
w/o Macro    |  0.5580    0.1874  9.255  |  0.3665    0.1191  4.669
w/o Micro    |  0.5871    0.1937  0.865  |  0.3591    0.1174  0.711
MINDS        |  0.6250    0.2054  0.572  |  0.3736    0.1213  0.549

Parameter Analysis

In this subsection, we investigate how different hyperparameter settings affect the performance of our model on the Android and Douban datasets. We explore the sensitivity of λ, γ, the embedding size, and the number of time intervals, testing each parameter while keeping the others fixed. Fig. 4 illustrates the model's performance on multi-scale prediction under various hyperparameter configurations. During parameter selection, we carefully consider both macro and micro indicators; optimal model performance occurs when the macro index is minimized and the micro index is maximized. Remarkably, we observe that MINDS maintains stable performance when the hyperparameters are varied within a reasonable range, which highlights the robustness of our model.

[Figure 4: Parameter sensitivity on the Douban (a) and Android (b) datasets. For the balance parameter λ ∈ (0, 1) and the number of time intervals ∈ [2, 12], all MAP and MSLE scores are reported; for the hyperparameter γ ∈ (0, 0.1) and embedding size ∈ {8, 16, 32, 64, 128, 256}, all Hits and MSLE scores are reported. The macro indicator (MSLE) is presented with an inverted Y-axis to align with the increasing trend of the micro indicators (MAP and Hits).]
Finally, we determine that the optimal hyperparameter configuration is (λ, γ, embedding size, number of time intervals) = (0.3, 0.05, 64, 8).

Conclusion

In this paper, we propose MINDS, a streamlined yet effective multi-scale diffusion prediction model capable of handling both microscopic and macroscopic prediction. Our approach constructs sequential hypergraphs to capture the intricate influences and dynamics among cascades from a macro perspective, while learning implicit structures and user characteristics in social networks from a micro perspective. A shared LSTM is then employed to extract common features between the macro- and micro-tasks, while adversarial training and orthogonality constraints ensure the purity of these shared features. Experimental results on next-influenced-user and cascade-size prediction demonstrate the effectiveness of our method.

Appendix

Related Work

Macroscopic Diffusion Prediction. Previous studies can be categorized into three main approaches: feature-based, generative process-based, and deep learning-based methods. Feature-based approaches (Kong et al. 2014) extract handcrafted features from the input data, which are then used in machine learning algorithms for regression or classification tasks; however, these methods rely heavily on domain knowledge and lack generalizability. Generative process-based approaches (Zhao et al. 2015) model the arrival of infected users as a point process. While these methods enhance interpretability, they may overlook implicit information within the cascade dynamics. Recently, deep learning-based approaches have shown their effectiveness. For example, DeepCas (Li et al. 2016) utilizes an RNN to encode sampled sequences from social graphs and cascades. DeepHawkes (Cao et al. 2017) incorporates the Hawkes process within an RNN architecture. CoupledGNN (Cao et al. 2019) and CasCN (Chen et al. 2019b) utilize GNNs to capture diffusion patterns across the underlying social network. VaCas (Zhou et al. 2020) combines graph wavelets, hierarchical variational autoencoders, and Bi-GRUs to learn the structures of cascade graphs.

Microscopic Diffusion Prediction. Conventional methods for microscopic diffusion prediction can be categorized into three groups: independent cascade (IC)-model-based, embedding-based, and deep learning-based approaches. IC-model-based approaches (Wang et al. 2014) assume independent diffusion probabilities for user pairs and employ Monte Carlo simulations to predict microscopic diffusion. Embedding-based approaches (Feng et al. 2018a) extend the IC model by representing each user as a parameterized vector; they model diffusion probabilities between users based on their embeddings, considering factors such as global user similarity, but overlook the infection history. Deep learning techniques have shown promise in modeling information diffusion. Approaches like TopoLSTM (Wang et al. 2017) structure hidden states as directed acyclic graphs, while DeepDiffuse (Islam et al. 2018) and HiDAN (Wang and Li 2019) incorporate attention mechanisms to leverage infection timestamp information. NDM (Yang et al. 2021) combines self-attention and CNNs, while Inf-VAE (Sankar et al. 2020) integrates a VAE framework to capture social homophily and temporal influence. SNIDSA (Wang, Chen, and Li 2018) and DyHGCN (Yuan et al. 2020) utilize diffusion paths, social networks, and temporal information for prediction.
Furthermore, methods like MS-HGAT (Sun et al. 2022) and HyperINF (Jin et al. 2022) leverage hypergraphs to learn global user dependencies.

Hypergraph Neural Network. Hypergraphs offer a natural way to represent group relations by connecting entities through hyperedges. Recently, several approaches have emerged that leverage hypergraphs to learn latent node representations and capture high-order structural information. HGNN (Feng et al. 2018b) is the pioneering spatial approach that uncovers latent node representations by exploring high-order structural information within hypergraphs. Hyper-Atten (Bai, Zhang, and Torr 2019) introduced an attention mechanism to hypergraphs, enhancing their learning capabilities. UniGNN (Huang and Yang 2021) and HyperSAGE (Arya et al. 2020) take a direct message-passing approach on hypergraphs to learn representations. AllSet (Chien et al. 2021) presents a powerful framework that unifies existing hypergraph learning methods. In various fields such as social networks (Sun et al. 2023), recommendation (Ding et al. 2023), and natural language processing (Xu et al. 2023a), hypergraphs have demonstrated their efficacy in tackling complex problems.

Datasets

We used four datasets, i.e., Christianity, Android, Douban, and Memetracker, to conduct experiments. Christianity (Sankar et al. 2020) consists of the user friendship network and cascading interactions related to Christian themes on Stack Exchange. Android (Sankar et al. 2020) is also collected from Stack Exchange, a community Q&A website; it includes users' interactions across different channels, which form their friendship relations. Douban (Zhong et al. 2012) is a Chinese social website where users can update their book-reading statuses and follow the statuses of other users. Memetracker (Leskovec, Backstrom, and Kleinberg 2009) collects a million news stories and blog posts from online websites, tracking the most frequent memes to analyze their migration among people. Each meme is considered an informational entity, while individual website URLs are treated as representations of users.

Baselines

We compare thirteen representative baseline models with our model.

Macroscopic prediction models: DeepCas (Li et al. 2016) transforms the cascade graph into node sequences through random walks and learns representations for each cascade using a deep learning framework. DeepHawkes (Cao et al. 2017) integrates an end-to-end deep learning technique into the Hawkes process for cascade prediction. CasCN (Chen et al. 2019b) applies GCN to capture the structures of information diffusion and uses LSTM to learn the inherent dependencies between users' retweeting behaviors in sequential cascade information. CasFlow (Xu et al. 2023b) leverages normalizing flows to learn node-level and cascade-level latent factors, enabling hierarchical pattern learning of information diffusion. TCSE-net (Wu et al. 2022) preserves distinguishable structure patterns and eliminates potential noise by aligning and fusing temporal popularity and cascade information.

[Figure 5: Hyperedge $e_m$ connects different nodes in the sequential hypergraphs $G_D^1$ through $G_D^4$.]
Microscopic prediction models: TopoLSTM (Wang et al. 2017) extends the standard LSTM model to simulate the information diffusion process and combines it with the social network. NDM (Yang et al. 2021) applies a CNN to learn the diffusion representations of users and utilizes self-attention to make diffusion predictions. SNIDSA (Wang, Chen, and Li 2018) explores diffusion paths and the social network to jointly learn heterogeneous information representations. Inf-VAE (Sankar et al. 2020) embeds social homophily through GNNs and designs a co-attentive fusion network to integrate social and temporal variables. DyHGCN (Yuan et al. 2020) jointly learns the structural characteristics of the social graph and the dynamic diffusion graph, while encoding temporal information into a heterogeneous graph to capture users' dynamic preferences. TAN-DRUD (Liu et al. 2022) models information cascades by capturing the dual-role user dependencies of information senders and receivers.

Unified multi-scale prediction models: FOREST (Yang et al. 2019) incorporates macroscopic information into an RNN-based microscopic diffusion model to simultaneously predict microscopic and macroscopic diffusion. DMT-LIC (Chen et al. 2019a) designs a shared-representation layer to capture both the underlying structure of a cascade graph and the node sequence in the diffusion process.

Construction of Sequential Hypergraphs

For the case in which cascade $d_m$ is divided into four parts based on time periods, the nodes connected by hyperedge $e_m$ in the hypergraph of each time period are visualized in Figure 5.

Supplementary Results to Ablation Study

We observe that the ablation study across two datasets in Table 5 may be insufficient. For example, w/o AdvDiff shows the best, worst, and average performance on the three metrics compared to w/o Adv and w/o Diff, respectively. To address this concern, we conducted ablation experiments on the other two datasets; the results are shown in Table 6. The suboptimal results on the Christianity and Douban datasets could be due to their unique characteristics, such as sparse network connections leading to minimal feature overlap.

Table 6: Ablation study on the Android and Memetracker datasets.

Models       |        Android            |       Memetracker
             | Hits@100  MAP@100  MSLE   | Hits@100  MAP@100  MSLE
w/o AdvDiff  |  0.2696    0.0711  0.467  |  0.5609    0.1605  0.895
w/o Diff     |  0.2758    0.0716  0.265  |  0.5747    0.1614  0.853
w/o Adv      |  0.2712    0.0718  0.369  |  0.5771    0.1634  0.844
w/o HGNN     |  0.5871    0.2013  1.074  |  0.3692    0.1178  0.581
w/o Macro    |  0.5580    0.1874  9.255  |  0.3665    0.1191  4.669
w/o Micro    |  0.5871    0.1937  0.865  |  0.3591    0.1174  0.711
MINDS        |  0.2766    0.0727  0.151  |  0.5790    0.1638  0.506

Acknowledgements

This work was supported in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LDT23F01012F01 and Grant LDT23F01015F01, in part by the Fundamental Research Funds for the Provincial Universities of Zhejiang under Grant GK229909299001-008, in part by the National Natural Science Foundation of China under Grant 62372146, Grant 62071327, and Grant 61806061, and in part by the Zhejiang Laboratory Open Research Project under Grant K2022QA0AB01.

References

Arya, D.; Gupta, D. K.; Rudinac, S.; and Worring, M. 2020. HyperSAGE: Generalizing Inductive Representation Learning on Hypergraphs. arXiv:2010.04558.
Bai, S.; Zhang, F.; and Torr, P. H. S. 2019. Hypergraph Convolution and Hypergraph Attention. arXiv:1901.08150.
Cao, Q.; Shen, H.; Cen, K.; Ouyang, W. R.; and Cheng, X. 2017. DeepHawkes: Bridging the Gap between Prediction and Understanding of Information Cascades. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management.
Cao, Q.; Shen, H.; Gao, J.; Wei, B.; and Cheng, X. 2019. Popularity Prediction on Social Platforms with Coupled Graph Neural Networks. In Proceedings of the 13th International Conference on Web Search and Data Mining.
Chen, X.; Zhang, K.; Zhou, F.; Trajcevski, G.; Zhong, T.; and Zhang, F. 2019a. Information Cascades Modeling via Deep Multi-Task Learning. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval.
Chen, X.; Zhou, F.; Zhang, K.; Trajcevski, G.; Zhong, T.; and Zhang, F. 2019b. Information Diffusion Prediction via Recurrent Cascades Convolution. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), 770–781.
Chien, E.; Pan, C.; Peng, J.; and Milenkovic, O. 2021. You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks. arXiv:2106.13264.
Ding, C.; Zhao, Z.; Li, C.; Yu, Y.; and Zeng, Q. 2023. Session-based recommendation with hypergraph convolutional networks and sequential information embeddings. Expert Systems with Applications, 223: 119875.
Feng, S.; Cong, G.; Khan, A.; Li, X.; Liu, Y.; and Chee, Y. M. 2018a. Inf2vec: Latent Representation Model for Social Influence Embedding. In 2018 IEEE 34th International Conference on Data Engineering (ICDE), 941–952.
Feng, Y.; You, H.; Zhang, Z.; Ji, R.; and Gao, Y. 2018b. Hypergraph Neural Networks. In AAAI Conference on Artificial Intelligence.
Huang, J.; and Yang, J. 2021. UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks. arXiv:2105.00956.
Islam, M. R.; Muthiah, S.; Adhikari, B.; Prakash, B. A.; and Ramakrishnan, N. 2018. DeepDiffuse: Predicting the 'Who' and 'When' in Cascades. In 2018 IEEE International Conference on Data Mining (ICDM), 1055–1060.
Jin, H.; Wu, Y.; Huang, H.; Song, Y.; Wei, H.; and Shi, X. 2022. Modeling Information Diffusion With Sequential Interactive Hypergraphs. IEEE Transactions on Sustainable Computing, 7: 644–655.
Ko, H.; Lee, S.; Park, Y.; and Choi, A. 2022. A survey of recommendation systems: recommendation models, techniques, and application fields. Electronics, 11(1): 141.
Kong, S.; Mei, Q.; Feng, L.; Ye, F.; and Zhao, Z. 2014. Predicting bursts and popularity of hashtags in real-time. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval.
Leskovec, J.; Backstrom, L.; and Kleinberg, J. M. 2009. Meme-tracking and the dynamics of the news cycle. In Knowledge Discovery and Data Mining.
Li, C.; Ma, J.; Guo, X.; and Mei, Q. 2016. DeepCas: An End-to-end Predictor of Information Cascades. In Proceedings of the 26th International Conference on World Wide Web.
Liu, B.; Yang, D.; Wang, Y.; and Shi, Y. 2022. Improving Information Cascade Modeling by Social Topology and Dual Role User Dependency. In International Conference on Database Systems for Advanced Applications.
Liu, P.; Qiu, X.; and Huang, X. 2017. Adversarial Multi-task Learning for Text Classification. In Annual Meeting of the Association for Computational Linguistics.
Miller, R.; and Lammas, N. 2010. Social media and its implications for viral marketing. Asia Pacific Public Relations Journal, 11(1): 1–9.
Sankar, A.; Zhang, X.; Krishnan, A.; and Han, J. 2020. Inf-VAE: A Variational Autoencoder Framework to Integrate Homophily and Influence in Diffusion Prediction. In Proceedings of the 13th International Conference on Web Search and Data Mining.
Sun, L.; Rao, Y.; Zhang, X.; Lan, Y.; and Yu, S. 2022. MS-HGAT: Memory-Enhanced Sequential Hypergraph Attention Network for Information Diffusion Prediction. In AAAI Conference on Artificial Intelligence.
Sun, X.; Cheng, H.; Liu, B.; Li, J.; Chen, H.; Xu, G.; and Yin, H. 2023. Self-supervised hypergraph representation learning for sociological analysis. IEEE Transactions on Knowledge and Data Engineering.
Wang, J.; Zheng, V. W.; Liu, Z.; and Chang, K. C.-C. 2017. Topological Recurrent Neural Network for Diffusion Prediction. In 2017 IEEE International Conference on Data Mining (ICDM), 475–484.
Wang, S.; Hu, X.; Yu, P. S.; and Li, Z. 2014. MMRate: inferring multi-aspect diffusion networks with multi-pattern cascades. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Wang, Z.; Chen, C.; and Li, W. 2018. A Sequential Neural Information Diffusion Model with Structure Attention. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management.
Wang, Z.; and Li, W. 2019. Hierarchical Diffusion Attention Network. In International Joint Conference on Artificial Intelligence.
Wu, D.; Tan, Z.; Xia, Z.; and Ning, J. 2022. TCSE: Trend and cascade based spatiotemporal evolution network to predict online content popularity. Multimedia Tools and Applications, 82: 1459–1475.
Xu, H.; Zheng, C.; Zhao, Z.; and Sun, X. 2023a. Multi-Hypergraph Neural Networks for Emotion Recognition in Multi-Party Conversations. Applied Sciences, 13(3): 1660.
Xu, X.; Zhou, F.; Zhang, K.; Liu, S.; and Trajcevski, G. 2023b. CasFlow: Exploring Hierarchical Structures and Propagation Uncertainty for Cascade Prediction. IEEE Transactions on Knowledge and Data Engineering, 35: 3484–3499.
Yang, C.; Sun, M.; Liu, H.; Han, S.; Liu, Z.; and Luan, H. 2021. Neural Diffusion Model for Microscopic Cascade Study. IEEE Transactions on Knowledge and Data Engineering, 33: 1128–1139.
Yang, C.; Tang, J.; Sun, M.; Cui, G.; and Liu, Z. 2019. Full-Scale Information Diffusion Prediction With Reinforced Recurrent Networks. IEEE Transactions on Neural Networks and Learning Systems, 34: 2271–2283.
Yuan, C.; Li, J.; Zhou, W.; Lu, Y.; Zhang, X.; and Hu, S. 2020. DyHGCN: A Dynamic Heterogeneous Graph Convolutional Network to Learn Users' Dynamic Preferences for Information Diffusion Prediction. arXiv:2006.05169.
Zhang, Q.; Guo, Z.; Zhu, Y.; Vijayakumar, P.; Castiglione, A.; and Gupta, B. B. 2023. A deep learning-based fast fake news detection model for cyber-physical social services. Pattern Recognition Letters, 168: 31–38.
Zhao, Q.; Erdogdu, M. A.; He, H. Y.; Rajaraman, A.; and Leskovec, J. 2015. SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Zhong, E.; Fan, W.; Wang, J.; Xiao, L.; and Li, Y. 2012. ComSoc: adaptive transfer of user behaviors over composite social network. In Knowledge Discovery and Data Mining.
Zhou, F.; Xu, X.; Zhang, K.; Trajcevski, G.; and Zhong, T. 2020. Variational Information Diffusion for Probabilistic Cascades Prediction. In IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, 1618–1627.
Multi-Domain Recommendation to Attract Users via Domain Preference Modeling

Hyunjun Ju1, SeongKu Kang1,2, Dongha Lee3, Junyoung Hwang1, Sanghwan Jang1, Hwanjo Yu1*
1Pohang University of Science and Technology (POSTECH), Republic of Korea
2University of Illinois at Urbana-Champaign (UIUC), United States
3Yonsei University, Republic of Korea
{hyunjunju, jyhwang, s.jang, hwanjoyu}@postech.ac.kr, [email protected], [email protected]

*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Recently, web platforms have been operating various service domains simultaneously. Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not interacted yet, by using knowledge from the user's "seen" domains. In this paper, we point out two challenges of the MDRAU task. First, there are numerous possible combinations of mappings from seen to unseen domains, because users have usually interacted with a different subset of service domains. Second, a user might have different preferences for each of the target unseen domains, which requires recommendations to reflect users' preferences on domains as well as items. To tackle these challenges, we propose the DRIP framework, which models users' preferences at two levels (i.e., domain and item) and learns various seen-unseen domain mappings in a unified way with masked domain modeling. Our extensive experiments demonstrate the effectiveness of DRIP on the MDRAU task and its ability to capture users' domain-level preferences.

Introduction

Nowadays, web platforms are operating various service domains simultaneously (e.g., music streaming, game store, and eBook subscription). They allow users to experience diverse domains within a single platform and promote the mutual growth of all service domains through the Recommender System (RS). For such multi-domain platforms, recommending items from unseen domains with which each user has not interacted yet plays an essential role in the platform's growth, user satisfaction, and business success. That is, users typically utilize a few domains rather than all domains, and accurate recommendations that align with user preference can attract users into unexplored domains. To this end, Cross-Domain Recommendation (CDR), which recommends items from unseen (target) domains based on user interaction history in seen (source) domains, has gained significant research attention.1

1In this paper, the terms "seen" and "unseen" are defined from the perspective of each user; as each user interacts with a different subset of service domains, the seen and unseen domain sets vary for each user. A user can be considered a cold-start user in the user's unseen service domain (Man et al. 2017; Kang et al. 2019).

[Figure 1: A conceptual illustration of the MDRAU task. A platform operates five different service domains, and each user partially interacts with a subset of the entire service domains. MDRAU aims to provide recommendations from each user's unseen domains to attract users.]

Most CDR studies (Man et al. 2017; Kang et al. 2019; Zhu et al. 2022, 2021b; Fu et al. 2019) have focused on transferring user preference information from a source domain to a target domain.
Given a recommender system employed for each domain, they learn a mapping function that acts as a bridge between the representation spaces of the two domains. To provide unseen-domain recommendations, they transfer the user embedding from the source domain to the target domain using the mapping function and generate recommendations based on the transferred user embedding. More recently, (Cao et al. 2022b, 2023, 2022a) have achieved improved recommendation accuracy by modeling domain-specific and domain-shared information separately and selectively transferring the domain-shared information. Despite their effectiveness, the existing studies have targeted the case of a single seen-unseen domain pair (i.e., one-to-one), and the case of multiple seen and unseen domains (i.e., many-to-many) has not been studied well. In particular, they do not consider making unified recommendations that include items from multiple domains.

In practical scenarios, it is becoming increasingly common for platforms to offer services in more than two domains. This creates a need to promote user engagement across multiple unseen domains by attracting users with accurate personalized recommendations. We refer to this problem as the Multi-Domain Recommendation to Attract Users (MDRAU) task (Fig. 1). Formally, MDRAU aims to provide a recommendation list that consists of items from each user's unseen domain(s) that the user has not tried before. MDRAU brings several practical values to multi-domain platforms. It encourages users to explore new content beyond their previously interacted domains, fostering diverse and serendipitous discoveries. This diversified user experience can enhance user satisfaction and engagement, which in turn helps provide more accurate recommendations in each domain.

Addressing the MDRAU task presents two major challenges. First, since each user has interacted with different service domains, there are numerous possible combinations of seen and unseen domains. With $K$ domains, the number of combinations can reach up to $2^K - 2$, excluding the degenerate cases in which a user has used all of the services or none of them. This large number of combinations makes it difficult to apply the previous CDR methods that learn a one-to-one mapping function for each domain pair. Second, a user naturally has a different preference for each unseen domain, and these varying domain preferences need to be properly reflected in the recommendation process. That is, the recommendation needs to consider user preference at both the domain level, i.e., the inclination of a user to explore each unseen domain, and the item level, i.e., the inclination of a user to interact with a new item within a domain. The previous CDR methods have focused on improving item-level preference for a specific unseen domain without directly considering domain-level preference. As a result, they show limited performance when applied to the MDRAU task.

To effectively solve our new task MDRAU, we propose DRIP, a new framework that learns various seen-unseen domain mappings in a unified way via masked domain modeling and models user preference at the domain and item levels. We formulate the training process of DRIP as a prediction task of missing information based on its contexts (Devlin et al. 2019; Bao et al. 2022).
Then, we model the two-level preferences using a multi-domain encoder that incorporates user preference across multiple domains. The key idea is to randomly mask the user preference of some seen domains in the model input and train the model to predict the user preference in the masked domains. During training, we regard the masked domains as the user's unseen domains, allowing the model to simulate and learn from numerous scenarios involving different combinations of seen and unseen domains. This enables the model to achieve the generalization capability of inferring user preferences in unseen domains from those in seen domains. Furthermore, we introduce an adaptive masking scheme to make the model focus more on learning domains that a user is more likely to prefer. We validate the superiority of DRIP by extensive experiments on real-world datasets and provide a thorough comparison with various state-of-the-art methods.

Related Work

The existing CDR studies can be divided into two groups according to the type of target domain for recommendations.

CDR for Seen Domain Recommendation. This line of work aims to improve the recommendation quality of seen domains with which the user has already interacted. Many studies alleviate the data sparsity problem in sparse target domains by utilizing information from the source domain. To this end, they transfer knowledge among domains via bridging information, such as overlapping users or items. For example, CoNet (Hu, Zhang, and Yang 2018) introduces cross-connection units to transfer and integrate knowledge between source and target domains. DTCDR (Zhu et al. 2019) proposes a dual-target framework to improve the recommendation accuracy in both involved domains simultaneously. GA-DTCDR (Zhu et al. 2020) extends DTCDR by adopting graph information. Recently, several studies have focused on multi-domain cases having more than two domains. GA-MTCDR (Zhu et al. 2023) extends GA-DTCDR with element-wise attention to integrate the embeddings of overlapping users from multiple domains. CAT-ART (Li et al. 2023) proposes a contrastive autoencoder to encode a global user embedding and a mechanism to transfer user embeddings from each source domain to the target domain. UniCDR (Cao et al. 2023) introduces domain-specific and domain-shared embeddings along with aggregation schemes to make a universal model for existing CDR scenarios.

CDR for Unseen Domain Recommendation. This line of work aims to provide recommendations in unseen domains with which the user has not yet interacted, focusing on how to obtain user embeddings in the target domain space. For example, EMCDR (Man et al. 2017) proposes an embedding-and-mapping framework, which learns a mapping function that transfers user embeddings from the source to the unseen target domain. SSCDR (Kang et al. 2019) proposes a semi-supervised embedding-and-mapping framework to train a mapping function even when only a few labeled data are available. PTUPCDR (Zhu et al. 2022) uses a meta-network that generates a personalized mapping function. UniCDR (Cao et al. 2023) can also be applied to recommend unseen domains using domain-shared embeddings. These methods mainly focus on a single unseen target domain rather than multiple unseen target domains.

Problem Formulation

Notations. In this work, we focus on a scenario where a provider operates services for multiple domains (e.g., music streaming, game store, and eBook subscription), each of which employs a distinct recommender system. Each service domain has its own user and item set.
Items of each domain are mutually exclusive, while users may use one or multiple service domain(s). Formally, given $K$ domains $D = \{d_1, \cdots, d_K\}$, $U_k$ and $V_k$ denote the set of users and items for the $k$-th domain, respectively. The user-item interaction history for $d_k$ is represented by a matrix $R^{(k)} \in \{0,1\}^{|U_k| \times |V_k|}$, where $R_{u,v} = 1$ if user $u$ has interacted with item $v$, and $R_{u,v} = 0$ otherwise. Without loss of generality, we define an interaction matrix of all domains $R \in \{0,1\}^{|U| \times |V|}$, where $U = \bigcup_{k=1}^{K} U_k$ and $V = \bigcup_{k=1}^{K} V_k$. We additionally define the user-domain relations as $G \in \{0,1\}^{|U| \times |D|}$, where $G_{u,k} = 1$ if user $u$ has interacted with items of domain $d_k$, and $G_{u,k} = 0$ otherwise. Overlapping users indicate users who have interacted with at least two domains.

MDRAU Task

Definition 1 (Multi-Domain Recommendation to Attract Users). Given user-item interaction history from multiple service domains, MDRAU refers to the task of providing a ranking list (i.e., recommendation) that consists of items from each user's unseen domain(s) that the user has not tried before.

In some cases, providers may need to promote a specific target domain within the platform. We refer to the scenario with a single target domain as MDRAU-ST, and the scenario with multiple target domains as MDRAU-MT. MDRAU provides preferable items from each user's unseen domains, encouraging the exploration of new content beyond their previously interacted domains. By doing so, MDRAU helps to diversify user experience and facilitate serendipitous discoveries, enhancing user engagement. This enhanced engagement, in turn, helps to provide more accurate recommendations in each domain, ultimately contributing to the growth and revenue of the platform.

Proposed Framework

Overview

We present a unified framework for the MDRAU task, named DRIP (Domain pReference-aware unseen domain Item Prediction). DRIP optimizes the parameters of its model by maximizing the likelihood given the training data $R$. Let $R_u \in \{0,1\}^{|V|}$ denote a multi-hot vector representing a user's interactions with the items over all domains. We assume the observed data are drawn from the multinomial distribution, and the likelihood is described by

$$p(R_u) = \prod_{v \in V} p(v|u)^{R_{u,v}}, \qquad R_u \sim \mathrm{Mult}\big(N_u,\, p(v|u)\big), \tag{1}$$

where $N_u = \sum_{v} R_{u,v}$ and $p(v|u)$ is the probability that user $u$ prefers item $v$ over the entire item set. We decompose the likelihood based on the domain-level and item-level preferences as follows:

$$p(R_u) = \prod_{v \in V} p(v|u)^{R_{u,v}} = \prod_{d_k \in D} \prod_{v \in V_k} p(v, d_k|u)^{R_{u,v}} = \prod_{d_k \in D} \prod_{v \in V_k} \big( p(v|u, d_k) \cdot p(d_k|u) \big)^{R_{u,v}}, \tag{2}$$

where $p(v|u, d_k)$ denotes user $u$'s preference for item $v$ in domain $d_k$ and $p(d_k|u)$ denotes the user's preference for domain $d_k$. These preferences are modeled by a unified neural model with a multi-domain encoder based on self-attention. Then, to maximize the likelihood, we train the model via masked domain modeling, which predicts the item preference of the masked domains. The recommendations are produced by considering both domain- and item-level preferences for the unseen domains of each user. Fig. 2 illustrates the overall DRIP architecture.

[Figure 2: The overview of the proposed DRIP framework.]
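To make the decomposition in Eq. (2) concrete, the following is a minimal sketch of how domain-level and item-level preferences could be combined at serving time to rank items from a user's unseen domains. The function name and the dictionary-based inputs are illustrative stand-ins, not part of the paper.

```python
def rank_unseen_items(domain_pref, item_pref, unseen_domains, top_n=10):
    """Rank items from a user's unseen domains by the decomposition of Eq. (2):
    score(v) = p(d_k | u) * p(v | u, d_k). `domain_pref[k]` holds p(d_k | u)
    and `item_pref[k]` maps item id -> p(v | u, d_k) for domain d_k."""
    scored = []
    for k in unseen_domains:
        for v, p_item in item_pref[k].items():
            scored.append((domain_pref[k] * p_item, v, k))
    scored.sort(reverse=True)  # highest joint score first
    return [(item, domain, score) for score, item, domain in scored[:top_n]]

# Toy usage: the user leans toward domain 1, so its items rank higher
# even when raw item-level scores are comparable.
domain_pref = {0: 0.1, 1: 0.6, 2: 0.3}
item_pref = {1: {"book_a": 0.7, "book_b": 0.3}, 2: {"game_x": 0.9, "game_y": 0.1}}
print(rank_unseen_items(domain_pref, item_pref, unseen_domains=[1, 2], top_n=3))
```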
Domain-Specific Encoder
We assume that a service platform operates multiple service domains and has deployed its own RS for each domain. An RS model contains encoders that encode the user and item information into a representation space, where the user-item similarity is measured for recommendations. Specifically, for each domain $d_k$, let $f_{\theta_k}: \mathbb{R}^{|\mathcal{U}_k|} \to \mathbb{R}^d$, $f_{\xi_k}: \mathbb{R}^{|\mathcal{V}_k|} \to \mathbb{R}^d$, and $\mathrm{sim}_k(\cdot, \cdot)$ denote a user encoder, an item encoder, and a similarity function, respectively. Diverse architectures can be adopted for the encoders (e.g., id-based (Koren, Bell, and Volinsky 2009; Rendle et al. 2009) and graph-based (Wang et al. 2019)), and $\mathrm{sim}_k(\cdot, \cdot)$ can be either a simple metric or a learnable function. In this work, we use a simple id-based encoder with the inner-product similarity, as done in (Rendle et al. 2009; Man et al. 2017; Zhu et al. 2022).

Multi-Domain Encoder
The main component of our unified neural model is a multi-domain encoder that aims to enrich the user embedding from each domain-specific encoder with those from the encoders for the other domains. It adopts the self-attention mechanism to aggregate information from other domains based on the similarity of user preferences across multiple domains.

Constructing Masked Input. Let $x_{u,k} = f_{\theta_k}(u)$ denote the user embedding of user $u$ in domain $d_k$. A user $u$ is represented as the set of the corresponding user embeddings for all domains, $\{x_{u,k}\}_{k=1}^{K}$. Note that some domains may have no interaction history with the user (i.e., $G_{u,k} = 0$), as users typically utilize a few domains rather than all of them. To handle this case, we replace the embedding for the user's unseen domains with $e_{[M]}$, the learnable embedding of a special mask token [M], as follows:

$$\bar{X}_u = \{\bar{x}_{u,k}\}_{k=1}^{K}, \quad \bar{x}_{u,k} = (1 - G_{u,k})\,e_{[M]} + G_{u,k}\,x_{u,k}. \tag{3}$$

Since the embeddings are obtained from independently trained domain-specific RS models, they have different distributions. We align the distributions using projectors $g_{\phi_k}(x_{u,k})$, where $g_{\phi_k}: \mathbb{R}^d \to \mathbb{R}^m$. Also, we insert $e_{[S]} \in \mathbb{R}^m$ at the beginning of the input for the encoder, which is the learnable embedding of a special token [S]; this will be used to estimate each user's domain-level preference. The final input representation of user $u$ is constructed as

$$H^0_u = \big[ e_{[S]}, g_{\phi_1}(\bar{x}_{u,1}), \cdots, g_{\phi_K}(\bar{x}_{u,K}) \big]^\top \in \mathbb{R}^{(K+1) \times m}. \tag{4}$$

We do not use position embeddings because spatial position information is unnecessary for our target task.

Contextualizing User Embeddings over Multi-Domains. The input $H^0_u$ is forwarded into the multi-domain encoder to contextualize each domain-specific user embedding over the user's multiple domains based on the self-attention mechanism. The multi-domain encoder is basically a stack of $L$ transformer layers (Vaswani et al. 2017; Devlin et al. 2019); the details are described in the Appendix. The $l$-th transformer layer can be simply described by

$$H^{l+1}_u = \mathrm{Transformer}(H^l_u), \quad \forall l \in \{0, 1, \cdots, L-1\}. \tag{5}$$

Figure 2: The overview of the proposed DRIP framework.

We denote the set of learnable parameters in the multi-domain encoder consisting of transformer layers as $\Omega$. In the end, the final output of the $L$-th layer is obtained by

$$H^L_u = \big[ h^L_{u,[S]}, h^L_{u,1}, \cdots, h^L_{u,K} \big]^\top \in \mathbb{R}^{(K+1) \times m}. \tag{6}$$

Through the transformer layers, the user embedding from each domain gets contextualized by attending to the embeddings from the other domains based on the embedding similarity across the domains. As a result, $h^L_{u,k}$ encodes the user's preference for domain $d_k$, enriched by preference information from the remaining domains.
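Eqs. (3)-(6) amount to a small amount of tensor plumbing. The sketch below shows one way to realize them with a standard transformer encoder; the hyperparameters (L, number of heads) and the use of `nn.TransformerEncoder` are our illustrative choices, not necessarily the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultiDomainEncoder(nn.Module):
    """Sketch of Eqs. (3)-(6): mask unseen domains, prepend [S], run L transformer layers."""
    def __init__(self, K, d, m, L=2, heads=4):
        super().__init__()
        self.e_mask = nn.Parameter(torch.randn(d))   # e_[M], stands in for unseen domains
        self.e_s = nn.Parameter(torch.randn(m))      # e_[S], domain-preference slot
        self.proj = nn.ModuleList(nn.Linear(d, m) for _ in range(K))   # g_{phi_k}
        layer = nn.TransformerEncoderLayer(m, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, L)

    def forward(self, x, G):        # x: (B, K, d) domain embeddings, G: (B, K) seen flags
        G = G.float()
        x_bar = G.unsqueeze(-1) * x + (1 - G).unsqueeze(-1) * self.e_mask      # Eq. (3)
        tokens = [self.e_s.expand(x.size(0), 1, -1)]                           # [S] slot
        tokens += [self.proj[k](x_bar[:, k]).unsqueeze(1) for k in range(len(self.proj))]
        H0 = torch.cat(tokens, dim=1)                                          # Eq. (4)
        return self.encoder(H0)                                                # Eqs. (5)-(6)

# Toy usage: 8 users, K=4 domains, d=32, m=64 -> output (8, 5, 64).
enc = MultiDomainEncoder(K=4, d=32, m=64)
H_L = enc(torch.randn(8, 4, 32), torch.randint(0, 2, (8, 4)))
```

The output slot 0 corresponds to $h^L_{u,[S]}$ and slots 1..K to $h^L_{u,k}$, matching Eq. (6).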
Preference Modeling
Domain-Level Preference. The domain-level preference can be inferred using the contextualized representation of the special token [S] for user $u$, denoted by $h^L_{u,[S]}$, which is generated by aggregating the user's preference information over multiple domains. We introduce a domain-preference predictor $q_{\psi_{[S]}}: \mathbb{R}^m \to \mathbb{R}^K$ to predict each user's inclination toward each domain: $z_{u,[S]} = q_{\psi_{[S]}}(h^L_{u,[S]})$. The domain-level preference is defined by

$$p(d_k|u; \Theta) = \frac{\exp(z_{u,[S]_k})}{\sum_{d_i \in \mathcal{D}} \exp(z_{u,[S]_i})}, \tag{7}$$

where $z_{u,[S]_k}$ indicates the $k$-th logit value in $z_{u,[S]}$.

Item-Level Preference. The item-level preference in each domain is obtained from the similarity of the item embeddings to the contextualized user embedding for that domain. Using a domain-specific projector $q_{\psi_k}: \mathbb{R}^m \to \mathbb{R}^d$, we project $h^L_{u,k}$ to the representation space of domain $d_k$: $z_{u,k} = q_{\psi_k}(h^L_{u,k})$. We compute the in-domain item-level preference based on user $u$'s similarity distribution over the item set $\mathcal{V}_k$:

$$p(v|u, d_k; \Theta) = \frac{\exp\big(\mathrm{sim}_k(z_{u,k}, x_{v,k})\big)}{\sum_{\hat{v} \in \mathcal{V}_k} \exp\big(\mathrm{sim}_k(z_{u,k}, x_{\hat{v},k})\big)}, \tag{8}$$

where $x_{v,k} = f_{\xi_k}(v)$ is the embedding of item $v$ in domain $d_k$.

Model Learning
Masked Domain Modeling
We formulate the training process of DRIP as a task of predicting missing domain information based on its context (Devlin et al. 2019; Bao et al. 2022). Our key idea is to randomly mask some of the domain-specific user embeddings (among the ones for a user's seen domains) in the input, and to train the model to predict the user preference in the masked domains. That is, in the training process, we regard randomly masked domains as the user's unseen domains, which allows our model to simulate and learn various scenarios of mapping user preference from seen domains to unseen domains. As a result, the model can capture the relations of user preferences across domains, eventually achieving the generalization capability of inferring user preferences in unseen domains from those in seen domains.

Let $m_u \in \{0,1\}^K$ denote a random masking vector for user $u$, where $m_{u,k} = 1$ indicates that the user embedding for domain $d_k$ is masked. We apply the masking operation to the user embeddings for the seen domains (i.e., $G_{u,k} = 1$) with probability $p_{u,k}$. Specifically, $m_{u,k}$ is drawn from a Bernoulli distribution: $m_{u,k} \sim \mathrm{Bern}(p_{u,k})$. At the beginning of training, we set an equal masking probability for all seen domains (i.e., random masking). Then, during the training process, we gradually adjust the probability based on the domain-level preferences, i.e., $p_{u,k} \propto p(d_k|u; \Theta)$, to encourage the model to focus more on learning domains that the user is more likely to prefer (i.e., adaptive masking); a small sketch follows below. The masked embedding set is represented as $\tilde{X}_u = \{\tilde{x}_{u,k}\}_{k=1}^{K}$, where its elements are obtained by

$$\tilde{x}_{u,k} = (1 - G_{u,k})\,e_{[M]} + G_{u,k}\big(m_{u,k}\,e_{[M]} + (1 - m_{u,k})\,x_{u,k}\big). \tag{9}$$

During training, we use $\tilde{X}_u$ (Eq. (9)) instead of $\bar{X}_u$ (Eq. (3)) as the input of the multi-domain encoder and obtain the contextualized representations. Note that we discard the case in which all seen domains are masked.
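As referenced above, the masking behind Eq. (9) can be sketched as follows. The paper states only that the masking probability starts uniform and becomes proportional to $p(d_k|u; \Theta)$; the base rate `base_p` and the linear interpolation via `anneal` are our assumptions.

```python
import torch

def sample_adaptive_mask(G, p_domain, base_p=0.3, anneal=1.0):
    """Draw m_{u,k} ~ Bern(p_{u,k}) over seen domains only.

    G:        (B, K) seen-domain indicators G_{u,k}
    p_domain: (B, K) current domain-level preferences p(d_k|u; Theta)
    """
    n_seen = G.sum(-1, keepdim=True).clamp_min(1.0)
    uniform = base_p * G                                      # random masking phase
    prop = (p_domain * G) / (p_domain * G).sum(-1, keepdim=True).clamp_min(1e-8)
    adaptive = (base_p * n_seen) * prop                       # same expected #masked domains
    p = ((1 - anneal) * uniform + anneal * adaptive).clamp(0.0, 1.0)
    m = torch.bernoulli(p) * G                                # only seen domains get masked
    all_masked = (m.sum(-1) == G.sum(-1)) & (G.sum(-1) > 0)
    m[all_masked] = 0.0  # the paper discards such cases; skipping masking is a simplification
    return m
```

The returned mask `m` then selects which entries of the input are replaced with $e_{[M]}$ as in Eq. (9).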
Learning Objective
We train the model to predict the preference information of the masked domains (i.e., $m_{u,k} = 1$). Instead of predicting the masked embedding itself, we directly maximize the likelihood of the user's interaction history; this aligns the optimization objective directly with the recommendation accuracy. Based on the negative log-likelihood of Eq. (2) and the masking process, the final loss is defined as follows:

$$\mathcal{L}_{\mathrm{DRIP}} = -\sum_{u \in \mathcal{U}} \sum_{d_k \in \mathcal{D}} m_{u,k} \sum_{v \in \mathcal{V}_k} R_{u,v} \log\big[ p(v|u, d_k) \cdot p(d_k|u) \big], \tag{10}$$

where each probability is parameterized by $\Theta$ (Eqs. (7), (8)), and the learning parameters $\Theta$ include the multi-domain encoder $\Omega$, the two types of projectors $\{\phi_k\}_{k=1}^{K}$ and $\{\psi_k\}_{k=1}^{K}$, and the domain-preference predictor $\psi_{[S]}$.

MDRAU Recommendation
At the test phase, for each user, we construct the input representation with the user's unseen domains masked (Eq. (4)) and calculate the domain-level and item-level preferences, $p(d_k|u; \Theta)$ and $p(v|u, d_k; \Theta)$. In the scenario where there are multiple target service domains (MDRAU-MT), the recommendation is generated by sorting the items by their score $p(v|u, d_k; \Theta) \cdot p(d_k|u; \Theta)$. Otherwise, in the scenario where we have a specific target service domain to promote (MDRAU-ST), the recommendation is generated by considering only the in-domain item-level preference $p(v|u, d_k; \Theta)$.
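A compact sketch of the loss in Eq. (10) and of the MDRAU-MT scoring rule follows; the batched layout and all names are our assumptions, not the authors' implementation.

```python
import torch

def drip_loss(R, m, log_p_domain, log_p_item):
    """Eq. (10): negative log-likelihood restricted to masked domains.

    R:            list of K tensors (B, |V_k|), interactions per domain
    m:            tensor (B, K), masking indicators m_{u,k}
    log_p_domain: tensor (B, K), log p(d_k|u; Theta)            (Eq. (7))
    log_p_item:   list of K tensors (B, |V_k|), log p(v|u,d_k)  (Eq. (8))
    """
    loss = 0.0
    for k, (R_k, lp_k) in enumerate(zip(R, log_p_item)):
        per_user = (R_k * (lp_k + log_p_domain[:, k:k + 1])).sum(-1)  # inner sums of Eq. (10)
        loss = loss - (m[:, k] * per_user).sum()
    return loss

def mdrau_mt_scores(p_domain, p_item):
    """Test-time joint score p(v|u,d_k) * p(d_k|u) used to rank items over unseen domains;
    for MDRAU-ST, only p_item[k] of the single target domain would be used."""
    return [p_item[k] * p_domain[:, k:k + 1] for k in range(len(p_item))]
```

Because Eq. (10) already multiplies the two preference levels, no separate post-processing is needed at test time; the same joint score is sorted directly.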
Experiments
Experimental Settings
Datasets and Domain Setup. We use the widely-used Amazon dataset (He and McAuley 2016; Kang et al. 2019, 2023), which consists of multiple item domains. To simulate the platform environment, we select two subsets of these domains that have been previously used in related studies (Zhu et al. 2021a). The first scenario (P1) includes the Book, Movie, CD, and Game domains, and the second scenario (P2) includes the Home, Health, Grocery, and Tools domains (Table 1). More detailed experimental settings and results are in the Appendix.

Table 1: Statistics of the two platform scenarios of MDRAU.

     Domains   #Users   #Items   #Interactions   Density
P1   Book      35,987   39,049   1,726,231       0.12%
     Movie     17,056   15,620     609,552       0.23%
     CD         5,941    9,069     211,617       0.39%
     Game       1,049    1,064      20,568       1.84%
P2   Home      10,317    7,605     127,091       0.16%
     Health     8,690    5,750     123,905       0.25%
     Grocery    4,869    2,884      71,269       0.51%
     Tools      1,804    1,422      18,040       0.70%

Evaluation Metrics. We focus on the top-K recommendation task for implicit feedback. We evaluate the recommendation accuracy of each method using two ranking metrics (Kang et al. 2020, 2022): Recall (R@K) and Normalized Discounted Cumulative Gain (N@K); a sketch of both metrics is given after the MDRAU-ST comparison below.

Compared Methods. We compare DRIP with various methods from related research fields. We have modified the original methods to perform MDRAU, and the modified versions are annotated with the suffix '+'. The first group of baselines learns a recommendation model, BPRMF (Rendle et al. 2009), while treating a union of multiple domains as a single domain (Singh and Gordon 2008). The second group includes multi-task learning methods (MMOE (Ma et al. 2018) and PLE (Tang et al. 2020)), which define a single task as item-level preference learning for each domain. They are widely used for RS in multi-domain cases (Cao et al. 2023). For each task (i.e., domain), they employ the binary cross-entropy loss to predict user-item interactions from implicit feedback (He et al. 2017). The third group includes CDR methods (EMCDR+ (Man et al. 2017) and PTUPCDR+ (Zhu et al. 2022)) for unseen domain recommendation, which learn a mapping function for each source-target domain pair. Due to the large number of seen-unseen domain combinations, it is infeasible to directly apply them to MDRAU; for this reason, we tailor their learning task to a many-to-one mapping. The last group includes state-of-the-art methods (CAT-ART+ (Li et al. 2023) and UniCDR (Cao et al. 2023)) that partially handle multi-domain cases. They are designed to exploit information from multiple domains to improve recommendation accuracy in each domain. For MDRAU-MT, their recommendations from each unseen domain are integrated into a unified recommendation list using post-processing (e.g., normalization).

Performance Comparison for MDRAU-ST
We first compare the recommendation performance of various methods for MDRAU-ST on our two simulated platforms, P1 and P2. In this task, we set each of the domains as the target domain, and for evaluation, we consider only the users who have not tried the domain yet as test users. The results for each target domain are reported in Table 2.

Table 2: Recommendation performance (N@20) comparison for MDRAU-ST on platform scenario 1 (P1) and scenario 2 (P2). * denotes that the improvement over the best baseline is statistically significant with p < 0.05, using the paired t-test.

Methods (P1)   Book      Movie     CD        Game
BPRMF          0.0182    0.0224    0.0371    0.0548
MMOE           0.0164    0.0310    0.0341    0.0613
PLE            0.0108    0.0296    0.0337    0.0592
EMCDR+         0.0348    0.0439    0.0610    0.0755
PTUPCDR+       0.0338    0.0427    0.0589    0.0836
CAT-ART+       0.0339    0.0445    0.0578    0.0745
UniCDR         0.0380    0.0502    0.0607    0.0823
DRIP           0.0423*   0.0517    0.0699*   0.0873*
Improv.        11.56%    3.13%     14.56%    4.44%

Methods (P2)   Home      Grocery   Tools     Health
BPRMF          0.0328    0.0655    0.0647    0.0415
MMOE           0.0279    0.0928    0.0678    0.0520
PLE            0.0279    0.0912    0.0697    0.0525
EMCDR+         0.0423    0.0968    0.0827    0.0639
PTUPCDR+       0.0433    0.0969    0.0836    0.0651
CAT-ART+       0.0396    0.0975    0.0758    0.0573
UniCDR         0.0436    0.1068    0.0863    0.0647
DRIP           0.0472*   0.1086*   0.0919*   0.0715*
Improv.        8.33%     3.23%     14.97%    9.82%

We observe that DRIP consistently achieves higher recommendation performance than all the other methods in each of the target domains. Specifically, the CDR methods designed for a single pair of source and target domains based on one-to-one mapping (i.e., EMCDR+ and PTUPCDR+) show lower performance than the ones that can effectively handle multiple source domains based on many-to-one mapping (i.e., CAT-ART+, UniCDR, and DRIP). The former are not capable of capturing the relevance among multiple source domains, whereas the multi-domain CDR methods integrate the user preferences of multiple source domains in more advanced ways. In particular, the multi-domain encoder of DRIP contextualizes domain-specific user embeddings over multiple source domains with an attention mechanism that is effective in capturing inter-domain relationships; this brings a significant performance improvement for MDRAU-ST.

Furthermore, unlike the CDR methods for recommending items in a user's unseen domain (i.e., EMCDR+, PTUPCDR+, and DRIP), the CDR baselines that aim to enhance recommendation in the user's seen domains (i.e., CAT-ART+ and UniCDR) have to rely on each user's global embedding shared over multiple domains to predict the user preference for a target unseen domain. This is because they are incapable of inferring a user's embedding for an unseen domain, as their training process aims only at predicting items in the user's seen domains. In contrast, DRIP is good at inferring the user embedding of a target unseen domain, conferred by masked domain modeling. To sum up, DRIP outperforms all existing methods on MDRAU-ST by (1) effectively contextualizing user embeddings over multiple domains using the attention mechanism and (2) accurately inferring a user's preference for a target unseen domain via masked domain modeling.
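For reference, the two ranking metrics reported above (R@K and N@K) can be computed per user as below. This is the generic binary-relevance form; the paper defers its exact evaluation protocol (candidate sets, cutoffs) to its Appendix, so treat this as an assumption about the standard definitions rather than their precise setup.

```python
import math

def recall_at_k(ranked, relevant, k):
    """R@K: fraction of a user's held-out items that appear in the top-K list."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / max(len(relevant), 1)

def ndcg_at_k(ranked, relevant, k):
    """N@K with binary relevance: DCG divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0
```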
Performance Comparison for MDRAU-MT
We also evaluate DRIP and the various methods for MDRAU-MT on the two simulated platforms, P1 and P2. In this task, the set of target domains varies depending on the user, because each user has different unseen domains. For each user, we assess the accuracy of recommendations for the user's multiple unseen domains and report the accuracy averaged over all test users as the final performance. Note that the MDRAU-MT scenario differs from MDRAU-ST in that it evaluates the models' ability to make aggregated recommendations over multiple unseen domains. In this sense, MDRAU-MT requires properly capturing domain-level preference along with item-level preference.

Table 3: Recommendation performance comparison for MDRAU-MT. * denotes that the improvement over the best baseline is statistically significant with p < 0.05, using the paired t-test.

             P1                  P2
Methods      R@20     N@20      R@20     N@20
BPRMF        0.0249   0.0256    0.0528   0.0503
MMOE         0.0188   0.0191    0.0486   0.0513
PLE          0.0190   0.0190    0.0491   0.0516
EMCDR+       0.0362   0.0374    0.0531   0.0525
PTUPCDR+     0.0367   0.0377    0.0546   0.0538
CAT-ART+     0.0395   0.0395    0.0522   0.0497
UniCDR       0.0434   0.0446    0.0588   0.0574
DRIP         0.0545*  0.0556*   0.0780*  0.0773*
Improv.      25.64%   24.55%    32.66%   34.80%

Table 3 presents the overall performance for MDRAU-MT. We observe that DRIP outperforms the best baseline method by a larger margin in MDRAU-MT than in MDRAU-ST. The limited MDRAU-MT performance of the baselines stems from two factors. First, they focus on predicting items in single domains without considering multi-domain preferences, unlike DRIP, which captures both domain- and item-level preferences. Second, most baseline methods need a post-processing step to merge the recommended item lists over multiple target domains, limiting the final accuracy. On the contrary, DRIP optimizes a unified model that makes recommendations for multiple target domains in an end-to-end manner, improving MDRAU-MT performance by reducing the gap between inference and the training process. In conclusion, DRIP achieves the best performance among all the baselines with the help of its item-level preference, accurately predicted for each of the target domains, as well as its capability of inferring domain-level preference obtained through the training process.

Domain-Level Preference Analysis
We analyze the models' ability to capture domain-level preference, which is essential for accurate recommendations in multiple unseen domains (i.e., MDRAU-MT). We compare two domain distributions obtained from (1) a user's interaction history and (2) the recommendation list generated by each method for the user; in this analysis, we assume that the domain distribution in each user's interaction history reflects the user's actual domain-level preferences to a considerable extent. Let $P$ and $Q$ denote the ground-truth distribution in the user history and the predicted distribution, respectively. We calculate the Kullback-Leibler divergence (KLD@K) between the two distributions, $D_{\mathrm{KLD}}(P \,\|\, Q) = \sum_i P_i \log(P_i / Q_i)$, to measure how closely the model's prediction captures the distribution of actual domain preferences. Note that KLD only measures the domain-level accuracy, not the in-domain item prediction accuracy.
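A small sketch of the KLD@K measure used in this analysis is given below; the smoothing constant `eps` is our assumption, added to keep $\log(P_i/Q_i)$ finite when a domain is absent from either distribution.

```python
import numpy as np

def kld_at_k(history_domains, topk_domains, n_domains, eps=1e-8):
    """KLD@K between the domain distribution of a user's history (P) and that of
    the top-K recommendations (Q). Both inputs are lists of integer domain ids."""
    P = np.bincount(history_domains, minlength=n_domains).astype(float)
    Q = np.bincount(topk_domains, minlength=n_domains).astype(float)
    P = (P + eps) / (P + eps).sum()
    Q = (Q + eps) / (Q + eps).sum()
    return float(np.sum(P * np.log(P / Q)))
```

A lower KLD@K means the domain mix of the recommendation list more closely matches the user's historical domain mix.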
In Fig. 3, we assess the KLD@K score for the top-K recommendations and observe the following. (1) In comparison to the other competing methods, BPRMF exhibits significantly better KLD scores. BPRMF treats the union of all domains as a single domain and learns the pair-wise ranking of item-level preference over the entire domains; this makes a user's domain-level preference implicitly captured during the training process. This result can also be interpreted together with the previous performance comparisons for MDRAU-MT/ST: the performance of BPRMF is highly limited in MDRAU-ST due to its limited capability of capturing item-level preferences, yet BPRMF achieves performance comparable to the state-of-the-art methods in MDRAU-MT. We interpret that this improvement mainly comes from its capability of capturing domain-level preferences. (2) The other competing methods show considerably worse KLD scores compared to DRIP. As discussed earlier, they need a post-processing step to generate the unified ranking list encompassing all unseen domains (e.g., z-score normalization), which may not yield optimal recommendation accuracy. This result highlights the importance of holistic model training that considers both domain-level and item-level preferences in the MDRAU task. (3) Among all the methods, DRIP achieves the best KLD scores in both scenarios. This result shows that DRIP indeed effectively captures users' domain-level preferences, and it also supports DRIP's superior performance in MDRAU-MT.

Figure 3: Domain-level preference analysis. KL-divergence scores of each method (best viewed in color).

Design Choice Analysis
We analyze alternative design choices for DRIP in Table 4 to verify the effectiveness of our design. We report the performance for MDRAU-MT on platform scenario P1. First, we compare alternative training paradigms for DRIP: single-domain learning, where all domains are treated as one and the model is trained to maximize the likelihood of the training data; and many-to-one learning, where a single model is trained for each target domain to predict item-level preferences. To generate a unified recommendation list encompassing all domains for the latter, we apply a post-processing step. Post-processing A uses z-normalization, which performed best in our tests (sketched below). We also consider emphasizing active domains in post-processing; specifically, for post-processing B, we multiply the z-normalized scores by the ratio of the total number of interactions in each domain and use the resulting scores for the recommendation.

Table 4: Performance comparison of different design choices. Results for MDRAU-MT on P1.

Designs                                       R@20     N@20
DRIP                                          0.0545   0.0556
Training Paradigm
  Single-domain learning                      0.0281   0.0272
  Many-to-one learning w/ post-processing A   0.0432   0.0447
  Many-to-one learning w/ post-processing B   0.0409   0.0424
Domain Preference Modeling
  Uniform dist.                               0.0122   0.0120
  Domain activeness dist.                     0.0409   0.0419
Masking Scheme
  w/o Adaptive Masking                        0.0513   0.0520

We observe that the alternative training paradigms show considerably degraded performance. Single-domain learning achieves highly limited performance, showing the necessity of proper domain modeling in the MDRAU task. Also, many-to-one learning neglects the domain-level preference during training, and its post-processing is applied independently of the training process, which results in limited MDRAU performance. Further, in our experiments, more sophisticated designs for the post-processing do not bring further improvements. These results support the superiority of our training strategy, which decomposes user preference into domain- and item-level preferences and jointly learns them through a unified model.
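As referenced above, the z-normalization merge of post-processing A can be sketched as follows. This reflects our reading of the baseline procedure (standardize each domain's scores so they are comparable, then rank jointly); the per-domain candidate handling is an assumption.

```python
import numpy as np

def znorm_merge(domain_scores):
    """Post-processing A (as we understand it): z-normalize each domain's raw scores,
    then sort all items from all domains into one unified list.

    domain_scores: dict mapping domain id -> (item_ids, raw_scores as np.ndarray)
    returns: list of (item_id, z_score, domain_id) sorted by z_score, descending
    """
    merged = []
    for dk, (items, scores) in domain_scores.items():
        z = (scores - scores.mean()) / (scores.std() + 1e-8)
        merged.extend(zip(items, z.tolist(), [dk] * len(items)))
    merged.sort(key=lambda t: t[1], reverse=True)
    return merged
```

Post-processing B would additionally scale each domain's z-scores by that domain's share of total interactions before the final sort, which is exactly the step that DRIP replaces with a learned $p(d_k|u; \Theta)$.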
Second, we compare ablations of the domain-level preference modeling. Instead of estimating a personalized domain-level preference, these variants use globally fixed distributions: the uniform distribution and the domain activeness distribution (the latter assumes users prefer more active domains). Both fixed domain-preference approaches yield suboptimal recommendation performance. This result supports the effectiveness of our strategy of modeling the domain-level preference for each individual user, accounting for their differing preferences. Lastly, we provide the results without the adaptive masking; for this ablation, we use random masking with the same masking ratio as adaptive masking. Adaptive masking brings slight improvements to the final performance, indicating that our masking strategy is well aligned with our masked domain modeling.

Conclusion
This paper highlights the importance of the MDRAU task based on its practical advantages in multi-domain platforms. We propose DRIP, a new framework that provides accurate unseen-domain recommendations to attract users into new service domains with which they have not yet interacted. DRIP decomposes user preference into domain-level preference and in-domain item-level preference, and then jointly learns them via a unified model with the help of a training strategy based on masked domain modeling. We conduct extensive comparisons with a wide range of CDR methods. DRIP consistently achieves superior performance compared to all competing methods, in both the case of a specific target domain (MDRAU-ST) and that of multiple target domains (MDRAU-MT). We expect that DRIP can enhance the user experience by fostering diverse and serendipitous discoveries, and can potentially promote the influx of new users to each service domain, benefiting providers in multi-domain service platforms.

Acknowledgements
This work was supported by the IITP grant funded by the MSIT (No.2018-0-00584, No.2019-0-01906) and the NRF grant funded by the MSIT (No.2020R1A2B5B03097210, No.RS-2023-00217286).

References
Bao, H.; Dong, L.; Piao, S.; and Wei, F. 2022. BEiT: BERT Pre-Training of Image Transformers. In International Conference on Learning Representations.
Cao, J.; Li, S.; Yu, B.; Guo, X.; Liu, T.; and Wang, B. 2023. Towards Universal Cross-Domain Recommendation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 78–86.
Cao, J.; Lin, X.; Cong, X.; Ya, J.; Liu, T.; and Wang, B. 2022a. DisenCDR: Learning Disentangled Representations for Cross-Domain Recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 267–277.
Cao, J.; Sheng, J.; Cong, X.; Liu, T.; and Wang, B. 2022b. Cross-Domain Recommendation to Cold-Start Users via Variational Information Bottleneck. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 2209–2223.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186.
Fu, W.; Peng, Z.; Wang, S.; Xu, Y.; and Li, J. 2019. Deeply Fusing Reviews and Contents for Cold Start Users in Cross-Domain Recommendation Systems. In Proceedings of the AAAI Conference on Artificial Intelligence, 94–101.
He, R.; and McAuley, J. 2016. Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, 507–517. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee. ISBN 9781450341431.
He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T.-S. 2017. Neural Collaborative Filtering. In Proceedings of the 26th International Conference on World Wide Web, 173–182.
Hu, G.; Zhang, Y.; and Yang, Q. 2018. CoNet: Collaborative Cross Networks for Cross-Domain Recommendation. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 667–676.
Kang, S.; Hwang, J.; Kweon, W.; and Yu, H. 2020. DE-RRD: A Knowledge Distillation Framework for Recommender System. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 605–614.
Kang, S.; Hwang, J.; Lee, D.; and Yu, H. 2019. Semi-Supervised Learning for Cross-Domain Recommendation to Cold-Start Users. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1563–1572.
Kang, S.; Kweon, W.; Lee, D.; Lian, J.; Xie, X.; and Yu, H. 2023. Distillation from Heterogeneous Models for Top-K Recommendation. In Proceedings of the ACM Web Conference 2023, 801–811.
Kang, S.; Lee, D.; Kweon, W.; Hwang, J.; and Yu, H. 2022. Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering. In Proceedings of the ACM Web Conference 2022, 1965–1976.
Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix Factorization Techniques for Recommender Systems. Computer, 42(8): 30–37.
Li, C.; Xie, Y.; Yu, C.; Hu, B.; Li, Z.; Shu, G.; Qie, X.; and Niu, D. 2023. One for All, All for One: Learning and Transferring User Embeddings for Cross-Domain Recommendation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 366–374.
Ma, J.; Zhao, Z.; Yi, X.; Chen, J.; Hong, L.; and Chi, E. H. 2018. Modeling Task Relationships in Multi-Task Learning with Multi-Gate Mixture-of-Experts. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1930–1939.
Man, T.; Shen, H.; Jin, X.; and Cheng, X. 2017. Cross-Domain Recommendation: An Embedding and Mapping Approach. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 2464–2470.
Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2009. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 452–461.
Singh, A. P.; and Gordon, G. J. 2008. Relational Learning via Collective Matrix Factorization. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 650–658.
Tang, H.; Liu, J.; Zhao, M.; and Gong, X. 2020. Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations. In Proceedings of the 14th ACM Conference on Recommender Systems, 269–278.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems, volume 30.
Wang, X.; He, X.; Wang, M.; Feng, F.; and Chua, T.-S. 2019. Neural Graph Collaborative Filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 165–174.
Zhu, F.; Chen, C.; Wang, Y.; Liu, G.; and Zheng, X. 2019. DTCDR: A Framework for Dual-Target Cross-Domain Recommendation. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1533–1542.
Zhu, F.; Wang, Y.; Chen, C.; Liu, G.; and Zheng, X. 2020. A Graphical and Attentional Framework for Dual-Target Cross-Domain Recommendation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, 3001–3008.
Zhu, F.; Wang, Y.; Chen, C.; Zhou, J.; Li, L.; and Liu, G. 2021a. Cross-Domain Recommendation: Challenges, Progress, and Prospects. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, 4721–4728.
Zhu, F.; Wang, Y.; Zhou, J.; Chen, C.; Li, L.; and Liu, G. 2023. A Unified Framework for Cross-Domain and Cross-System Recommendations. IEEE Transactions on Knowledge and Data Engineering, 35(2): 1171–1184.
Zhu, Y.; Ge, K.; Zhuang, F.; Xie, R.; Xi, D.; Zhang, X.; Lin, L.; and He, Q. 2021b. Transfer-Meta Framework for Cross-Domain Recommendation to Cold-Start Users. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1813–1817.
Zhu, Y.; Tang, Z.; Liu, Y.; Zhuang, F.; Xie, R.; Zhang, X.; Lin, L.; and He, Q. 2022. Personalized Transfer of User Preferences for Cross-Domain Recommendation. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 1507–1515.